As part of its 40th anniversary celebration, the Society for Canadian Women in Science and Technology (SCWIST) is offering a workshop series, which you can attend even if you don’t have a Zoom account or a camera on your computer,
Science Communication Workshop 3: Visual Science Storytelling with Sara ElShafie
Take your slide decks, posters, and figures to a new level! Nothing makes a story more powerful than compelling images. Learn how to create aesthetic and informative visuals that support your science story using principles of graphic design and story art.
Use color, tone, shape, and layout to both convey information and evoke emotion.
Balance precision with accessibility.
Develop a visual progression for any story requiring a series of images (e.g., for a slide deck).
Sara ElShafie is a global change biologist and science storytelling coach. She collaborates with artists in a wide range of industries to uncover the potential of storytelling to engage broad audiences with complex topics. ElShafie is the Founder and Principal of Science Through Story, LLC, dedicated to helping scientists and science educators connect with audiences through effective storytelling. She consults and runs workshops for groups ranging from graduate students to NASA scientists to theme park executives. She also organized a symposium, Science Through Narrative: Engaging Broad Audiences, with speakers from the scientific community as well as the arts and entertainment industries, and curated a resulting published volume of open access papers. ElShafie holds a B.A. in Biological Sciences from the University of Chicago and an M.S. in Earth & Atmospheric Sciences from the University of Nebraska-Lincoln. She is completing her PhD in Integrative Biology with the Museum of Paleontology at the University of California, Berkeley.
Also from the event page, here’s the time and cost,
For anyone who’s curious about SCWIST and this series, there are details on the event page,
Since 1981, SCWIST has made great strides in promoting and empowering women in STEM. When you register, please consider adding a small donation to support our programs so all interested women and girls can see where a future in STEM can take them.
This Science Communication Workshop Series is being hosted by SCWIST using SCALE Project funds from a federal grant by Women and Gender Equality Canada (WAGE), and in partnership with the American Academy of Forensic Sciences (AAFS) Anthropology Section’s ad hoc committee on outreach, with generous funding by Dr. Marin Pilloud and the University of Nevada, Reno.
Our community partners for this series are: WISDOM-Manitoba, Island Women in Science & Technology (IWIST), Westcoast Women in Engineering (WWEST), Students for the Exploration and Development of Space Étudiants pour l’Exploration et le Développement Spatial (SEDS-ÉEDS Canada), Women in Tech World, and Immigrant & International Women in Science (IWS-Network).
Finally, for anyone who’d like to know more about the organization, here’s the SCWIST website.
For the record, this is spider mite silk (I have many posts about spider silk and its possible applications on this blog; just search ‘spider silk’).
The international collaborative team includes a Canadian university in combination with a Spanish university and a Serbian university. The composition of the team is one I haven’t seen here before. From a December 17, 2020 news item on phys.org (Note: A link has been removed),
An international team of researchers has developed a new nanomaterial from the silk produced by the Tetranychus lintearius mite. This nanomaterial has the ability to penetrate human cells without damaging them and, therefore, has “promising biomedical properties”.
The Nature Scientific Reports journal has published, in its latest issue, an article entitled “The silk of gorse spider mite Tetranychus lintearius represents a novel natural source of nanoparticles and biomaterials,” by an international scientific team led by Miodrag Grbić, a researcher from the universities of La Rioja (Spain), Western Ontario (Canada) and Belgrade (Serbia).
In it, researchers from the Murcian Institute for Agricultural and Food Research and Development (IMIDA), the Barcelona Institute of Photonic Sciences, the University of Western Ontario (Canada), the University of Belgrade (Serbia) and the University of La Rioja describe the discovery and characterisation of this mite silk. They also demonstrate its great potential as a source of nanoparticles and biomaterials for medical and technological uses.
The interest in this new material, which is stronger than steel, ultra-flexible, nano-sized, biodegradable, biocompatible and has an excellent ability to penetrate human cells without damaging them, lies in its natural character and its size (a thousand times smaller than a human hair), which facilitates cell penetration.
These characteristics make it ideal for use in pharmacology and biomedicine: it is biocompatible with organic tissues (it stimulates cell proliferation without producing toxicity) and, in principle, biodegradable thanks to its protein structure (it leaves no residues).
Researcher Miodrag Grbić, who heads the international group that has researched this mite silk, highlights “its enormous potential for biomedical applications, as thanks to its size it is able to easily penetrate both healthy and cancerous human cells”, which makes it ideal for transporting drugs in cancer therapies, as well as for the development of biosensors to detect pathogens and viruses.
THE ‘RIOJANO BUG’
Tetranychus lintearius is a mite endemic to the European Atlantic coast that feeds exclusively on gorse (Ulex europaeus). It is around 0.3 mm in size, smaller than a typed comma, and its silk is twice as strong as standard spider silk.
It is a very rare species that has only been found so far in the municipality of Valgañón (La Rioja, Spain), in Sierra de la Demanda. It was located thanks to the collaboration of Rosario García, a botanist and former dean of the Faculty of Science and Technology at the University of La Rioja, which is why researchers call it “the Rioja bug” (“El Bicho Riojano”).
The resistance of the silk produced by Tetranychus lintearius is twice that of spider silk, a standard material used for this type of research, and stronger than steel. It also has advantages over the fibres secreted by the silkworm due to its higher Young’s modulus, its electrical charge and its smaller size. These characteristics, along with its lightness, make it a promising natural nanomaterial for technological uses.
This finding is the result of work carried out by the international group of researchers led by Miodrag Grbić, who sequenced the genome of the red spider mite Tetranychus urticae in 2011, publishing the results in Nature: https://www.nature.com/articles/nature10640.
Unlike the red spider mite (Tetranychus urticae), the gorse mite (Tetranychus lintearius) produces a large amount of silk. It has been reared in the laboratories of the Department of Agriculture and Food of the University of La Rioja, under the care of Professor Ignacio Pérez Moreno, allowing research to continue. Red spider mite silk is difficult to handle and has a lower production rate.
Here’s a link to and a citation for the 2020 paper,
Plans for last year’s FACTT (Festival of Art and Science) 2020 had to be revised at the last minute due to COVID-19. This year, organizers were prepared, so no in-person sessions have to be cancelled or turned into virtual events. Here’s more from the Jan. 25, 2021 announcement I received (via email) from one of the festival partners, the ArtSci Salon at the University of Toronto,
Join us! Opening of FACTT 20-21 Improbable Times!
Thursday, January 28, 2021 at 3:30 PM EST – 5:30 PM EST Public · Anyone on or off Facebook – link will be disseminated closer to the event.
The Arte Institute and the RHI Initiative, in partnership with Cultivamos Cultura, have the pleasure to present the FACTT 2021 – Festival Art & Science. The festival opens on January 28, at 8.30 PM (GMT), and will be exhibited online on RHI Stage.
This year we are reshaping FACTT! Come join us for the kick-off of this amazing project!
A project spearheaded and promoted by the Arte Institute, with production and conception partners Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM, Arte+Ciência and Bioscenica (Mexico), and Central Academy of Fine Arts (China).
Together we will work to bring our ideas and actions into being during 2021!
FACTT 20/21 – Improbable Times presents a series of exceptional artworks jointly curated by Cultivamos Cultura and our partners. The challenge of translating from the physical space that artworks typically occupy into an exhibition that lives as a hybrid experience involves rethinking the materiality of the work itself. It also asks whether we can live and interact with each other, remotely and in person, to produce creative, effective, collaborative outcomes to immerse ourselves in. Improbable Times brings together a collection of works that reflect the times we live in, the constraints we are faced with, and the drive to rethink what tomorrow may bring us, navigate it, and build a better future, beyond borders.
January 28, 2021 | 8:30 PM (GMT)
Program:
– Introduction
– Performance Toronto: void * ambience : Latency, with Joel Ong, Michael Palumbo and Kavi
– Performance Mexico: “El Tercero Cuerpo Sonoro” (Third Sonorous Body), by Arte+Ciência
– Q&A
The performance series void * ambience experiments with sound and video content that is developed through a focus on the topographies and networks through which these flow. Initiated during the time of COVID and social distancing, this project explores processes of information sharing, real-time performance and network communication protocols that contribute to the sustenance of our digital communities, shared experiences and telematic intimacies.
The “El Tercero Cuerpo Sonoro” project is a digital drift that explores different relationships with the environment, nature, humans and non-humans through the formulation of an intersubjective body. Its main aim is to generate resonances with, and among, others.
In these complicated times, in which it seems that our existence unfolds in front of the screen, confined to the space of the black mirror, it becomes urgent to challenge the limits and scope of digital life. We need to rethink the way in which we inhabit others as well as our own subjectivity.
Program:
– Introduction
– Performance Toronto: Proximal Spaces
Artistic Directors: Joel Ong, Elaine Whittaker
Graphic Designers: Natalie Plociennik, Bhavesh Kakwani
AR [augmented reality] development: Sachin Khargie, Ryan Martin
Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
– Performance Mexico: Tercero Cuerpo Sonoro (Third Sonorous Body) by Arte+Ciência
FACTT team: Marta de Menezes, Suzanne Anker, Maria Antonia Gonzalez Valerio, Roberta Buiani, Jo Wei, Dalila Honorato, Joel Ong, Lena Lee and Minerva Ortiz.
For FACTT 20/21 we propose to put together an exhibition where the virtual and the physical share space: a space that is hybrid from its conception, a space that desires to break the limits of access to culture, to collaboration, to the experience of art. A place where we can think deeply and creatively together about the adaptive moves we have had, and still have, to make in response to the rapid and sudden changes our lives and environment are going through.
The Woodrow Wilson International Center for Scholars has planned a US-centric event, in this case, but I think it could be of interest to anyone interested in low-cost and open source tools.
For anyone who’s unfamiliar with the term ‘open source’, as applied to a tool and/or software, it’s pretty much the opposite of a tool/software that’s kept secret by a patent. The Webopedia entry for Open Source Tools defines the term this way (Note: Links have been removed),
Open source tools is a phrase used to mean a program — or tool — that performs a very specific task, in which the source code is openly published for use and/or modification from its original design, free of charge. …
Getting back to the Wilson Center, I received this Jan. 22, 2021 announcement (via email) about their upcoming ‘Low-Cost and Open Source Tools: Next Steps for Science and Policy’ event,
Low-Cost and Open Source Tools: Next Steps for Science and Policy
Monday Feb. 1, 2021 3:30pm – 5:00pm ET
Foldable and 3D printed microscopes are broadening access to the life sciences, low-cost and open microprocessors are supporting research from cognitive neuroscience to oceanography, and low-cost and open sensors are measuring air quality in communities around the world. In these examples and beyond, the things of science – the physical tools that generate data or contribute to scientific processes – are becoming less expensive, and more open.
Recent developments, including the extraordinary response to COVID-19 by maker and DIY communities, have demonstrated the value of low-cost and open source hardware for addressing global challenges. These developments strengthen the capacity of individual innovators and community-based organizations and highlight concrete opportunities for their contribution to science and society. From a policy perspective, work on low-cost and open source hardware, as well as broader open science and open innovation initiatives, has spanned at least two presidential administrations.
When considering past policy and practical developments, where are we today?
With the momentum of a new presidential administration, what are the possible next steps for elevating the value and prominence of low-cost and open source tools? By bringing together perspectives from the general public and public policy communities, this event will articulate the proven potential and acknowledge the present obstacles of making low-cost and open source hardware for science more accessible and impactful.
Alison Parker, Senior Program Associate, Science & Technology Innovation Program, The Wilson Center
3:40 Keynote Speech: Perspectives from the UNESCO Open Science Recommendation
Ana Persic, United Nations Educational, Scientific and Cultural Organization (UNESCO)
3:55 Panel: The progress and promise of low-cost and open tools for accelerating science and addressing challenges
Meghan McCarthy, Program Lead, 3D Printing and Biovisualization, NIH/NIAID at Medical Science & Computing (MSC)
Gerald “Stinger” Guala, Earth Sciences Division, Science Mission Directorate, National Aeronautics and Space Administration (NASA)
Zac Manchester, The Robotics Institute, Carnegie Mellon University (CMU)
Moderator: Anne Bowser, Deputy Director, Science & Technology Innovation Program, The Wilson Center
4:45 Closing Remarks: What’s Next?
Shannon Dosemagen, Open Environmental Data Project
This project is an initiative of the Wilson Center’s THING Tank. From DIY microscopes made from paper and household items, to low cost and open microprocessors supporting research from cognitive neuroscience to oceanography, to low cost sensors measuring air quality in communities around the world, the things of science — that is, the physical tools that generate data or contribute to scientific processes — are changing the way that science happens.
Nanomaterials researchers in Finland, the United States and China have created a color atlas for 466 unique varieties of single-walled carbon nanotubes.
The nanotube color atlas is detailed in a study in Advanced Materials about a new method to predict the specific colors of thin films made by combining any of the 466 varieties. The research was conducted by researchers from Aalto University in Finland, Rice University and Peking University in China.
“Carbon, which we see as black, can appear transparent or take on any color of the rainbow,” said Aalto physicist Esko Kauppinen, the corresponding author of the study. “The sheet appears black if light is completely absorbed by carbon nanotubes in the sheet. If less than about half of the light is absorbed in the nanotubes, the sheet looks transparent. When the atomic structure of the nanotubes causes only certain colors of light, or wavelengths, to be absorbed, the wavelengths that are not absorbed are reflected as visible colors.”
Carbon nanotubes are long, hollow carbon molecules, similar in shape to a garden hose but with sides just one atom thick and diameters about 50,000 times smaller than a human hair. The outer walls of nanotubes are made of rolled graphene. And the wrapping angle of the graphene can vary, much like the angle of a roll of holiday gift wrap paper. If the gift wrap is rolled carefully, at zero angle, the ends of the paper will align with each side of the gift wrap tube. If the paper is wound carelessly, at an angle, the paper will overhang on one end of the tube.
The atomic structure and electronic behavior of each carbon nanotube is dictated by its wrapping angle, or chirality, and its diameter. The two traits are represented in a “(n,m)” numbering system that catalogs 466 varieties of nanotubes, each with a characteristic combination of chirality and diameter. Each (n,m) type of nanotube has a characteristic color.
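The (n,m) indices fix a tube’s diameter and wrapping angle through standard nanotube geometry. As a rough illustration (my sketch, not from the paper), a few lines of Python recover both, using the graphene lattice constant a ≈ 0.246 nm:

```python
import math

A = 0.246  # graphene lattice constant, in nanometres

def diameter_nm(n, m):
    """Diameter of an (n,m) nanotube: d = a * sqrt(n^2 + n*m + m^2) / pi."""
    return A * math.sqrt(n * n + n * m + m * m) / math.pi

def chiral_angle_deg(n, m):
    """Chiral (wrapping) angle: 0 degrees for zigzag (n,0), 30 for armchair (n,n)."""
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

# The (6,5) tube, used later to calibrate the model against Rice's film:
print(round(diameter_nm(6, 5), 3), round(chiral_angle_deg(6, 5), 1))  # 0.747 27.0
```

The 466-variety catalog corresponds to enumerating the (n,m) pairs whose diameters fall in the experimentally relevant range.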
Kauppinen’s research group has studied carbon nanotubes and nanotube thin films for years, and it previously succeeded in mastering the fabrication of colored nanotube thin films that appeared green, brown and silver-grey.
In the new study, Kauppinen’s team examined the relationship between the spectrum of absorbed light and the visual color of various thicknesses of dry nanotube films and developed a quantitative model that can unambiguously identify the coloration mechanism for nanotube films and predict the specific colors of films that combine tubes with different inherent colors and (n,m) designations.
Rice engineer and physicist Junichiro Kono, whose lab solved the mystery of colorful armchair nanotubes in 2012, provided films made solely of (6,5) nanotubes that were used to calibrate and verify the Aalto model. Researchers from Aalto and Peking universities used the model to calculate the absorption of the Rice film and its visual color. Experiments showed that the measured color of the film corresponded quite closely to the color forecast by the model.
The Aalto model shows that the thickness of a nanotube film, as well as the color of nanotubes it contains, affects the film’s absorption of light. Aalto’s atlas of 466 colors of nanotube films comes from combining different tubes. The research showed that the thinnest and most colorful tubes affect visible light more than those with larger diameters and faded colors.
“Esko’s group did an excellent job in theoretically explaining the colors, quantitatively, which really differentiates this work from previous studies on nanotube fluorescence and coloration,” Kono said.
Since 2013, Kono’s lab has pioneered a method for making highly ordered 2D nanotube films. Kono said he had hoped to supply Kauppinen’s team with highly ordered 2D crystalline films of nanotubes of a single chirality.
“That was the original idea, but unfortunately, we did not have appropriate single-chirality aligned films at that time,” Kono said. “In the future, our collaboration plans to extend this work to study polarization-dependent colors in highly ordered 2D crystalline films.”
The experimental method the Aalto researchers used to grow nanotubes for their films was the same as in their previous studies: Nanotubes grow from carbon monoxide gas and iron catalysts in a reactor that is heated to more than 850 degrees Celsius. The growth of nanotubes with different colors and (n,m) designations is regulated with the help of carbon dioxide that is added to the reactor.
“Since the previous study, we have pondered how we might explain the emergence of the colors of the nanotubes,” said Nan Wei, an assistant research professor at Peking University who previously worked as a postdoctoral researcher at Aalto. “Of the allotropes of carbon, graphite and charcoal are black, and pure diamonds are colorless to the human eye. However, now we noticed that single-walled carbon nanotubes can take on any color: for example, red, blue, green or brown.”
Kauppinen said colored thin films of nanotubes are pliable and ductile and could be useful in colored electronics structures and in solar cells.
“The color of a screen could be modified with the help of a tactile sensor in mobile phones, other touch screens or on top of window glass, for example,” he said.
Kauppinen said the research can also provide a foundation for new kinds of environmentally friendly dyes.
Here’s a link to and a citation for the paper,
Colors of Single-Wall Carbon Nanotubes by Nan Wei, Ying Tian, Yongping Liao, Natsumi Komatsu, Weilu Gao, Alina Lyuleeva-Husemann, Qiang Zhang, Aqeel Hussain, Er-Xiong Ding, Fengrui Yao, Janne Halme, Kaihui Liu, Junichiro Kono, Hua Jiang, Esko I. Kauppinen. Advanced Materials DOI: https://doi.org/10.1002/adma.202006395 First published: 14 December 2020
Usually when a company is featured in a news item, there’s some reason why it’s considered newsworthy. Even after reading the article twice, I still don’t see what makes Precision Nanosystems Inc. (PNI) newsworthy.
Kevin Griffin’s Jan. 17, 2021 article about Vancouver area Precision Nanosystems Inc. (PNI) for The Province is interesting for anyone who’s looking for information about members of the local biotechnology and/or nanomedicine community (Note: Links have been removed),
A Vancouver nanomedicine company is part of a team using new genetic technology to develop a COVID-19 vaccine.
Precision NanoSystems Incorporated is working on a vaccine in the same class as the ones made by Pfizer-BioNTech and Moderna, the only two COVID-19 vaccines approved by Health Canada.
PNI’s vaccine is based on a new kind of technology called mRNA, which stands for messenger ribonucleic acid. The mRNA class of vaccines carries genetic instructions to make proteins that trigger the body’s immune system. Once a body has antibodies, it can fight off a real infection when it comes in contact with SARS-CoV-2, the name of the virus that causes COVID-19.
James Taylor, CEO of Precision NanoSystems, said the “revolutionary technology is having an impact not only on the COVID-19 pandemic but also on the treatment of other diseases.”
The federal government has invested $18.2 million in PNI to carry its vaccine candidate through pre-clinical studies and clinical trials.
Ottawa has also invested another $173 million in Medicago, a Quebec City-based company that is developing a virus-like particle vaccine on a plant-based platform and building a large-scale vaccine and antibody production facility. The federal government has an agreement with Medicago to buy up to 76 million doses (enough for 38 million people) of its COVID-19 vaccine.
PNI’s vaccine, which the company is developing with other collaborators, is still at an early, pre-clinical stage.
Taylor is one of the co-founders of PNI along with Euan Ramsay, the company’s chief commercial officer.
The scientific co-founders of PNI are physicist Carl Hansen [emphasis mine] and Pieter Cullis. Cullis is also board chairman and scientific adviser at Acuitas Therapeutics [emphasis mine], the UBC biotechnology company that developed the delivery system for the Pfizer-BioNTech COVID-19 vaccine.
PNI, founded in 2010 as a spin-off from UBC [University of British Columbia], focuses on developing technology and expertise in genetic medicine to treat a wide range of infectious and rare diseases and cancers.
What has been described as PNI’s flagship product is a NanoAssemblr Benchtop Instrument, which allows scientists to develop nanomedicines for testing.
It’s informational but none of this is new, if you’ve been following developments in the COVID-19 vaccine story or local biotechnology scene. The $18.2 million federal government investment was announced in the company’s latest press release dated October 23, 2020. Not exactly fresh news.
One possibility is that the company is trying to generate publicity prior to a big announcement. As to why a reporter would produce this profile, perhaps he was promised an exclusive?
Acuitas Therapeutics, which I highlighted in the excerpt from Griffin’s story, has been featured here before in a November 12, 2020 posting about lipid nanoparticles and their role in the development of the Pfizer-BioNTech COVID-19 vaccine.
Curiously (or not), Griffin didn’t mention Vancouver’s biggest ‘COVID-19 star’, AbCellera. You can find out more about that company in my December 30, 2020 posting titled, Avo Media, Science Telephone, and a Canadian COVID-19 billionaire scientist, which features a link to a video about AbCellera’s work (scroll down about 60% of the way to the subsection titled: Avo Media, The Tyee, and Science Telephone, second paragraph).
The Canadian COVID-19 billionaire scientist? That would be Carl Hansen, Chief Executive Officer and co-founder of AbCellera and co-founder of PNI. It’s such a small world sometimes.
I have two items, both concerning sound but in very different ways.
Phones in your ears
Researchers at the University of Illinois are working on smartphones you can wear in your ears the way you do earbuds. The work is in its very earliest stages as they are trying to establish a new field of research. There is a proposed timeline,
CSL’s [Coordinated Science Laboratory] Systems and Networking Research Group (SyNRG) is defining a new sub-area of mobile technology that they call “earable computing.” The team believes that earphones will be the next significant milestone in wearable devices, and that new hardware, software, and apps will all run on this platform.
“The leap from today’s earphones to ‘earables’ would mimic the transformation that we had seen from basic phones to smartphones,” said Romit Roy Choudhury, professor in electrical and computer engineering (ECE). “Today’s smartphones are hardly a calling device anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”
Instead, the group believes tomorrow’s earphones will continuously sense human behavior, run acoustic augmented reality, have Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.
The research questions that underlie earable computing draw from a wide range of fields, including sensing, signal processing, embedded systems, communications, and machine learning. The SyNRG team is on the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.
Computer science PhD student Zhijian Yang and other members of the SyNRG group, including his fellow students Yu-Lin Wei and Liz Li, are leading the way. They have published a series of papers in this area, starting with one on the topic of hollow noise cancellation that was published at ACM SIGCOMM 2018. Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom) on three different aspects of earables research: facial motion sensing, acoustic augmented reality, and voice localization for earphones.
“If you want to find a store in a mall,” says Zhijian, “the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me.’ In your ears, the sound would appear to come from the direction in which you should walk, as if it’s a voice escort.”
The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones could sense facial and in-mouth activities such as teeth movements and taps, enabling a hands-free modality of communication to smartphones. Moreover, various medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team is planning to look into analyzing facial muscle movements and emotions with earphone sensors.
The third publication, Voice Localization Using Nearby Wall Reflections, investigates the use of algorithms to detect the direction of a sound. This means that if Alice and Bob are having a conversation, Bob’s earphones would be able to tune into the direction Alice’s voice is coming from.
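The paper’s novelty is exploiting wall reflections, but the underlying idea of locating a voice from tiny timing differences between two receivers can be sketched generically. The snippet below is my illustration, not the authors’ algorithm; the 0.18 m ear-to-ear spacing is an assumed value. It estimates direction of arrival from the delay between left and right signals via cross-correlation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, in air
EAR_SPACING = 0.18      # assumed distance between the two earpieces, in metres

def estimate_angle(left, right, sample_rate):
    """Estimate a sound's direction (degrees off centre) from the
    arrival-time difference between the left and right ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # samples by which left lags right
    delay = lag / sample_rate
    # sin(theta) = delay * c / d; clamp so arcsin stays valid for noisy estimates
    s = max(-1.0, min(1.0, delay * SPEED_OF_SOUND / EAR_SPACING))
    return float(np.degrees(np.arcsin(s)))
```

With a clean impulse delayed by ten samples at 48 kHz, this recovers an angle of about 23 degrees; real earphone audio would need filtering and, as the paper shows, cleverer handling of reflections.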
“We’ve been working on mobile sensing and computing for 10 years,” said Wei. “We have a lot of experience to define this emerging landscape of earable computing.”
Haitham Hassanieh, assistant professor in ECE, is also involved in this research. The team has been funded by both NSF [US National Science Foundation] and NIH [National Institutes of Health], as well as companies like Nokia and Google. See more at the group’s Earable Computing website.
As an ambient electronic musician, Yoko Sen spends much of her time building intricate, soothing soundscapes.
But when she was hospitalized in 2012, she found herself immersed in a very different sound environment.
Already panicked by her health condition, she couldn’t tune out the harsh tones of the medical machinery in her hospital room.
Instead, she zeroed in on two machines — a patient monitor and a bed fall alarm. Their piercing tones had blended together to create a diminished fifth, a musical interval so offensive that it was banned in medieval churches.
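The dissonance of that interval is easy to quantify: in equal temperament a diminished fifth (the tritone) spans six semitones, a frequency ratio of 2^(6/12) = √2, an irrational number far from the simple fractions (3:2, 4:3) of consonant intervals. A two-line sketch:

```python
def interval_ratio(semitones):
    """Equal-temperament frequency ratio for a given number of semitones."""
    return 2 ** (semitones / 12)

# A diminished fifth (tritone) spans six semitones
print(round(interval_ratio(6), 4))  # 1.4142, i.e. sqrt(2)
```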
Sen went on to start Sen Sound, a Washington, D.C.-based social enterprise dedicated to improving the sound of hospitals.
‘Alarms are ignored, missed’
The volume of noise in today’s hospitals isn’t just unpleasant. It can also put patients’ health at risk.
According to Judy Edworthy, a professor of applied psychology at the University of Plymouth, the sheer number of alarms going off each day can spark a sort of auditory burnout among doctors and nurses.
“Alarms are ignored, missed, or generally just not paid attention to,” says Edworthy.
In a hospital environment that’s also inundated with announcements from overhead speakers, ringing phones, trolleys, and all manner of other insidious background sounds, it can be difficult for staff to accurately locate and differentiate between the alarms.
The resulting problem has become so widespread that a term has been coined to describe it: alarm fatigue.
Raising the alarm
Sen’s company, launched in 2016, has partnered with hospitals and design incubators and even collaborates directly with medical device companies seeking to redesign their alarms.
Over the years, Sen has interviewed countless patients and hospital staff who share her frustration with noise.
But when she first sat down with the engineers responsible for the devices’ design, she found that they tended to treat the sound of their devices as an “afterthought.”
“When people first started to develop medical devices … people thought it was a good idea to have one or two sounds to demonstrate or to indicate when, let’s say for example, the patient’s temperature … exceeded some kind of range,” she [Edworthy] said.
“There wasn’t really any design put into this; it was just a sound that people thought would get your attention by being very loud and aversive and so on.”
Edworthy, who has spent decades studying medical alarm design, took things one step further this summer. In July, the International Organization for Standardization (ISO) approved a new set of alarm designs, created by Edworthy, that mimic the natural hospital environment.
The standards, which are accepted by Health Canada, include an electronic heartbeat sound for alarms related to cardiac issues and a rattling pillbox sound for drug administration.
Her [Sen’s] team continues to work with companies to improve the sound of existing medical devices. But she has also begun to think more deeply about the long-term future of hospital sound — especially as it relates to the end-of-life experience.
“A study shows that hearing can be the last sense to go when we die,” says Sen.
“It’s really beyond upsetting to think that many people end up dying in acute care hospitals and there are all these medical devices.”
As part of her interviews with patients, Sen has asked what sounds they would most like to hear at the end of their lives — and she discovered a common theme in their responses.
“I asked this question in many different countries, but they are all sounds that symbolize life,” said Sen.
“Sounds of nature, sound of water, voices of loved ones. It’s all the sounds of life that people say they want to hear.”
As the pandemic continues to affect hospitals around the world, those efforts have taken on a new resonance — and Sen hopes the current crisis might serve as an opportunity to help usher in a more healing soundscape.
“My own health crisis almost gave me a new pathway in life,” she said.
There’s a lot of arsenic in the world, and it’s often a factor in making water undrinkable. When that water is used in farming, it also pollutes soil and enters food-producing plants. A December 11, 2020 news item on Nanowerk announces research into plant-based arsenic sensors,
Researchers have developed a living plant-based sensor that can in real-time detect and monitor levels of arsenic, a highly toxic heavy metal, in the soil. Arsenic pollution is a major threat to humans and ecosystems in many Asia Pacific countries.
Scientists from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have engineered a novel type of plant nanobionic optical sensor that can detect and monitor, in real time, levels of the highly toxic heavy metal arsenic in the underground environment. This development provides significant advantages over conventional methods used to measure arsenic in the environment and will be important for both environmental monitoring and agricultural applications to safeguard food safety, as arsenic is a contaminant in many common agricultural products such as rice, vegetables, and tea leaves.
Arsenic and its compounds are a serious threat to humans and ecosystems. Long-term exposure to arsenic in humans can cause a wide range of detrimental health effects, including cardiovascular disease such as heart attack, diabetes, birth defects, severe skin lesions, and numerous cancers including those of the skin, bladder, and lung. Elevated levels of soil arsenic as a result of anthropogenic activities such as mining and smelting are also harmful to plants, inhibiting growth and resulting in substantial crop losses.
Food crops can absorb arsenic from the soil, leading to contamination of food and produce consumed by humans. Arsenic in underground environments can also contaminate groundwater and other underground water sources, the long-term consumption of which can cause severe health issues. As such, developing accurate, effective, and easy-to-deploy arsenic sensors is important to protect both the agriculture industry and wider environmental safety.
The novel optical nanosensors exhibit changes in their fluorescence intensity upon detecting arsenic. Embedded in plant tissues, with no detrimental effects on the plant, these sensors provide a nondestructive way to monitor the internal dynamics of arsenic taken up by plants from the soil. This integration of optical nanosensors within living plants enables the conversion of plants into self-powered detectors of arsenic from their natural environment, marking a significant upgrade over the time- and equipment-intensive sampling that current conventional methods require.
“Our plant-based nanosensor is notable not only for being the first of its kind, but also for the significant advantages it confers over conventional methods of measuring arsenic levels in the below-ground environment, requiring less time, equipment, and manpower,” says Lew. “We envision that this innovation will eventually see wide use in the agriculture industry and beyond. I am grateful to SMART DiSTAP and the Temasek Life Sciences Laboratory (TLL), both of which were instrumental in idea generation and scientific discussion as well as research funding for this work.”
Besides detecting arsenic in rice and spinach, the team also used a species of fern, Pteris cretica, which can hyperaccumulate arsenic. This fern species can absorb and tolerate high levels of arsenic with no detrimental effects, which allowed the team to engineer an ultrasensitive plant-based arsenic detector capable of sensing concentrations as low as 0.2 parts per billion. In contrast, the regulatory limit for arsenic in drinking water is 10 parts per billion. Notably, the novel nanosensors can also be integrated into other species of plants. The researchers say this is the first successful demonstration of living plant-based sensors for arsenic and represents a groundbreaking advancement that could prove highly useful in both agricultural research (e.g., to monitor arsenic taken up by edible crops for food safety) and general environmental monitoring.
Previously, conventional methods of measuring arsenic levels included regular field sampling, plant tissue digestion, extraction, and analysis using mass spectrometry. These methods are time-consuming, require extensive sample treatment, and often involve the use of bulky and expensive instrumentation. The new approach couples nanoparticle sensors with plants’ natural ability to efficiently extract analytes via the roots and transport them. This allows for the detection of arsenic uptake in living plants in real time, using inexpensive electronics such as a portable Raspberry Pi platform equipped with a charge-coupled device camera akin to a smartphone camera.
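The readout described above amounts to a camera watching a region of plant tissue for changes in the nanosensors’ fluorescence intensity. The following is a minimal, purely illustrative sketch of that kind of pipeline; the region-of-interest coordinates, baseline window, and threshold are assumptions for illustration, not the researchers’ actual analysis, and the camera frames are simulated here with random arrays.

```python
# Illustrative sketch: track mean fluorescence intensity in a region of
# interest (ROI) across camera frames and flag significant changes.
# All parameters below are assumptions, not the published pipeline.

import numpy as np

def roi_intensity(frame, roi):
    """Mean pixel intensity inside a region of interest.
    frame: 2-D array of pixel values; roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    return float(frame[r0:r1, c0:c1].mean())

def detect_change(intensities, baseline_n=5, threshold=0.2):
    """Flag frames whose ROI intensity deviates from the baseline mean
    by more than `threshold` (as a fraction) -- a crude stand-in for
    'the nanosensor's fluorescence changed on arsenic uptake'."""
    baseline = float(np.mean(intensities[:baseline_n]))
    return [i for i, v in enumerate(intensities)
            if abs(v - baseline) / baseline > threshold]

# Simulated frame series: intensity dips sharply after frame 7.
rng = np.random.default_rng(0)
frames = [rng.normal(100, 1, (64, 64)) for _ in range(7)]
frames += [rng.normal(70, 1, (64, 64)) for _ in range(3)]
series = [roi_intensity(f, (16, 48, 16, 48)) for f in frames]
flagged = detect_change(series)
print(flagged)  # indices of frames whose intensity departs from baseline
```

On a real Raspberry Pi the simulated frames would be replaced by captures from the camera module; everything else in the sketch is generic image arithmetic.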
Co-author, DiSTAP co-lead principal investigator, and MIT Professor Michael Strano adds, “This is a hugely exciting development, as, for the first time, we have developed a nanobionic sensor that can detect arsenic — a serious environmental contaminant and potential public health threat. With its myriad advantages over older methods of arsenic detection, this novel sensor could be a game-changer, as it is not only more time-efficient, but also more accurate and easier to deploy than older methods. It will also help plant scientists in organizations such as TLL to further produce crops that resist uptake of toxic elements. Inspired by TLL’s recent efforts to create rice crops which take up less arsenic, this work is a parallel effort to further support SMART DiSTAP’s efforts in food security research, constantly innovating and developing new technological capabilities to improve Singapore’s food quality and safety.”
The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.
Led by MIT’s Strano and Singapore co-lead principal investigator Professor Chua Nam Hai, DiSTAP is one of the five Interdisciplinary Research Groups (IRGs) in SMART. The DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biosynthetic technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients. Scientists from MIT, TLL, Nanyang Technological University, and the National University of Singapore are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding, and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high yield density production, drought and pathogen resistance, and biosynthesis of high-value commercial products; developing tools for producing hydrophobic food components in industry-relevant microbes; developing novel microbial and enzymatic technologies to produce volatile organic compounds that can protect and/or promote growth of leafy vegetables; and applying these technologies to improve urban farming.
“The biohybrid retina is a cell therapy for the reconstruction of the damaged retina by implanting healthy cells in the patient’s eye,” says Fivos Panetsos, director of the Neuro-computation and Neuro-robotics Group of the UCM and member of the Institute of Health Research of the Hospital Clínico San Carlos de Madrid (IdISSC).
The cells of the artificial retina adhere to very thin silk fibroin biofilms – a biomaterial 100% biocompatible with human tissue – and are covered by a gel that protects them during eye surgery and allows them to survive for the time they need to integrate with the surrounding tissue after transplantation.
“The transplanted retina also contains mesenchymal cells that function as producers of neuroprotective and neuroreparative molecules and facilitate functional integration between implanted and patient cells”, adds the UCM researcher and director of the study, published in the Journal of Neural Engineering.
One more step toward solving a problem that affects more than 196 million people
To build this artificial retina, researchers have developed silk fibroin films with mechanical characteristics similar to Bruch’s membrane – the layer of cells that supports the neural retina. Then, they have biofunctionalized them so that retinal cells could adhere, and on them they have grown epithelial and neural cells. Finally, they have carried out an in vitro study of the structural and functional characteristics of the biohybrid.
Age-Related Macular Degeneration (AMD) is a neurodegenerative disease that causes a progressive loss of central vision and even blindness in its most advanced stage. Triggered by heterogeneous, complex and still poorly understood mechanisms, it is the leading cause of irreversible vision loss in people over 65 years of age and affects more than 196 million people worldwide.
AMD is an incurable disease, and current treatments can only alleviate symptoms and slow down the progression of the disease. “This research is an important step towards solving the problem of blindness faced by AMD patients”, concludes Panetsos.
Teaching grammar and syntax to artificial intelligence (AI) algorithms, specifically natural language processing (NLP) algorithms, has helped researchers understand and predict viral mutations more quickly. This capability is especially useful at a time when the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) seems to be mutating into more easily transmissible variants.
Will Douglas Heaven’s Jan. 14, 2021 article for the Massachusetts Institute of Technology’s MIT Technology Review describes the work that links AI, grammar, and mutating viruses (Note: Links have been removed),
Galileo once observed that nature is written in math. Biology might be written in words. Natural-language processing (NLP) algorithms are now able to generate protein sequences and predict virus mutations, including key changes that help the coronavirus evade the immune system.
The key insight making this possible is that many properties of biological systems can be interpreted in terms of words and sentences. “We’re learning the language of evolution,” says Bonnie Berger, a computational biologist at the Massachusetts Institute of Technology [MIT].
In the last few years, a handful of researchers—including teams from geneticist George Church’s [Professor of Health Sciences and Technology at Harvard University and MIT, etc.] lab and Salesforce [emphasis mine]—have shown that protein sequences and genetic codes can be modeled using NLP techniques.
In a study published in Science today, Berger and her colleagues pull several of these strands together and use NLP to predict mutations that allow viruses to avoid being detected by antibodies in the human immune system, a process known as viral immune escape. The basic idea is that the interpretation of a virus by an immune system is analogous to the interpretation of a sentence by a human.
Berger’s team uses two different linguistic concepts: grammar and semantics (or meaning). The genetic or evolutionary fitness of a virus—characteristics such as how good it is at infecting a host—can be interpreted in terms of grammatical correctness. A successful, infectious virus is grammatically correct; an unsuccessful one is not.
Similarly, mutations of a virus can be interpreted in terms of semantics. Mutations that make a virus appear different to things in its environment—such as changes in its surface proteins that make it invisible to certain antibodies—have altered its meaning. Viruses with different mutations can have different meanings, and a virus with a different meaning may need different antibodies to read it.
Instead of millions of sentences, they trained the NLP model on thousands of genetic sequences taken from three different viruses: 45,000 unique sequences for a strain of influenza, 60,000 for a strain of HIV, and between 3,000 and 4,000 for a strain of SARS-CoV-2, the virus that causes covid-19. “There’s less data for the coronavirus because there’s been less surveillance,” says Brian Hie, a graduate student at MIT, who built the models.
The overall aim of the approach is to identify mutations that might let a virus escape an immune system without making it less infectious—that is, mutations that change a virus’s meaning without making it grammatically incorrect.
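As a rough illustration of the idea Heaven describes – rank candidate mutations by high “semantic change” combined with high “grammaticality” – here is a toy sketch. It substitutes a simple bigram model and a k-mer-count embedding for the neural language model the MIT team actually trained; the example sequences, scoring functions, and rank-combination scheme are illustrative assumptions only, not the published method.

```python
# Toy sketch of "rank mutants by semantic change AND grammaticality".
# A bigram model stands in for grammaticality (sequence likelihood);
# a k-mer count vector stands in for the semantic embedding.

from collections import Counter
import math

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def train_bigram(corpus):
    """Return a log-probability function from add-one-smoothed bigrams."""
    counts, totals = Counter(), Counter()
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    def logp(seq):
        return sum(math.log((counts[(a, b)] + 1) / (totals[a] + len(ALPHABET)))
                   for a, b in zip(seq, seq[1:]))
    return logp

def embed(seq, k=2):
    """Crude 'semantic' embedding: normalized k-mer counts."""
    c = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(c.values())
    return {kmer: v / n for kmer, v in c.items()}

def semantic_change(e1, e2):
    """L1 distance between two embeddings."""
    return sum(abs(e1.get(k, 0) - e2.get(k, 0)) for k in set(e1) | set(e2))

def rank_mutants(wild_type, logp):
    """Score every single-site mutant; combine the ranks of the two
    scores so top results change 'meaning' while staying 'grammatical'."""
    base = embed(wild_type)
    mutants = []
    for i, aa in enumerate(wild_type):
        for m in ALPHABET:
            if m != aa:
                mut = wild_type[:i] + m + wild_type[i + 1:]
                mutants.append((mut, logp(mut), semantic_change(base, embed(mut))))
    by_gram = {m: r for r, (m, _, _) in enumerate(sorted(mutants, key=lambda t: t[1]))}
    by_sem = {m: r for r, (m, _, _) in enumerate(sorted(mutants, key=lambda t: t[2]))}
    return sorted(mutants, key=lambda t: by_gram[t[0]] + by_sem[t[0]], reverse=True)

# Hypothetical mini-corpus of related sequences (illustration only).
corpus = ["MKTAYIAKQR", "MKTAYIAKQK", "MKTAHIAKQR"]
ranked = rank_mutants("MKTAYIAKQR", train_bigram(corpus))
print(ranked[0][0])  # top-ranked candidate "escape" mutant
```

The real models operate on tens of thousands of sequences with learned embeddings, but the ranking logic sketched here mirrors the stated goal: mutations that change a virus’s meaning without making it grammatically incorrect rise to the top.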
But it’s also just the beginning. Treating genetic mutations as changes in meaning could be applied in different ways across biology. “A good analogy can go a long way,” says Bryson [Bryan Bryson, a biologist at MIT].
If you have time, I recommend reading Heaven’s Jan. 14, 2021 article in its entirety as it’s well written with clear explanations. As for the article’s mentions of George Church and Salesforce, the former could be expected while the latter was not (by me, at least; I speak for no one else).
I find it fascinating that a company which describes itself (from What is Salesforce?) as providing “… customer relationship management, or CRM. It gives all your departments — including marketing, sales, commerce, and service — a shared view of your customers … ” seems to be conducting investigations into one (or more?) areas of biology.
For those who’d like to dive into the science as described in Heaven’s article, here’s a link to and a citation for the paper,