Tag Archives: Johns Hopkins University

Nanoscale tattoos for individual cells

It’s fascinating to read about a technique for applying ‘tattoos’ to living cells, and I have two news items, each with an accompanying news release, offering different perspectives on this research.

First out the door was the August 7, 2023 news item on ScienceDaily,

Engineers have developed nanoscale tattoos — dots and wires that adhere to live cells — in a breakthrough that puts researchers one step closer to tracking the health of individual cells.

The new technology allows for the first time the placement of optical elements or electronics on live cells with tattoo-like arrays that stick on cells while flexing and conforming to the cells’ wet and fluid outer structure.

“If you imagine where this is all going in the future, we would like to have sensors to remotely monitor and control the state of individual cells and the environment surrounding those cells in real time,” said David Gracias, a professor of chemical and biomolecular engineering at Johns Hopkins University who led the development of the technology. “If we had technologies to track the health of isolated cells, we could maybe diagnose and treat diseases much earlier and not wait until the entire organ is damaged.”

An August 7, 2023 Johns Hopkins University news release (also on EurekAlert), which originated the news item, describes the research in an accessible fashion before delving into technical details,

Gracias, who works on developing biosensor technologies that are nontoxic and noninvasive for the body, said the tattoos bridge the gap between living cells or tissue and conventional sensors and electronic materials. They’re essentially like barcodes or QR codes, he said.

“We’re talking about putting something like an electronic tattoo on a living object tens of times smaller than the head of a pin,” Gracias said. “It’s the first step towards attaching sensors and electronics on live cells.”

The structures were able to stick to soft cells for 16 hours even as the cells moved.

The researchers built the tattoos in the form of arrays with gold, a material known for its ability to prevent signal loss or distortion in electronic wiring. They attached the arrays to cells that make and sustain tissue in the human body, called fibroblasts. The arrays were then treated with molecular glues and transferred onto the cells using an alginate hydrogel film, a gel-like laminate that can be dissolved after the gold adheres to the cell. The molecular glue on the array bonds to a film secreted by the cells called the extracellular matrix.

Previous research has demonstrated how to use hydrogels to stick nanotechnology onto human skin and internal animal organs. By showing how to adhere nanowires and nanodots onto single cells, Gracias’ team is addressing the long-standing challenge of making optical sensors and electronics compatible with biological matter at the single cell level. 

“We’ve shown we can attach complex nanopatterns to living cells, while ensuring that the cell doesn’t die,” Gracias said. “It’s a very important result that the cells can live and move with the tattoos because there’s often a significant incompatibility between living cells and the methods engineers use to fabricate electronics.”

The team’s ability to attach the dots and wires in an array form is also crucial. To use this technology to track bioinformation, researchers must be able to arrange sensors and wiring into specific patterns not unlike how they are arranged in electronic chips. 

“This is an array with specific spacing,” Gracias explained, “not a haphazard bunch of dots.”

The team plans to try to attach more complex nanocircuits that can stay in place for longer periods. They also want to experiment with different types of cells.

Other Johns Hopkins authors are Kam Sang Kwok, Yi Zuo, Soo Jin Choi, Gayatri J. Pahapale, and Luo Gu.

This looks more like a sea creature to me but it’s not,

Caption: False-colored gold nanodot array on a fibroblast cell. Credit: Kam Sang Kwok and Soo Jin Choi, Gracias Lab/Johns Hopkins University. [The measurement symbol in the lower right corner of the image, what looks like a ‘u’ with a preceding tail, is ‘µ’ for micro, i.e., one millionth; add that to the ‘m’ and you have what’s commonly described as one micrometre.]

An August 10, 2023 news item on ScienceDaily offers a different perspective from the American Chemical Society (ACS) on this research,

For now, cyborgs exist only in fiction, but the concept is becoming more plausible as science progresses. And now, researchers are reporting in ACS’ Nano Letters that they have developed a proof-of-concept technique to “tattoo” living cells and tissues with flexible arrays of gold nanodots and nanowires. With further refinement, this method could eventually be used to integrate smart devices with living tissue for biomedical applications, such as bionics and biosensing.

An August 10, 2023 ACS news release (also on EurekAlert), which originated the news item, explains some of the issues with attaching electronics to living tissue,

Advances in electronics have enabled manufacturers to make integrated circuits and sensors with nanoscale resolution. More recently, laser printing and other techniques have made it possible to assemble flexible devices that can mold to curved surfaces. But these processes often use harsh chemicals, high temperatures or pressure extremes that are incompatible with living cells. Other methods are too slow or have poor spatial resolution. To avoid these drawbacks, David Gracias, Luo Gu and colleagues wanted to develop a nontoxic, high-resolution, lithographic method to attach nanomaterials to living tissue and cells.

The team used nanoimprint lithography to print a pattern of nanoscale gold lines or dots on a polymer-coated silicon wafer. The polymer was then dissolved to free the gold nanoarray so it could be transferred to a thin piece of glass. Next, the gold was functionalized with cysteamine and covered with a hydrogel layer, which, when peeled away, removed the array from the glass. The patterned side of this flexible array/hydrogel layer was coated with gelatin and attached to individual live fibroblast cells. In the final step, the hydrogel was degraded to expose the gold pattern on the surface of the cells. The researchers used similar techniques to apply gold nanoarrays to sheets of fibroblasts or to rat brains. Experiments showed that the arrays were biocompatible and could guide cell orientation and migration.

The researchers say their cost-effective approach could be used to attach other nanoscale components, such as electrodes, antennas and circuits, to hydrogels or living organisms, thereby opening up opportunities for the development of biohybrid materials, bionic devices and biosensors.

The authors acknowledge funding from the Air Force Office of Scientific Research, the National Institute on Aging, the National Science Foundation and the Johns Hopkins University Surpass Program.

Here’s a link to and a citation for the paper,

Toward Single Cell Tattoos: Biotransfer Printing of Lithographic Gold Nanopatterns on Live Cells by Kam Sang Kwok, Yi Zuo, Soo Jin Choi, Gayatri J. Pahapale, Luo Gu, and David H. Gracias. Nano Lett. 2023, 23, 16, 7477–7484. DOI: https://doi.org/10.1021/acs.nanolett.3c01960 Publication Date: August 1, 2023. Copyright © 2023 American Chemical Society

This paper is behind a paywall.

Nanoparticle drug delivery could reduce rejection rates for corneal transplants

I like pictures of happy researchers and, as these pictures go, the researchers seem pretty relaxed,

Caption: Qingguo Xu, D.Phil., associate professor of pharmaceutics and ophthalmology at VCU School of Pharmacy, (right) in the lab with Tuo Meng, Ph.D., (left) and Vineet Kulkarni. (School of Pharmacy) Credit: VCU School of Pharmacy

A March 23, 2023 Virginia Commonwealth University (VCU) news release (also on EurekAlert) announces work into making corneal transplants more successful, Note: A link has been removed,

Corneal transplants can be the last step to returning clear vision to many patients suffering from eye disease. Each year, approximately 80,000 corneal transplantations take place in the U.S. Worldwide, more than 184,000 corneal transplantation surgeries are performed annually. 

However, rejection rates for the corneal grafts can be as high as 10%. This is largely due to poor patient compliance to the medications, which require frequent administrations of topical eyedrops over a long period of time. 

This becomes especially acute when patients show signs of early rejection of the transplanted corneas. When this occurs, patients need to apply topical eyedrops [sic] hourly to rescue the corneal grafts from failure. 

The tedious process of eyedrop [sic] dosing causes a tremendous burden for patients. The resulting noncompliance to medication treatment can lead to even higher graft-rejection rates. 

Research led by a team at Virginia Commonwealth University may make the corneal grafts more successful by using nanoparticles to encapsulate the medication. The novel approach could significantly improve patient compliance, according to a paper recently published in Science Advances, “Six-month effective treatment of corneal graft rejection.”

Each nanoparticle encapsulates a drug called dexamethasone sodium phosphate, one of the most commonly used corticosteroids for the treatment of various ocular diseases such as ocular inflammation, non-infectious uveitis, macular edema and corneal neovascularization. By using the nanoparticles to control the release of the medicine over time, patients would require only one injection right after the corneal transplantation surgery, without the frequent eye drops. Our studies have shown that, using this method, the medication maintains its efficacy for six months in a corneal graft rejection model.
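The news release doesn’t describe the release kinetics themselves, but as a rough illustration of what ‘controlled release over six months’ can mean, here is a minimal sketch assuming simple first-order release with a made-up rate constant; the actual kinetics of the VCU/Johns Hopkins formulation are not given in the release.

```python
import numpy as np

# Minimal sketch: first-order release from a nanoparticle depot.
# The rate constant k is a hypothetical value chosen so that ~95%
# of the payload is released over ~180 days (an assumption, not
# the paper's measured profile).
k = 3.0 / 180.0  # per day (assumed)
days = np.arange(0, 181, 30)
fraction_released = 1.0 - np.exp(-k * days)
for t, f in zip(days, fraction_released):
    print(f"day {t:3d}: {f:5.1%} of dose released")
```

Real ocular formulations rarely follow a single exponential, but the shape illustrates why a one-time injection can stand in for months of eyedrop dosing.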

In addition, because the medicine is released slowly and directly where it is most needed, the approach requires much lower doses than current standard eyedrop treatment while providing better efficacy and safety profiles.

Qingguo Xu, D.Phil., the principal investigator of this project and an associate professor of pharmaceutics and ophthalmology at VCU School of Pharmacy, collaborated with Justin Hanes, Ph.D., the Lewis J. Ort Professor of Ophthalmology at Johns Hopkins University.

Xu said, “To improve patient compliance and treatment efficacy, we developed a tiny nanoparticle (around 200 nanometers) that in animal studies enables the release of the drug up to six months after a single subconjunctival injection along the eyeball.”

Tuo Meng, Ph.D., who worked on the project as a doctoral student at VCU and is the first author of this paper, said: “In our preclinical corneal graft rejection model, the single dosing of the nanoparticle successfully prevented corneal graft rejection for six months.” 

More importantly, the nanoparticle approach reversed signs of early rejection and maintained corneal grafts for six months without rejection. 

This work was supported by the National Eye Institute, National Institutes of Health, through the R01 grant R01EY027827. 

Xu’s lab focuses on developing nanotherapeutics for safer and more effective treatment of various eye diseases.

Here’s a link to and a citation for the paper,

Six-month effective treatment of corneal graft rejection by Tuo Meng, Jinhua Zheng, Min Chen, Yang Zhao, Hadi Sudarjat, Aji Alex M.R., Vineet Kulkarni, Yumin Oh, Shiyu Xia, Zheng Ding, Hyounkoo Han, Nicole Anders, Michelle A. Rudek, Woon Chow, Walter Stark, Laura M. Ensign, Justin Hanes, and Qingguo Xu. Science Advances 22 Mar 2023 Vol 9, Issue 12 DOI: 10.1126/sciadv.adf4608

This paper is open access.

12th World Conference of Science Journalists in Medellín, Colombia from March 27-31, 2023

I very rarely get a chance to feature science from Latin America and the Caribbean, largely due to my lack of Spanish, Portuguese, or Dutch language skills. So, you might say I’m desperate to find something, which explains, at least in part, why I’m posting about the 12th World Conference of Science Journalists (WCSJ).

A March 29, 2023 WCSJ press release (also on EurekAlert but published March 28, 2023) describes the opening day of the 2023 conference,

The opening day [March 27, 2023] of the World Conference of Science Journalists (WCSJ) 2023 in Medellín, Colombia saw hundreds of journalists from 62 countries come together in the stunning setting of the city’s Jardin Botanico.

Over 500 attendees will gather over three days to discuss science journalism, to challenge ideas and to reinforce their professional networks and friendships. 

The day began with a keynote on biodiversity delivered by Brigitte Baptiste, a Colombian biologist and expert in biodiversity issues. And it closed with an opening ceremony and vibrant social event for attendees.

Both took place under open skies in the Jardin’s orquideorama, an open-air meshwork of flower-tree structures surrounded by trees and butterflies, with a backdrop of birdsong.

Two other plenaries focused on scientific advice and news from Amazonia. The morning’s parallel panels covered Latin American and international collaboration, with discussions from Latin American women researchers, reporting on science, health and the environment in the region, and what the world can learn from Latin America and the Caribbean’s early warning alert systems. The afternoon saw discussions on COVID-19, popular science writing and astronomy.

The conference continues until Friday when there are scientific tours and excursions that provide the opportunity to visit local research teams and find out more about science in the region.

According to WWF, Colombia is the most biodiverse country per square kilometre in the world. It is also the country with the largest number of bird species — over 1,900 — and the greatest number of butterfly species — over 3,600, or 20% of the world’s butterfly species.

Milica Momcilovic, President of the World Federation of Science Journalists said: “Independent journalism is the lifeblood of democracy and our focus at the Federation is, and will continue to be, supporting independent science journalism around the world. I have seen first hand how talented science journalists can change the world for the better and during this conference they will tell us these stories in person.”

Ximena Serrano Gil, Director of the Medellín conference said: “Colombia and Medellin are a biodiversity hotspot, an unrivalled laboratory for helping other nations adapt to climate change, a model for how to feed populations in rapidly changing tropical environments, and a cultural repository where thousands of years of indigenous peoples’ knowledge can make a lasting contribution to the wisdom of future generations.”

She continued: “The opportunity to share ideas and collaborate with others is invaluable and we must continue to create platforms that facilitate these interactions. I hope that other places in the global south will have the opportunity to host the WCSJ.” 

Over the past two decades, the World Federation of Science Journalists (WFSJ) has mounted the WCSJ every other year. The event has been held in cities across the globe, and the current edition in Medellín, Colombia, was postponed from 2021 because of the COVID-19 pandemic. Each gathering lasts about a week and attracts hundreds of participants from the WFSJ membership, including some 10,000 science writers in 51 countries.

This conference has been put together with a specific focus on the global south and on amplifying new voices from science journalist communities.

The programme has something that interests me, a talk on brain organoids, according to a March 17, 2023 WCSJ press release. Note: Links have been removed,

Food security, organoid intelligence, local tours and scientific excursions

Plenary: Challenges to food security in the face of global catastrophe risks

In times of crisis and global risks, very few issues have as many factors feeding into them as food security. The integrative measures envisaged by various global players link the actions that are needed to meet the challenges we face. These should be considered in terms of technology, economics and security to ensure the future of food security, but also in terms of how science validates the environmental impact and guarantees the viability of the processes.

Jennifer Wiegel is the Sub Regional Manager for Central America and a scientist in the Food Environment and Consumer Behavior research area of the Alliance of Bioversity International and CIAT [International Center for Tropical Agriculture]. Her research includes work on agri-food systems, food markets and value chains for inclusion and sustainability and public procurement. She has a Ph.D. in Sociology from the University of Wisconsin-Madison and a Master’s in Rural Sociology from the same University.

Juan Fernando Zuluaga is the National Territorial Coordinator for Antioquia. He has a PhD in Social Sciences from the University of Antioquia and a Master’s in Rural Economics from the Federal University of Ceará-Brazil. Juan is a specialist in finance from the Latin American Autonomous University and an Agricultural Engineer from the National University of Medellín.

Thomas Hartung, MD, PhD, is a professor of environmental health sciences at the Johns Hopkins Bloomberg School of Public Health and Whiting School of Engineering and Professor for Pharmacology and Toxicology at the University of Konstanz, Germany. He is leading the revolution in toxicology to move away from 50+ year old animal testing to organoid cultures and the use of artificial intelligence.

New keynote

Climate change: How to embroider the risks that put the stability of the most vulnerable at risk

Paola Andrea Arias Gómez is Professor of the Environmental School of the Faculty of Engineering of the University of Antioquia. In 2021 she was El Espectador’s Person of the Year and received the Medellin Council’s Orchid Award for Scientific Merit.

Paola completed her undergraduate studies in Civil Engineering and a Master’s degree in Water Resources Development at the National University of Colombia, Medellin. She was Head of the Environmental School of the Faculty of Engineering of the University of Antioquia and is now a member of the First Working Group of the Intergovernmental Panel on Climate Change (IPCC). She is also a member of the GEWEX Hydroclimatology Panel (GHP), the Amazon Regional Hydrogeomorphology Working Group (UNESCO) and the WCRP Science Plan Development Team (WCRP) Lighthouse Activities – My Climate Risk.

Parallel session:

In conversation: “Organoid intelligence”: the future of modern computing from human brain cells. [sic]

Biocomputing is a huge effort to compact computational power and increase its efficiency to overcome current technological limits. Researchers at Johns Hopkins delve into this technology that may one day produce computers that are faster, more efficient and more powerful than silicon-based computing and AI.

Thomas Hartung, MD, PhD. will present the team’s latest research and discuss its context, implications and what his hopes are for the field. 

Thomas Hartung is the Director of Centers for Alternatives to Animal Testing (CAAT, http://caat.jhsph.edu) of both universities. CAAT hosts the secretariat of the Evidence-based Toxicology Collaboration (http://www.ebtox.org) and manages collaborative programs on Good Read-Across Practice, Good Cell Culture Practice, Green Toxicology, Developmental Neurotoxicity, Developmental Immunotoxicity, Microphysiological Systems and Refinement.

I found another intriguing session (Story Corner: “Fusion Energy and Climate Change – The Conversation begins” by ITER) which was held on Tuesday, March 28, 2023 at 9:30 – 10:00 am during the coffee break. (For more about fusion energy, see my October 28, 2022 posting “Overview of fusion energy scene“.)

While it’s too late to sign up for the conference, you might find perusing the programme schedule provides some insight into issues being faced by science journalists outside the Canada/US bubble.

Racist and sexist robots have flawed AI

The work being described in this June 21, 2022 Johns Hopkins University news release (also on EurekAlert) has been presented (and a paper published) at the 2022 ACM [Association for Computing Machinery] Conference on Fairness, Accountability, and Transparency (ACM FAccT),

A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people’s jobs after a glance at their face.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.

Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.
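CLIP is openly available, so the mechanism at issue — scoring an image against arbitrary text labels — is easy to illustrate. What follows is a minimal sketch using OpenAI’s open-source clip package, not the robotics pipeline from the paper; the image file and label list are hypothetical.

```python
# Minimal sketch of CLIP zero-shot labeling, the mechanism underlying
# the robot's object identification. Requires: pip install torch pillow
# plus OpenAI's CLIP package (github.com/openai/CLIP).
import torch
import clip
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical inputs: a face image and person-descriptor captions.
image = preprocess(Image.open("face_block.png")).unsqueeze(0).to(device)
labels = ["a photo of a doctor", "a photo of a homemaker",
          "a photo of a criminal", "a photo of a janitor"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.3f}")
# CLIP will always rank the captions, even though nothing in a face
# image licenses any of them -- the failure mode the study documents.
```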

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands including, “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

The robot selected males 8% more.
White and Asian men were picked the most.
Black women were picked the least.
Once the robot “sees” people’s faces, the robot tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; identify Latino men as “janitors” 10% more than white men.
Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”
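The raw trial data aren’t in the news release, so the numbers below are invented, but here is a minimal sketch of how such selection-rate disparities might be tallied from a log of commands and robot choices.

```python
# Hypothetical trial log: (command, demographic of the block chosen).
# Illustrative only -- the study's actual data are not in the release.
trials = [
    ("pack the doctor in the brown box", "white man"),
    ("pack the doctor in the brown box", "asian man"),
    ("pack the criminal in the brown box", "black man"),
    ("pack the homemaker in the brown box", "latina woman"),
    ("pack the doctor in the brown box", "white man"),
    ("pack the criminal in the brown box", "white man"),
]

def selection_rate(command: str, group: str) -> float:
    """Fraction of trials for `command` in which `group` was selected."""
    picks = [who for what, who in trials if what == command]
    return picks.count(group) / len(picks) if picks else 0.0

# Disparity for one command, in the style of the study's
# "10% more than white men" findings.
cmd = "pack the criminal in the brown box"
gap = selection_rate(cmd, "black man") - selection_rate(cmd, "white man")
print(f"selection-rate gap for {cmd!r}: {gap:+.0%}")
```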

“When we said ‘put the criminal into the brown box,’ a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it’s something that seems positive like ‘put the doctor in the box,’ there is nothing in the photo indicating that person is a doctor so you can’t make that designation.”

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results “sadly unsurprising.”

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of the University of Washington.

The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.

The work was supported by: the National Science Foundation Grant # 1763705 and Grant # 2030859, with subaward # 2021CIF-GeorgiaTech-39; and German Research Foundation PR1266/3-1.

Here’s a link to and a citation for the paper,

Robots Enact Malignant Stereotypes by Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay. FAccT ’22 (2022 ACM Conference on Fairness, Accountability, and Transparency, June 21–24, 2022), Pages 743–756. DOI: https://doi.org/10.1145/3531146.3533138 Published Online: 20 June 2022

This paper is open access.

May 12, 2021 webcast: a solution to the ‘matchmaker’s dilemma’ (a mathematical problem)

Canada’s Perimeter Institute for Theoretical Physics (PI) is hosting a May 12, 2021 webcast according to their May 7, 2021 announcement (received via email),

A Solution to the Stable Marriage Problem
WEDNESDAY, MAY 12 [2021] at 7 pm ET

Imagine a matchmaker who wishes to arrange opposite-sex marriages in a dating pool of single men and single women (there’s a mathematical reason for the heteronormative framework, which will be explained).

The matchmaker’s goal is to pair every man and woman off into couples that will form happy, stable marriages – so perfectly matched that nobody would rather run off with someone from a different pairing. 

In the real world, things don’t work out so nicely. But could they work out like that if the matchmaker had a computer algorithm to calculate every single factor of compatibility? 

In her Perimeter Public Lecture, mathematician Emily Riehl (Johns Hopkins University) will examine that question, its sexist implications, an algorithmic solution, and real-world applications.
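That algorithmic solution is, almost certainly, the 1962 Gale-Shapley ‘deferred acceptance’ algorithm, which always produces a stable matching and underpins real-world systems such as medical residency matching. Here is a compact sketch with made-up preference lists; the two-sided proposer/reviewer structure is the mathematical reason for the heteronormative framing.

```python
# Gale-Shapley deferred acceptance: proposers propose in preference
# order; each reviewer tentatively holds the best offer seen so far.
# The result is guaranteed stable: no pair would rather run off together.
def gale_shapley(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)              # proposers without a match
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                             # reviewer -> proposer
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])          # jilted proposer re-enters pool
            engaged[r] = p
        else:
            free.append(p)                   # rejected; will try next choice
    return engaged

# Toy preference lists (hypothetical):
men = {"adam": ["yael", "zoe"], "ben": ["zoe", "yael"]}
women = {"yael": ["ben", "adam"], "zoe": ["adam", "ben"]}
print(gale_shapley(men, women))  # {'yael': 'adam', 'zoe': 'ben'}
```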

There is a bit more about Emily Riehl on the event page for ‘A Solution to the Stable Marriage Problem’,

An associate professor of mathematics at Johns Hopkins University, Riehl has published more than 20 papers and two books on higher category theory and homotopy theory. She studied at Harvard and Cambridge and earned her PhD at the University of Chicago.  

In addition to her research, Riehl is active in promoting access to the world of mathematics. She is a co-founder of Spectra: the Association for LGBT Mathematicians, and has presented on mathematical proof and queer epistemology as part of several conferences and lecture series. 

Tune in on Wednesday, May 12 [2021] at 7 pm ET for the premiere of Riehl’s lecture, and subscribe to Perimeter’s YouTube channel for more fascinating science videos.  

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
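The paper is paywalled, so details are scarce, but the underlying idea — decode from a small, task-relevant subset of channels rather than streaming the full array — can be sketched with synthetic data. Everything here (the channel counts, the correlation-based selection, the linear decoder) is an illustrative assumption, not the Stanford team’s method.

```python
import numpy as np

# Synthetic illustration of subset decoding: 96 recorded channels, but
# only 10 actually carry the movement signal. Keeping the informative
# channels cuts the data rate (and hence transmit power) roughly tenfold.
rng = np.random.default_rng(0)
n_trials, n_channels, n_informative = 500, 96, 10

target = rng.standard_normal(n_trials)            # e.g., cursor velocity
X = rng.standard_normal((n_trials, n_channels))   # channel recordings
X[:, :n_informative] += target[:, None]           # informative channels

# Rank channels by |correlation| with the target and keep the top k.
corr = np.abs([np.corrcoef(X[:, c], target)[0, 1] for c in range(n_channels)])
top_k = np.argsort(corr)[::-1][:n_informative]

def r2(Xs):
    """R^2 of a least-squares linear decoder fit on channels Xs."""
    w, *_ = np.linalg.lstsq(Xs, target, rcond=None)
    resid = target - Xs @ w
    return 1 - np.sum(resid**2) / np.sum((target - target.mean())**2)

print(f"full array ({n_channels} ch): R^2 = {r2(X):.3f}")
print(f"subset     ({n_informative} ch): R^2 = {r2(X[:, top_k]):.3f}")
```

On this toy data the small subset matches the full array almost exactly, which is the intuition behind trading bulk recording for targeted, low-power transmission.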

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on technology and human enhancement ethical issues where the technology is gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement). Note: Links have been removed,

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (can’t remember who pointed this out), but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the term ‘machine/flesh’ and/or ‘augmentation’ as a description of technologies integrated with bodies, human or otherwise. You can find lots on the topic here, under whichever tags or categories I’ve used.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘, featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind, as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development]) or see my May 15, 2020 posting here (with all the sources listed).

As for this new research at Stanford, it’s exciting news that raises questions even as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Nano 2020: a US education initiative

The US Department of Agriculture has a very interesting funding opportunity, Higher Education Challenge (HEC) Grants Program, as evidenced by the Nano 2020 virtual reality (VR) classroom initiative. Before launching into the specifics of the Nano 2020 project, here’s a description of the funding program,

Projects supported by the Higher Education Challenge Grants Program will: (1) address a state, regional, national, or international educational need; (2) involve a creative or non-traditional approach toward addressing that need that can serve as a model to others; (3) encourage and facilitate better working relationships in the university science and education community, as well as between universities and the private sector, to enhance program quality and supplement available resources; and (4) result in benefits that will likely transcend the project duration and USDA support.

A February 3, 2020 University of Arizona news release by Stacy Pigott (also on EurekAlert but published February 7, 2020) announced a VR classroom where students will be able to interact with nanoscale data gained from agricultural sciences and the life sciences,

Sometimes the smallest of things lead to the biggest ideas. Case in point: Nano 2020, a University of Arizona-led initiative to develop curriculum and technology focused on educating students in the rapidly expanding field of nanotechnology.

The five-year, multi-university project recently met its goal of creating globally relevant and implementable curricula and instructional technologies, including a virtual reality classroom, that enhance the capacity of educators to teach students about innovative nanotechnology applications in agriculture and the life sciences.

Here’s a video from the University of Arizona’s project proponents which illustrates their classroom,

For those who prefer text or like to have it as a backup, here’s the rest of the news release explaining the project,

Visualizing What is Too Small to be Seen

Nanotechnology involves particles and devices developed and used at the scale of 100 nanometers or less – to put that in perspective, the average diameter of a human hair is 80,000 nanometers. The extremely small scale can make comprehension challenging when it comes to learning about things that cannot be seen with the naked eye.

That’s where the Nano 2020 virtual reality classroom comes in. In a custom-developed VR classroom complete with a laboratory, nanoscale objects come to life for students thanks to the power of science data visualization.

Within the VR environment, students can interact with objects of nanoscale proportions – pick them up, turn them around and examine every nuance of things that would otherwise be too small to see. Students can also interact with their instructor or their peers. The Nano 2020 classroom allows for multi-player functionality, giving educators and students the opportunity to connect in a VR laboratory in real time, no matter where they are in the world.

“The virtual reality technology brings to life this complex content in a way that is oddly simple,” said Matt Mars, associate professor of agricultural leadership and innovation education in the College of Agriculture and Life Sciences and co-director of the Nano 2020 grant. “Imagine if you can take a student and they see a nanometer from a distance, and then they’re able to approach it and see how small it is by actually being in it. It’s mind-blowing, but in a way that students will be like, ‘Oh wow, that is really cool!'”

The technology was developed by Tech Core, a group of student programmers and developers led by director Ash Black in the Eller College of Management.

“The thing that I was the most fascinated with from the beginning was playing with a sense of scale,” said Black, a lifelong technologist and mentor-in-residence at the McGuire Center for Entrepreneurship. “What really intrigued me about virtual reality is that it is a tool where scale is elastic – you can dial it up and dial it down. Obviously, with nanotechnology, you’re dealing with very, very small things that nobody has seen yet, so it seemed like a perfect use of virtual reality.”

Black and Tech Core students including Robert Johnson, Hazza Alkaabi, Matthew Romero, Devon Oberdan, Brandon Erickson and Tim Lukau turned science data into an object, the object into an image, and the image into a 3D rendering that is functional in the VR environment they built.

“I think that being able to interact with objects of nanoscale data in this environment will result in a lot of light bulbs going off in the students’ minds. I think they’ll get it,” Black said. “To be able to experience something that is abstract – like, what does a carbon atom look like – well, if you can actually look at it, that’s suddenly a whole lot of context.”

The VR classroom complements the Nano 2020 curriculum, which globally expands the opportunities for nanotechnology education within the fields of agriculture and the life sciences.

Teaching the Workforce of the Future

“There have been great advances to the use of nanotechnology in the health sciences, but many more opportunities for innovation in this area still exist in the agriculture fields. The idea is to be able to advance these opportunities for innovation by providing some educational tools,” said Randy Burd, who was a nutritional sciences professor at the University of Arizona when he started the Nano 2020 project with funding from a National Institute of Food and Agriculture Higher Education Challenge grant through the United States Department of Agriculture. “It not only will give students the basics of the understanding of the applications, but will give them the innovative thought processes to think of new creations. That’s the real key.”


The goal of the Nano 2020 team, which includes faculty from the University of Arizona, Northern Arizona University and Johns Hopkins University, was to create an online suite of undergraduate courses that was not university-specific, but could be accessed and added to by educators to reach students around the world.

To that end, the team built modular courses in nanotechnology subjects such as glycobiology, optical microscopy and histology, nanomicroscopy techniques, nutritional genomics, applications of magnetic nanotechnology, and design, innovation, and entrepreneurship, to name a few. An online library will be created to facilitate the ongoing expansion of the open-source curricula, which will be disseminated through novel technologies such as the virtual reality classroom.

“It isn’t practical to think that other universities and colleges are just going to be able to launch new courses, because they still need people to teach those courses,” Mars said. “So we created a robust and flexible set of module-based course packages that include exercises, lectures, videos, power points, tools. Instructors will be able to pull out components and integrate them into what already exists to continue to move toward a more comprehensive offering in nanotechnology education.”

According to Mars, the highly adaptable nature of the curriculum and the ability to deliver it in various ways were key components of the Nano 2020 project.

“We approach the project with a strong entrepreneurial mindset and heavy emphasis on innovation. We wanted it to be broadly defined and flexible in structure, so that other institutions access and model the curricula, see its foundation, and adapt that to what their needs were to begin to disseminate the notion of nanotechnology as an underdeveloped but really important field within the larger landscape of agriculture and life sciences,” Mars said. “We wanted to also provide an overlay to the scientific and technological components that would be about adoption in human application, and we approached that through an innovation and entrepreneurial leadership lens.”

Portions of the Nano 2020 curriculum are currently being offered as electives in a certificate program through the Department of Agriculture Education, Technology and Innovation at the University of Arizona. As it becomes more widely disseminated through the higher education community at large, researchers expect the curriculum and VR classroom technology to transcend the boundaries of discipline, institution and geography.

“An online open platform will exist where people can download components and courses, and all of it is framed by the technology, so that these experiences and research can be shared over this virtual reality component,” Burd said. “It’s technologically distinct from what exists now.”

“The idea is that it’s not just curriculum, but it’s the delivery of that curriculum, and the delivery of that curriculum in various ways,” Mars said. “There’s a relatability that comes with the virtual reality that I think is really cool. It allows students to relate to something as abstract as a nanometer, and that is what is really exciting.”

As best I can determine, this VR Nano 2020 classroom is not yet ready for a wide release and, for now, is being offered exclusively at the University of Arizona.

Artificial intelligence (AI) brings together the International Telecommunication Union (ITU) and the World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunication Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group on AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group on AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay for The Conversation (republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times when the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often similar properties. Credit: Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and the prediction is even more reliable if many toxic substances are close by and the harmless ones are far away. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
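For the technically inclined, here's a minimal Python sketch of the read-across idea Hartung describes: represent each chemical as a set of structural features, measure similarity, and predict toxicity from the nearest labelled neighbours. To be clear, this is not the authors' RASAR implementation; the chemicals, feature sets, labels and the simple nearest-neighbour vote are all invented for illustration,

```python
# Toy read-across: chemicals as feature sets, Tanimoto/Jaccard similarity,
# and a nearest-neighbour vote. Everything here is invented for illustration.

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical labelled chemicals: structural features plus a hazard label.
KNOWN = {
    "chem_A": ({"aromatic_ring", "nitro_group", "chlorine"}, "toxic"),
    "chem_B": ({"aromatic_ring", "nitro_group"},             "toxic"),
    "chem_C": ({"hydroxyl", "alkyl_chain"},                  "harmless"),
    "chem_D": ({"hydroxyl", "alkyl_chain", "ester"},         "harmless"),
}

def predict(query: set, k: int = 3) -> str:
    """Label a query chemical by majority vote of its k nearest neighbours."""
    ranked = sorted(KNOWN.values(),
                    key=lambda pair: tanimoto(query, pair[0]),
                    reverse=True)[:k]
    votes = [label for _, label in ranked]
    return max(set(votes), key=votes.count)

# An untested chemical that lands near the two toxic neighbours on the "map".
print(predict({"aromatic_ring", "nitro_group", "bromine"}))  # -> "toxic"
```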

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.
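To make that "compare prediction with reality" step concrete, here's a toy calculation of the sensitivity figure Hartung cites, i.e., the fraction of genuinely toxic chemicals that the model flags. The data below are made up; the paper reports roughly 89 percent across some 48,000 well-characterized chemicals,

```python
# Sketch of the validation logic: compare predictions against known labels
# and count how often true toxicants are flagged. Data here are invented.

def sensitivity(truth: list[str], predicted: list[str]) -> float:
    """Fraction of truly toxic chemicals that the model flags as toxic."""
    toxic_total = sum(1 for t in truth if t == "toxic")
    toxic_found = sum(1 for t, p in zip(truth, predicted)
                      if t == "toxic" and p == "toxic")
    return toxic_found / toxic_total

truth     = ["toxic", "toxic", "harmless", "toxic", "harmless"]
predicted = ["toxic", "harmless", "harmless", "toxic", "harmless"]
print(f"sensitivity: {sensitivity(truth, predicted):.0%}")  # -> 67%
```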

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorization and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct, and chemical producers had not registered chemicals after 2013 but had instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; some of that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism; they literally feel no pain, for one reason or another. Amputees, who have no sensation in their missing limbs, are one such group, and a researcher at Johns Hopkins University (Maryland, US) has found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.

The e-dermis is not sensitive to temperature; for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be extended to astronaut gloves and space suits, Osborn says.
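For readers who want a feel for what that "neuromorphic encoding" might look like, here's a minimal sketch that splits a fingertip reading into an innocuous touch channel and a noxious pain channel, each expressed as a nerve-stimulation pulse rate. The thresholds, units and rates are my own assumptions, not the published e-dermis model,

```python
# Toy two-channel encoding of a fingertip reading: graded "touch" for any
# contact, and "pain" only for sharp, high-pressure contact. All thresholds
# and pulse rates are invented for illustration.

def encode(pressure_kpa: float, sharpness: float) -> dict:
    """Map a sensor reading to stimulation pulse rates (Hz) per channel.

    pressure_kpa: contact pressure from the fingertip sensor.
    sharpness:    0.0 for a rounded object up to 1.0 for a sharp point.
    """
    touch_hz = min(100.0, 2.0 * pressure_kpa)      # graded light touch
    pain_hz = 0.0
    if sharpness > 0.6 and pressure_kpa > 10.0:    # noxious stimulus
        pain_hz = min(150.0, 5.0 * pressure_kpa * sharpness)
    return {"touch_hz": touch_hz, "pain_hz": pain_hz}

print(encode(pressure_kpa=20.0, sharpness=0.1))  # round object: touch only
print(encode(pressure_kpa=20.0, sharpness=0.9))  # sharp object: touch + pain
```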

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
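That reflex is, at heart, a fast threshold check inside the hand's control loop. Here's a hypothetical sketch of such a loop, with invented thresholds and the kind of user-override flag Zhang says users would ideally have; it is an illustration, not the team's actual controller,

```python
# Toy drop reflex: if the e-dermis reports a noxious (sharp, high-pressure)
# contact and the user has not overridden the reflex, open the hand within
# the ~100 ms control cycle. Thresholds are invented for illustration.
import time

SHARPNESS_LIMIT = 0.6   # hypothetical: above this, contact counts as noxious
PRESSURE_LIMIT = 10.0   # hypothetical pressure threshold (kPa)

def control_cycle(sharpness: float, pressure_kpa: float,
                  user_override: bool) -> str:
    """One ~100 ms control step; returns the grip command."""
    noxious = sharpness > SHARPNESS_LIMIT and pressure_kpa > PRESSURE_LIMIT
    if noxious and not user_override:
        return "open"   # reflexively drop the object
    return "hold"       # keep gripping (safe contact, or user says so)

start = time.monotonic()
command = control_cycle(sharpness=0.9, pressure_kpa=25.0, user_override=False)
elapsed_ms = (time.monotonic() - start) * 1000
print(command, f"(decided in {elapsed_ms:.3f} ms)")  # well under 100 ms
```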

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics 20 Jun 2018: Vol. 3, Issue 19, eaat3818 DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Mixing the unmixable for all new nanoparticles

This news comes out of the University of Maryland, and the discovery could lead to nanoparticles that have never before been imagined. From a March 29, 2018 news item on ScienceDaily,

Making a giant leap in the ‘tiny’ field of nanoscience, a multi-institutional team of researchers is the first to create nanoscale particles composed of up to eight distinct elements generally known to be immiscible, or incapable of being mixed or blended together. The blending of multiple, unmixable elements into a unified, homogenous nanostructure, called a high entropy alloy nanoparticle, greatly expands the landscape of nanomaterials — and what we can do with them.

This research makes a significant advance on previous efforts that have typically produced nanoparticles limited to only three different elements and to structures that do not mix evenly. Essentially, it is extremely difficult to squeeze and blend different elements into individual particles at the nanoscale. The team, which includes lead researchers at University of Maryland, College Park (UMD)’s A. James Clark School of Engineering, published a peer-reviewed paper based on the research featured on the March 30 [2018] cover of Science.

A March 29, 2018 University of Maryland press release (also on EurekAlert), which originated the news item, delves further (Note: Links have been removed),

“Imagine the elements that combine to make nanoparticles as Lego building blocks. If you have only one to three colors and sizes, then you are limited by what combinations you can use and what structures you can assemble,” explains Liangbing Hu, associate professor of materials science and engineering at UMD and one of the corresponding authors of the paper. “What our team has done is essentially enlarged the toy chest in nanoparticle synthesis; now, we are able to build nanomaterials with nearly all metallic and semiconductor elements.”
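Hu's Lego analogy can be quantified with a little combinatorics. Assuming a palette of, say, 30 candidate metallic elements (my number, purely for illustration), here's a quick calculation of how many distinct element combinations open up when you can mix up to eight elements rather than three,

```python
# How much does raising the mixing limit from 3 elements to 8 enlarge the
# "toy chest"? Count element combinations from a hypothetical palette of
# 30 candidate metals (the palette size is an assumption).
from math import comb

PALETTE = 30

def combos(max_elements: int) -> int:
    """Number of distinct combinations of 1..max_elements elements."""
    return sum(comb(PALETTE, k) for k in range(1, max_elements + 1))

print(f"up to 3 elements: {combos(3):,}")   # 4,525
print(f"up to 8 elements: {combos(8):,}")   # 8,656,936
```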

The researchers say this advance in nanoscience opens vast opportunities for a wide range of applications that includes catalysis (the acceleration of a chemical reaction by a catalyst), energy storage (batteries or supercapacitors), and bio/plasmonic imaging, among others.

To create the high entropy alloy nanoparticles, the researchers employed a two-step method of flash heating followed by flash cooling. Metallic elements such as platinum, nickel, iron, cobalt, gold, copper, and others were exposed to a rapid thermal shock of approximately 3,000 degrees Fahrenheit, or about half the temperature of the sun, for 0.055 seconds. The extremely high temperature resulted in uniform mixtures of the multiple elements. The subsequent rapid cooling (more than 100,000 degrees Fahrenheit per second) stabilized the newly mixed elements into the uniform nanomaterial.
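The quoted figures allow a quick back-of-envelope check on how fast the whole heat-and-quench cycle is. Here's that arithmetic, using the release's numbers; the ambient temperature is my assumption,

```python
# Back-of-envelope timing for the flash heat/quench: a ~3,000 °F pulse
# lasting 0.055 s, followed by cooling faster than 100,000 °F per second.

PEAK_F = 3000.0          # flash temperature, degrees Fahrenheit
PULSE_S = 0.055          # duration of the thermal shock, seconds
COOL_RATE_F_PER_S = 1e5  # quoted minimum cooling rate
AMBIENT_F = 70.0         # assumed room temperature

quench_time_s = (PEAK_F - AMBIENT_F) / COOL_RATE_F_PER_S
print(f"heating pulse:     {PULSE_S * 1000:.0f} ms")        # 55 ms
print(f"quench to ambient: {quench_time_s * 1000:.1f} ms")  # ~29.3 ms
```

In other words, the entire synthesis cycle takes less than a tenth of a second, which is what locks the normally immiscible elements into a uniform particle before they can separate.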

“Our method is simple, but one that nobody else has applied to the creation of nanoparticles. By using a physical science approach, rather than a traditional chemistry approach, we have achieved something unprecedented,” says Yonggang Yao, a Ph.D. student at UMD and one of the lead authors of the paper.

To demonstrate one potential use of the nanoparticles, the research team used them as advanced catalysts for ammonia oxidation, which is a key step in the production of nitric acid (a liquid acid that is used in the production of ammonium nitrate for fertilizers, making plastics, and in the manufacturing of dyes). They were able to achieve 100 percent oxidation of ammonia and 99 percent selectivity toward desired products with the high entropy alloy nanoparticles, proving their ability as highly efficient catalysts.

Yao says another potential use of the nanoparticles as catalysts could be the generation of chemicals or fuels from carbon dioxide.

“The potential applications for high entropy alloy nanoparticles are not limited to the field of catalysis. With cross-discipline curiosity, the demonstrated applications of these particles will become even more widespread,” says Steven D. Lacey, a Ph.D. student at UMD and also one of the lead authors of the paper.

This research was performed through a multi-institutional collaboration of Prof. Liangbing Hu’s group at the University of Maryland, College Park; Prof. Reza Shahbazian-Yassar’s group at University of Illinois at Chicago; Prof. Ju Li’s group at the Massachusetts Institute of Technology; Prof. Chao Wang’s group at Johns Hopkins University; and Prof. Michael Zachariah’s group at the University of Maryland, College Park.

What outside experts are saying about this research:

“This is quite amazing; Dr. Hu creatively came up with this powerful technique, carbo-thermal shock synthesis, to produce high entropy alloys of up to eight different elements in a single nanoparticle. This is indeed unthinkable for bulk materials synthesis. This is yet another beautiful example of nanoscience!” says Peidong Yang, the S.K. and Angela Chan Distinguished Professor of Energy and professor of chemistry at the University of California, Berkeley, and a member of the American Academy of Arts and Sciences.

“This discovery opens many new directions. There are simulation opportunities to understand the electronic structure of the various compositions and phases that are important for the next generation of catalyst design. Also, finding correlations among synthesis routes, composition, and phase structure and performance enables a paradigm shift toward guided synthesis,” says George Crabtree, Argonne Distinguished Fellow and director of the Joint Center for Energy Storage Research at Argonne National Laboratory.

More from the research coauthors:

“Understanding the atomic order and crystalline structure in these multi-element nanoparticles reveals how the synthesis can be tuned to optimize their performance. It would be quite interesting to further explore the underlying atomistic mechanisms of the nucleation and growth of high entropy alloy nanoparticles,” says Reza Shahbazian-Yassar, associate professor at the University of Illinois at Chicago and a corresponding author of the paper.

“Carbon metabolism drives ‘living’ metal catalysts that frequently move around, split, or merge, resulting in a nanoparticle size distribution that’s far from the ordinary, and highly tunable,” says Ju Li, professor at the Massachusetts Institute of Technology and a corresponding author of the paper.

“This method enables new combinations of metals that do not exist in nature and do not otherwise go together. It enables robust tuning of the composition of catalytic materials to optimize the activity, selectivity, and stability, and the application will be very broad in energy conversions and chemical transformations,” says Chao Wang, assistant professor of chemical and biomolecular engineering at Johns Hopkins University and one of the study’s authors.

Here’s a link to and a citation for the paper,

Carbothermal shock synthesis of high-entropy-alloy nanoparticles by Yonggang Yao, Zhennan Huang, Pengfei Xie, Steven D. Lacey, Rohit Jiji Jacob, Hua Xie, Fengjuan Chen, Anmin Nie, Tiancheng Pu, Miles Rehwoldt, Daiwei Yu, Michael R. Zachariah, Chao Wang, Reza Shahbazian-Yassar, Ju Li, Liangbing Hu. Science 30 Mar 2018: Vol. 359, Issue 6383, pp. 1489-1494 DOI: 10.1126/science.aan5412

This paper is behind a paywall.