Blue quantum dots and your television screen

Scientists used equipment at the Canadian Light Source (CLS; a synchrotron in Saskatoon, Saskatchewan, Canada) in the quest for better glowing dots for your television screen (and maybe your computer and phone screens, too). From an August 20, 2020 news item on Nanowerk,

There are many things quantum dots could do, but the most obvious place they could change our lives is to make the colours on our TVs and screens more pristine. Research using the Canadian Light Source (CLS) at the University of Saskatchewan is helping to bring this technology closer to our living rooms.

An August 19, 2020 CLS news release (also received via email) by Victoria Martinez, which originated the news item, explains what quantum dots are and fills in the technical details about this research,

Quantum dots are nanocrystals that glow, a property that scientists have been working with to develop next-generation LEDs. When a quantum dot glows, it creates very pure light in a precise wavelength of red, blue or green. Conventional LEDs, found in our TV screens today, produce white light that is filtered to achieve desired colours, a process that leads to less bright and muddier colours.

Until now, blue-glowing quantum dots, which are crucial for creating a full range of colour, have proved particularly challenging for researchers to develop. However, University of Toronto (U of T) researcher Dr. Yitong Dong and collaborators have made a huge leap in blue quantum dot fluorescence, results they recently published in Nature Nanotechnology.

“The idea is that if you have a blue LED, you have everything. We can always down convert the light from blue to green and red,” says Dong. “Let’s say you have green, then you cannot use this lower-energy light to make blue.”

The team’s breakthrough has led to quantum dots that produce green light at an external quantum efficiency (EQE) of 22% and blue at 12.3%. The theoretical maximum efficiency is not far off at 25%, and this is the first blue perovskite LED reported as achieving an EQE higher than 10%.
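
For readers who want to pin down the metric being quoted, external quantum efficiency has a standard definition that is not specific to this paper; written out, it is simply photons out per electrons in:

```latex
% Standard definition of external quantum efficiency (not taken from the paper itself):
\[
\mathrm{EQE} \;=\; \frac{\text{photons emitted into free space}}{\text{electrons injected}}
\;=\; \eta_{\text{internal}} \times \eta_{\text{outcoupling}}
\]
```

Read this way, the roughly 25% ceiling quoted above presumably reflects the outcoupling term: even a film that converts nearly every injected electron into a photon gets only a fraction of that light out of a flat device.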

The Science

Dong has been working in the field of quantum dots for two years in Dr. Edward Sargent’s research group at the U of T. Achieving this astonishing increase in efficiency took time, an unusual production approach, and the overcoming of several scientific hurdles.

CLS techniques, particularly GIWAXS [grazing incidence wide-angle X-ray scattering] on the HXMA [hard X-ray micro-analysis] beamline, allowed the researchers to verify the structures achieved in their quantum dot films. This validated their results and helped clarify what the structural changes achieve in terms of LED performance.

“The CLS was very helpful. GIWAXS is a fascinating technique,” says Dong.

The first challenge was uniformity, which is important for ensuring a clear blue colour and for preventing the LED from drifting towards producing green light.

“We used a special synthetic approach to achieve a very uniform assembly, so every single particle has the same size and shape. The overall film is nearly perfect and maintains the blue emission conditions all the way through,” says Dong.

Next, the team needed to tackle the charge injection needed to excite the dots into luminescence. Since the crystals are not very stable, they need stabilizing molecules to act as scaffolding and support them. These are typically long molecular chains, up to 18 carbons long, and these non-conductive molecules at the surface make it hard to deliver the energy needed to produce light.

“We used a special surface structure to stabilize the quantum dot. Compared to the films made with long chain molecules capped quantum dots, our film has 100 times higher conductivity, sometimes even 1000 times higher.”

This remarkable performance is a key benchmark in bringing these nanocrystal LEDs to market. However, stability remains an issue and quantum dot LEDs suffer from short lifetimes. Dong is excited about the potential for the field and adds, “I like photons, these are interesting materials, and, well, these glowing crystals are just beautiful.”

Here’s a link to and a citation for the paper,

Bipolar-shell resurfacing for blue LEDs based on strongly confined perovskite quantum dots by Yitong Dong, Ya-Kun Wang, Fanglong Yuan, Andrew Johnston, Yuan Liu, Dongxin Ma, Min-Jae Choi, Bin Chen, Mahshid Chekini, Se-Woong Baek, Laxmi Kishore Sagar, James Fan, Yi Hou, Mingjian Wu, Seungjin Lee, Bin Sun, Sjoerd Hoogland, Rafael Quintero-Bermudez, Hinako Ebe, Petar Todorovic, Filip Dinic, Peicheng Li, Hao Ting Kung, Makhsud I. Saidaminov, Eugenia Kumacheva, Erdmann Spiecker, Liang-Sheng Liao, Oleksandr Voznyy, Zheng-Hong Lu, Edward H. Sargent. Nature Nanotechnology volume 15, pages 668–674 (2020) DOI: https://doi.org/10.1038/s41565-020-0714-5 Published: 06 July 2020 Issue Date: August 2020

This paper is behind a paywall.

If you search “Edward Sargent” (he’s the last author listed in the citation) here on this blog, you will find a number of postings that feature work from his laboratory at the University of Toronto.

New nanotubes discovered in the eye

I was half-expecting to read about some sort of fancy carbon nanotubes—I was wrong. From an August 12, 2020 news item on ScienceDaily where the researchers keep the mystery going for a while,

A new mechanism of blood redistribution that is essential for the proper functioning of the adult retina has just been discovered in vivo by researchers at the University of Montreal Hospital Research Centre (CRCHUM).

“For the first time, we have identified a communication structure between cells that is required to coordinate blood supply in the living retina,” said Dr. Adriana Di Polo, a neuroscience professor at Université de Montréal and holder of a Canada Research Chair in glaucoma and age-related neurodegeneration, who supervised the study.

“We already knew that activated retinal areas receive more blood than non-activated ones,” she said, “but until now no one understood how this essential blood delivery was finely regulated.”

The study was conducted on mice by two members of Di Polo’s lab: Dr. Luis Alarcon-Martinez, a postdoctoral fellow, and Deborah Villafranca-Baughman, a PhD student. Both are the first co-authors of this study.

In living animals, as in humans, the retina uses the oxygen and nutrients contained in the blood to fully function. This vital exchange takes place through capillaries, the thinnest blood vessels in all organs of the body. When the blood supply is dramatically reduced or cut off — such as in ischemia or stroke — the retina does not receive the oxygen it needs. In this condition, the cells begin to die and the retina stops working as it should.

An August 12, 2020 University of Montreal Hospital Research Centre (CRCHUM) news release on EurekAlert clears up the mystery,

Tunnelling between cells

Wrapped around the capillaries are pericytes, cells that have the ability to control the amount of blood passing through a single capillary simply by squeezing and releasing it.

“Using a microscopy technique to visualize vascular changes in living mice, we showed that pericytes project very thin tubes, called inter-pericyte tunnelling nanotubes, [emphasis mine] to communicate with other pericytes located in distant capillaries,” said Alarcon-Martinez. “Through these nanotubes, the pericytes can talk to each other to deliver blood where it is most needed.”

Another important feature, added Villafranca-Baughman, is that “the capillaries lose their ability to shuttle blood where it is required when the tunnelling nanotubes are damaged–after an ischemic stroke, for example. The lack of blood supply that follows has a detrimental effect on neurons and the overall tissue function.”

The team’s findings suggest that microvascular deficits observed in neurodegenerative diseases like strokes, glaucoma, and Alzheimer’s disease might result from the loss of tunnelling nanotubes and impaired blood distribution. Strategies that protect these nanostructures should then be beneficial, but remain to be demonstrated.

Here’s a link to and a citation for the paper

Interpericyte tunnelling nanotubes regulate neurovascular coupling by Luis Alarcon-Martinez, Deborah Villafranca-Baughman, Heberto Quintero, J. Benjamin Kacerovsky, Florence Dotigny, Keith K. Murai, Alexandre Prat, Pierre Drapeau & Adriana Di Polo. Nature (2020) Published: 12 August 2020 DOI: https://doi.org/10.1038/s41586-020-2589-x

This paper is behind a paywall.

Loop quantum cosmology connects the tiniest with the biggest in a cosmic tango

Caption: Tiny quantum fluctuations in the early universe explain two major mysteries about the large-scale structure of the universe, in a cosmic tango of the very small and the very large. A new study by researchers at Penn State used the theory of loop quantum gravity to account for these mysteries, which Einstein’s theory of general relativity considers anomalous. Credit: Dani Zemba, Penn State

A July 29, 2020 news item on ScienceDaily announces a study showing that loop quantum cosmology can account for some large-scale mysteries,

While [1] Einstein’s theory of general relativity can explain a large array of fascinating astrophysical and cosmological phenomena, some aspects of the properties of the universe at the largest-scales remain a mystery. A new study using loop quantum cosmology — a theory that uses quantum mechanics to extend gravitational physics beyond Einstein’s theory of general relativity — accounts for two major mysteries. While the differences in the theories occur at the tiniest of scales — much smaller than even a proton — they have consequences at the largest of accessible scales in the universe. The study, which appears online July 29 [2020] in the journal Physical Review Letters, also provides new predictions about the universe that future satellite missions could test.

A July 29, 2020 Pennsylvania State University (Penn State) news release (also on EurekAlert) by Gail McCormick, which originated the news item, describes how this work helped us avoid a crisis in cosmology,

While [2] a zoomed-out picture of the universe looks fairly uniform, it does have a large-scale structure, for example because galaxies and dark matter are not uniformly distributed throughout the universe. The origin of this structure has been traced back to the tiny inhomogeneities observed in the Cosmic Microwave Background (CMB)–radiation that was emitted when the universe was 380 thousand years young that we can still see today. But the CMB itself has three puzzling features that are considered anomalies because they are difficult to explain using known physics.

“While [3] seeing one of these anomalies may not be that statistically remarkable, seeing two or more together suggests we live in an exceptional universe,” said Donghui Jeong, associate professor of astronomy and astrophysics at Penn State and an author of the paper. “A recent study in the journal Nature Astronomy proposed an explanation for one of these anomalies that raised so many additional concerns, they flagged a ‘possible crisis in cosmology’ [emphasis mine]. Using quantum loop cosmology, however, we have resolved two of these anomalies naturally, avoiding that potential crisis.”

Research over the last three decades has greatly improved our understanding of the early universe, including how the inhomogeneities in the CMB were produced in the first place. These inhomogeneities are a result of inevitable quantum fluctuations in the early universe. During a highly accelerated phase of expansion at very early times–known as inflation–these primordial, miniscule fluctuations were stretched under gravity’s influence and seeded the observed inhomogeneities in the CMB.

“To understand how primordial seeds arose, we need a closer look at the early universe, where Einstein’s theory of general relativity breaks down,” said Abhay Ashtekar, Evan Pugh Professor of Physics, holder of the Eberly Family Chair in Physics, and director of the Penn State Institute for Gravitation and the Cosmos. “The standard inflationary paradigm based on general relativity treats space time as a smooth continuum. Consider a shirt that appears like a two-dimensional surface, but on closer inspection you can see that it is woven by densely packed one-dimensional threads. In this way, the fabric of space time is really woven by quantum threads. In accounting for these threads, loop quantum cosmology allows us to go beyond the continuum described by general relativity where Einstein’s physics breaks down–for example beyond the Big Bang.”

The researchers’ previous investigation into the early universe replaced the idea of a Big Bang singularity, where the universe emerged from nothing, with the Big Bounce, where the current expanding universe emerged from a super-compressed mass that was created when the universe contracted in its preceding phase. They found that all of the large-scale structures of the universe accounted for by general relativity are equally explained by inflation after this Big Bounce using equations of loop quantum cosmology.

In the new study, the researchers determined that inflation under loop quantum cosmology also resolves two of the major anomalies that appear under general relativity.

“The primordial fluctuations we are talking about occur at the incredibly small Planck scale,” said Brajesh Gupt, a postdoctoral researcher at Penn State at the time of the research and currently at the Texas Advanced Computing Center of the University of Texas at Austin. “A Planck length is about 20 orders of magnitude smaller than the radius of a proton. But corrections to inflation at this unimaginably small scale simultaneously explain two of the anomalies at the largest scales in the universe, in a cosmic tango of the very small and the very large.”
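
The “20 orders of magnitude” figure is easy to check against textbook values (the numbers below are mine, not the press release’s):

```latex
% Quick arithmetic check using textbook values (my numbers, not the press release's):
\[
\ell_{\mathrm{P}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad
r_{p} \approx 0.84 \times 10^{-15}\ \mathrm{m}, \qquad
\frac{\ell_{\mathrm{P}}}{r_{p}} \approx 2 \times 10^{-20},
\]
```

so the Planck length really does sit roughly twenty powers of ten below the proton’s radius.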

The researchers also produced new predictions about a fundamental cosmological parameter and primordial gravitational waves that could be tested during future satellite missions, including LiteBird and Cosmic Origins Explorer, which will continue to improve our understanding of the early universe.

That’s a lot of ‘while’. I’ve done this sort of thing, too, and whenever I come across it later, it’s painful.

Here’s a link to and a citation for the paper,

Alleviating the Tension in the Cosmic Microwave Background Using Planck-Scale Physics by Abhay Ashtekar, Brajesh Gupt, Donghui Jeong, and V. Sreenath. Phys. Rev. Lett. 125, 051302 DOI: https://doi.org/10.1103/PhysRevLett.125.051302 Published 29 July 2020 © 2020 American Physical Society

This paper is behind a paywall.

Spotting the difference between dengue and Zika infections with gold nanosensors

This July 29, 2020 news item on Nanowerk features research from Brazil,

A new class of nanosensor developed in Brazil could more accurately identify dengue and Zika infections, a task that is complicated by their genetic similarities and which can result in misdiagnosis.

The technique uses gold nanoparticles and can “observe” viruses at the atomic level, according to a study published in Scientific Reports (“Nanosensors based on LSPR are able to serologically differentiate dengue from Zika infections”).

Belonging to the Flavivirus genus in the Flaviviridae family, Zika and dengue viruses share more than 50 per cent similarity in their amino acid sequence. Both viruses are spread by mosquitos and can have long-term side effects. The Flaviviridae virus family was named after the yellow fever virus and comes from the Latin word for golden, or yellow, in colour.

“Diagnosing [dengue virus] infections is a high priority in countries affected by annual epidemics of dengue fever. The correct diagnostic is essential for patient managing and prognostic as there are no specific antiviral drugs to treat the infection,” the authors say.

More than 1.8 million people are suspected to have been infected with dengue so far this year in the Americas, with 4000 severe cases and almost 700 deaths, the Pan American Health Organization says. The annual global average is estimated to be between 100 million and 400 million dengue infections, according to the World Health Organization.

Flávio Fonseca, study co-author and researcher at the Federal University of Minas Gerais, tells SciDev.Net it is almost impossible to differentiate between dengue and Zika viruses.

“A serologic test that detects antibodies against dengue also captures Zika-generated antibodies. We call it cross-reactivity,” he says.

Meghie Rodrigues’ July 29, 2020 article for SciDev.net, which originated the news item, delves further into the work,

Co-author and virologist, Maurício Nogueira, tells SciDev.Net that avoiding cross-reactivity is crucial because “dengue is a disease that kills — and can do so quickly if the right diagnosis is not made. As for Zika, it offers risks for foetuses to develop microcephaly, and we can’t let pregnant women spend seven or eight months wondering whether they have the virus or not.”

There is also no specific antiviral treatment for Zika and the search for a vaccine is ongoing.

Virus differentiation is important to accurately measure the real impact of both diseases on public health. The most widely used blood test, the enzyme-linked immunosorbent assay (ELISA), is limited in its ability to tell the difference between the viruses, the authors say.

As dengue has four variations, known as serotypes, the team created four different nanoparticles and covered each of them with a different dengue protein. They applied ELISA serum and a blood sample. The researchers found that sample antibodies bound with the viruses’ proteins, changing the pattern of electrons on the gold nanoparticle surface.
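
The “pattern of electrons” being changed is the nanoparticles’ localized surface plasmon resonance (LSPR), the effect named in the paper’s title. A commonly used textbook approximation for how far the resonance peak shifts when molecules such as antibodies bind near a gold surface (a general relation, not one taken from this study) is:

```latex
% Standard LSPR biosensing approximation (textbook relation, not from this study):
\[
\Delta\lambda_{\max} \;\approx\; m\,\Delta n \left( 1 - e^{-2d/l_{d}} \right)
\]
```

where m is the particle’s refractive-index sensitivity, Δn the refractive-index change produced by the bound antibodies, d the effective thickness of the bound layer, and l_d the decay length of the plasmon’s evanescent field. In short, antibody binding nudges the gold particles’ optical resonance, and that nudge is what the sensor reads out.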

Should you check out Rodrigues’ entire article, you might want to take some time to explore SciDev.net to find science news from countries that don’t often get the coverage they should.

Here’s a link to and a citation for the researchers’ paper,

Nanosensors based on LSPR are able to serologically differentiate dengue from Zika infections by Alice F. Versiani, Estefânia M. N. Martins, Lidia M. Andrade, Laura Cox, Glauco C. Pereira, Edel F. Barbosa-Stancioli, Mauricio L. Nogueira, Luiz O. Ladeira & Flávio G. da Fonseca. Scientific Reports volume 10, Article number: 11302 (2020) DOI: https://doi.org/10.1038/s41598-020-68357-9 Published: 09 July 2020

This paper is open access.

Plantains and carbon nanotubes to improve cars

I always enjoy the unexpected in a story and this one has to do with plantains and luxury cars, from a July 29, 2020 news item on phys.org (Note: A link has been removed),

A luxury automobile is not really a place to look for something like sisal, hemp, or wood. Yet automakers have been using natural fibers for decades. Some high-end sedans and coupes use these in composite materials for interior door panels, for engine, interior and noise insulation, and internal engine covers, among other uses.

Unlike steel or aluminum, natural fiber composites do not rust or corrode. They can also be durable and easily molded. The biggest advantages of fiber reinforced polymer composites for cars are light weight, good crash properties, and noise- and vibration-reducing characteristics. But making more parts of a vehicle from renewable sources is a challenge. Natural fiber polymer composites can crack, break and bend. The reasons include low tensile, flexural and impact strength in the composite material.

Researchers from the University of Johannesburg [South Africa] have now demonstrated that plantain, a starchy type of banana, is a promising source for an emerging type of composite material for the automotive industry. The natural plantain fibers are combined with carbon nanotubes and epoxy resin to form a natural fiber-reinforced polymer hybrid nanocomposite material. Plantain is a year-round staple food crop in tropical regions of Africa, Asia and South America. Many types of plantain are eaten cooked.

A July 29, 2020 University of Johannesburg press release, which originated the news item, delves into plantains and how their fibers enhance nanocomposites destined for integration into luxury cars,

The researchers moulded a composite material from epoxy resin, treated plantain fibers and carbon nanotubes. The optimum amount of nanotubes was 1% by weight of the combined plantain and epoxy resin.

The resulting plantain nanocomposite was much stronger and stiffer than epoxy resin on its own.

The composite had 31% more tensile and 34% more flexural strength than the epoxy resin alone. The nanocomposite also had 52% higher tensile modulus and 29% higher flexural modulus than the epoxy resin alone.

“The hybridization of plantain with multi-walled carbon nanotubes increases the mechanical and thermal strength of the composite. These increases make the hybrid composite a competitive and alternative material for certain car parts,” says Prof Tien-Chien Jen.

Prof Jen is the lead researcher in the study and the Head of the Department of Mechanical Engineering Science at the University of Johannesburg.

Natural fibres vs metals

Producing car parts from renewable sources has several benefits, says Dr Patrick Ehi Imoisili. Dr Imoisili is a postdoctoral researcher in the Department of Mechanical Engineering Science at the University of Johannesburg.

“There is a trend of using natural fibre in vehicles. The reason is that natural fibres composites are renewable, low cost and low density. They have high specific strength and stiffness. The manufacturing processes are relatively safe,” says Imoisili.

“Using car parts made from these composites can reduce the mass of a vehicle. That can result in better fuel-efficiency and safety. These components will not rust or corrode like metals. Also, they can be stiff, durable and easily molded,” he adds.

However, some natural fibre reinforced polymer composites currently have disadvantages such as water absorption, low impact strength and low heat resistance. Car owners can notice effects such as cracking, bending or warping of a car part, says Imoisili.

Standardised tests

The researchers subjected the plantain nanocomposite to a series of standardised industrial tests. These included ASTM Test Methods D638 and D790; impact testing according to the ASTM A-370 standard; and ASTM D-2240.

The tests showed that a composite with 1% nanotubes had the best strength and stiffness, compared to epoxy resin alone.

The plantain nanocomposite also showed marked improvement in micro hardness, impact strength and thermal conductivity compared to epoxy resin alone.

Moulding a nanocomposite from natural fibres

The researchers compression-moulded a ‘stress test object’. They used 1 part inedible plantain fibres, 4 parts epoxy resin and multi-walled carbon nanotubes. The epoxy resin and nanotubes came from commercial suppliers. The epoxy was similar to resins that auto manufacturers use in certain car parts.

The plantain fibres came from the ‘trunks’, or pseudo-stems, of plantain plants in the south-western region of Nigeria. The pseudo-stems consist of tightly-overlapping leaves.

The researchers treated the plantain fibers with several processes. The first process is an ancient method to separate plant fibres from stems, called water-retting.

In the second process, the fibres were soaked in a 3% caustic soda solution for 4 hours. After drying, the fibres were treated with high-frequency microwave radiation of 2.45 GHz at 550 W for 2 minutes.

The caustic soda and microwave treatments improved the bonding between the plantain fibers and the epoxy resin in the nanocomposite.

Next, the researchers dispersed the nanotubes in ethanol to prevent ‘bunching’ of the tubes in the composite. After that, the plantain fibres, nanotubes and epoxy resin were combined inside a mold. The mold was then compressed with a load for 24 hours at room temperature.
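
For anyone who wants the processing recipe at a glance, the parameters reported in the release can be collected into a small configuration sketch. Python is used here purely as a note-taking format; the structure and key names are mine, while the values come from the text above.

```python
# Processing parameters as reported in the University of Johannesburg release.
# The dictionary structure and key names are mine; only the values come from the text.
plantain_nanocomposite_recipe = {
    "fibre_source": "plantain pseudo-stems (south-western Nigeria)",
    "fibre_extraction": "water retting",
    "alkali_treatment": {"solution": "3% caustic soda", "soak_hours": 4},
    "microwave_treatment": {"frequency_GHz": 2.45, "power_W": 550, "minutes": 2},
    "mix_ratio_by_parts": {"plantain_fibre": 1, "epoxy_resin": 4},
    "mwcnt_loading_wt_percent": 1.0,  # reported optimum: 1% of the fibre-epoxy combination by weight
    "moulding": {"method": "compression under load", "hours": 24, "temperature": "room"},
}

if __name__ == "__main__":
    for step, detail in plantain_nanocomposite_recipe.items():
        print(f"{step}: {detail}")
```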

Food crop vs industrial raw material

Plantain is grown in tropical regions worldwide. This includes Mexico, Florida and Texas in North America; Brazil, Honduras, Guatemala in South and Central America; India, China, and Southeast Asia.

In West and Central Africa, farmers grow plantain in Cameroon, Ghana, Uganda, Rwanda, Nigeria, Cote d’Ivoire and Benin.

Using biomass from major staple food crops can create problems in food security for people with low incomes. In addition, the automobile industry will need access to reliable sources of natural fibres to increase use of natural fibre composites.

In the case of plantains, potential tensions between food security and industrial uses for composite materials are low. This is because plantain farmers discard the pseudo-stems as agro-waste after harvest.

Here’s a link to and a citation for the paper,

Physical, mechanical and thermal properties of high frequency microwave treated plantain (Musa Paradisiaca) fibre/MWCNT hybrid epoxy nanocomposites by Patrick Ehi Imoisili, Kingsley Ukoba, Tien-Chien Jen. Journal of Materials Research and Technology Volume 9, Issue 3, May–June 2020, Pages 4933-4939 DOI: https://doi.org/10.1016/j.jmrt.2020.03.012

This paper is open access.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting), according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
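
To get a feel for why transmitting only a task-relevant subset of signals matters so much for power, here is a rough back-of-the-envelope sketch. The channel counts, sample rates, and bit depths below are illustrative assumptions of mine, not figures from the Stanford paper; the point is simply that streaming raw broadband neural data costs orders of magnitude more bandwidth, and hence radio power, than streaming a small set of decoded features.

```python
# Back-of-the-envelope comparison of wireless data rates for a brain-computer interface.
# All numbers are illustrative assumptions, not figures from the Stanford paper.

def data_rate_bits_per_s(channels: int, samples_per_s: float, bits_per_sample: int) -> float:
    """Raw data rate for a simple uncompressed stream."""
    return channels * samples_per_s * bits_per_sample

# Hypothetical "stream everything" design: broadband voltage from every electrode.
raw = data_rate_bits_per_s(channels=96, samples_per_s=30_000, bits_per_sample=12)

# Hypothetical "stream only what the decoder needs" design: low-rate features
# (e.g. binned threshold crossings) from the same electrodes.
features = data_rate_bits_per_s(channels=96, samples_per_s=50, bits_per_sample=8)

print(f"raw broadband stream  : {raw / 1e6:.1f} Mbit/s")
print(f"decoded-feature stream: {features / 1e3:.1f} kbit/s")
print(f"reduction factor      : {raw / features:,.0f}x")
```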

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent from him on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of technology and human enhancement in the context of gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg’ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out), but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human’, featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained in and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news, even as it raises questions, since it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Hydrogel (a soft, wet material) can memorize, retrieve, and forget information like a human brain

This is fascinating and it’s not a memristor. (You can find out more about memristors here on the Nanowerk website). Getting back to the research, scientists at Hokkaido University (Japan) are training squishy hydrogel to remember, according to a July 28, 2020 news item on phys.org (Note: Links have been removed),

Hokkaido University researchers have found a soft and wet material that can memorize, retrieve, and forget information, much like the human brain. They report their findings in the journal Proceedings of the National Academy of Sciences (PNAS).

The human brain learns things, but tends to forget them when the information is no longer important. Recreating this dynamic memory process in manmade materials has been a challenge. Hokkaido University researchers now report a hydrogel that mimics the dynamic memory function of the brain: encoding information that fades with time depending on the memory intensity.

Hydrogels are flexible materials composed of a large percentage of water—in this case about 45%—along with other chemicals that provide a scaffold-like structure to contain the water. Professor Jian Ping Gong, Assistant Professor Kunpeng Cui and their students and colleagues in Hokkaido University’s Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) are seeking to develop hydrogels that can serve biological functions.

“Hydrogels are excellent candidates to mimic biological functions because they are soft and wet like human tissues,” says Gong. “We are excited to demonstrate how hydrogels can mimic some of the memory functions of brain tissue.”

Caption: The hydrogel’s memorizing-forgetting behavior is achieved based on fast water uptake (swelling) at high temperature and slow water release (shrinking) at low temperature, which is enabled by dynamic bonds in the gel. The swelling part turns from transparent to opaque when cooled, enabling memory retrieval. (Chengtao Yu et al., PNAS, July 27, 2020) Credit: Chengtao Yu et al., PNAS, July 27, 2020

A July 27, 2020 Hokkaido University press release (also on EurekAlert but published July 28, 2020), which originated the news item, investigates just how the scientists trained the hydrogel,

In this study, the researchers placed a thin hydrogel between two plastic plates; the top plate had a shape or letters cut out, leaving only that area of the hydrogel exposed. For example, patterns included an airplane and the word “GEL.” They initially placed the gel in a cold water bath to establish equilibrium. Then they moved the gel to a hot bath. The gel absorbed water into its structure causing a swell, but only in the exposed area. This imprinted the pattern, which is like a piece of information, onto the gel. When the gel was moved back to the cold water bath, the exposed area turned opaque, making the stored information visible, due to what they call “structure frustration.” At the cold temperature, the hydrogel gradually shrank, releasing the water it had absorbed. The pattern slowly faded. The longer the gel was left in the hot water, the darker or more intense the imprint would be, and therefore the longer it took to fade or “forget” the information. The team also showed hotter temperatures intensified the memories.

“This is similar to humans,” says Cui. “The longer you spend learning something or the stronger the emotional stimuli, the longer it takes to forget it.”

The team showed that the memory established in the hydrogel is stable against temperature fluctuation and large physical stretching. More interestingly, the forgetting processes can be programmed by tuning the thermal learning time or temperature. For example, when they applied different learning times to each letter of “GEL,” the letters disappeared sequentially.
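
As a very loose way to picture the “longer learning means slower forgetting” behaviour, here is a toy model in which the imprint grows with time spent in the hot bath and then fades exponentially in the cold bath, so a stronger imprint takes longer to drop below a visibility threshold. This is purely my illustrative sketch, not the swelling and shrinking kinetics reported in the PNAS paper.

```python
import math

# Toy model of the memorize-then-forget behaviour (my illustration only;
# not the actual swelling/shrinking kinetics from the PNAS paper).

def imprint_strength(learning_minutes: float, k_learn: float = 0.1) -> float:
    """Imprint grows with hot-bath time and saturates (arbitrary units)."""
    return 1.0 - math.exp(-k_learn * learning_minutes)

def time_to_forget(strength: float, threshold: float = 0.05, k_forget: float = 0.02) -> float:
    """Minutes in the cold bath until an exponentially fading imprint drops below the visibility threshold."""
    return math.log(strength / threshold) / k_forget

for minutes in (5, 20, 60):
    s = imprint_strength(minutes)
    print(f"learned for {minutes:>2} min -> imprint {s:.2f}, fades in about {time_to_forget(s):.0f} min")
```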

The team used a hydrogel containing materials called polyampholytes or PA gels. The memorizing-forgetting behavior is achieved based on fast water uptake and slow water release, which is enabled by dynamic bonds in the hydrogels. “This approach should work for a variety of hydrogels with physical bonds,” says Gong.

“The hydrogel’s brain-like memory system could be explored for some applications, such as disappearing messages for security,” Cui added.

Here’s a link to and a citation for the paper,

Hydrogels as dynamic memory with forgetting ability by Chengtao Yu, Honglei Guo, Kunpeng Cui, Xueyu Li, Ya Nan Ye, Takayuki Kurokawa, and Jian Ping Gong. PNAS August 11, 2020 117 (32) 18962-18968 DOI: https://doi.org/10.1073/pnas.2006842117 First published July 27, 2020

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Especially activities in the field of artificial intelligence, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
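
The hysteresis idea, that slow-moving ions leave the transistor partly “open” after each excitation so that closely spaced pulses add up, can be pictured with a simple leaky-accumulator sketch. This is my toy illustration of the concept described in the release, not the device model from the Nature Electronics paper.

```python
import math

# Toy leaky-accumulator picture of the neurotransistor's ion hysteresis
# (my illustration of the press-release concept, not the device model from the paper).

def peak_state(pulse_interval_s: float, n_pulses: int = 5,
               pulse_amplitude: float = 1.0, relaxation_time_s: float = 5.0) -> float:
    """Peak accumulated 'openness' after n pulses, with exponential relaxation between pulses."""
    state, peak = 0.0, 0.0
    for _ in range(n_pulses):
        state += pulse_amplitude                                   # each excitation displaces ions
        peak = max(peak, state)
        state *= math.exp(-pulse_interval_s / relaxation_time_s)   # slow return toward rest
    return peak

# Closely spaced excitation accumulates (the device "learns"); widely spaced excitation does not.
print(f"5 pulses, 1 s apart : peak state {peak_state(1.0):.2f}")
print(f"5 pulses, 20 s apart: peak state {peak_state(20.0):.2f}")
```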

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Transforming electronics with metal-breathing bacteria

‘Metal-breathing’ bacteria, eh? A July 28, 2020 news item on Nanowerk announces the research into new materials for electronics (Note: A link has been removed),

When the Shewanella oneidensis bacterium “breathes” in certain metal and sulfur compounds anaerobically, the way an aerobic organism would process oxygen, it produces materials that could be used to enhance electronics, electrochemical energy storage, and drug-delivery devices.

The ability of this bacterium to produce molybdenum disulfide – a material that is able to transfer electrons easily, like graphene – is the focus of research published in Biointerphases (“Synthesis and characterization of molybdenum disulfide nanoparticles in Shewanella oneidensis MR-1 biofilms”) by a team of engineers from Rensselaer Polytechnic Institute.

A July 28, 2020 Rensselaer Polytechnic Institute (RPI) news release (also on EurekAlert) by Torie Wells, which originated the news item, describes the work in more detail,

“This has some serious potential if we can understand this process and control aspects of how the bacteria are making these and other materials,” said Shayla Sawyer, an associate professor of electrical, computer, and systems engineering at Rensselaer.

The research was led by James Rees, who is currently a postdoctoral research associate under the Sawyer group in close partnership and with the support of the Jefferson Project at Lake George — a collaboration between Rensselaer, IBM Research, and The FUND for Lake George that is pioneering a new model for environmental monitoring and prediction. This research is an important step toward developing a new generation of nutrient sensors that can be deployed on lakes and other water bodies.

“We find bacteria that are adapted to specific geochemical or biochemical environments can create, in some cases, very interesting and novel materials,” Rees said. “We are trying to bring that into the electrical engineering world.”

Rees conducted this pioneering work as a graduate student, co-advised by Sawyer and Yuri Gorby, the third author on this paper. Compared with other anaerobic bacteria, one thing that makes Shewanella oneidensis particularly unusual and interesting is that it produces nanowires capable of transferring electrons [emphasis mine].

“That lends itself to connecting to electronic devices that have already been made,” Sawyer said. “So, it’s the interface between the living world and the manmade world that is fascinating.”

Sawyer and Rees also found that, because their electronic signatures can be mapped and monitored, bacterial biofilms could also act as an effective nutrient sensor that could provide Jefferson Project researchers with key information about the health of an aquatic ecosystem like Lake George.

“This groundbreaking work using bacterial biofilms represents the potential for an exciting new generation of ‘living sensors,’ which would completely transform our ability to detect excess nutrients in water bodies in real-time. This is critical to understanding and mitigating harmful algal blooms and other important water quality issues around the world,” said Rick Relyea, director of the Jefferson Project.

Sawyer and Rees plan to continue exploring how to optimally develop this bacterium to harness its wide-ranging potential applications.

“We sometimes get the question with the research: Why bacteria? Or, why bring microbiology into materials science?” Rees said. “Biology has had such a long run of inventing materials through trial and error. The composites and novel structures invented by human scientists are almost a drop in the bucket compared to what biology has been able to do.”

Here’s a link to and a citation for the paper,

Synthesis and characterization of molybdenum disulfide nanoparticles in Shewanella oneidensis MR-1 biofilms by James D. Rees, Yuri A. Gorby, and Shayla M. Sawyer. Biointerphases 15, 041006 (2020) DOI: https://doi.org/10.1116/6.0000199 Published Online: 24 July 2020

This paper is behind a paywall.