Category Archives: human enhancement

Compact and affordable brain-computer interface (BCI)

This device could help people with disabilities regain control of their limbs or provide advance warning of seizures to people with epilepsy, and it’s all based on technology that is a century old.

A January 19, 2022 Skolkovo Institute of Science and Technology (Skoltech) press release (also on EurekAlert) provides details about the device (Note: A link has been removed),

Scientists from Skoltech, South Ural State University, and elsewhere have developed a device for recording brain activity that is more compact and affordable than the solutions currently on the market. With its high signal quality and customizable configuration, the device could help people with restricted mobility regain control of their limbs or provide advance warnings of an impending seizure to patients with epilepsy. The article presenting the device and testing results came out in Experimental Brain Research.

Researchers and medics, as well as engineers working on futuristic gadgets, need tools that measure brain activity. Among their scientific applications are research on sleep, decision-making, memory, and attention. In a clinical setting, these tools allow doctors to assess the extent of damage to an injured brain and monitor coma patients. Further down cyberpunk lane, brain signals can be translated into commands and sent to an external or implanted device, either to make up for lost functions in the body or for plain fun. The commands could range from moving the arm of an exoskeleton worn by a paralyzed person to turning on the TV.

Invented about a century ago, electroencephalographs are devices that read the electrical activity of the brain via small electrodes placed on the scalp. The recorded signals are then used for research, diagnostics, or gadgetry. The problem with the existing systems used in labs and hospitals is they are bulky and/or expensive. And even then, the number of electrodes is limited, resulting in moderate signal quality. Amateur devices tend to be more affordable, but with even poorer sensitivity.

To fill that gap, researchers from South Ural State University, North Carolina State University, and BrainFlow — led by electronic research engineer Ildar Rakhmatulin and Skoltech neuroscientist Professor Mikhail Lebedev — created a device you can build for just $350, compared with the $1,000 or more you would need for currently available analogs. Besides being less expensive, the new electroencephalograph has as many as 24 electrodes or more. Importantly, it also provides research-grade signal quality. At half a centimeter in diameter (about 1/5 inch), the processing unit is compact enough to be worn throughout the day or during the night. The entire device weighs about 150 grams (about 5 ounces).

The researchers have made the instructions for building the device and the accompanying documentation and software openly available on GitHub. The team hopes this will attract more enthusiasts involved in brain-computer interface development, giving an impetus to support and rehabilitation system development, cognitive research, and pushing the geek community to come up with new futuristic gizmos.

“The more convenient and affordable such devices become, the more chances there are this would drive the home lab movement, with some of the research on brain-computer interfaces migrating from large science centers to small-scale amateur projects,” Lebedev said.

“Or we could see people with limited mobility using do-it-yourself interfaces to train, say, a smartphone-based system that would electrically stimulate a biceps to flex the arm at the elbow,” the researcher went on. “That works on someone who has lost control over their arm due to spinal cord trauma or a stroke, where the commands are still generated in the brain — they just don’t reach the limb, and that’s where our little brain-computer interfacing comes in.”

According to the team, such interfaces could also help patients with epilepsy by detecting tell-tale brain activity patterns that indicate when a seizure is imminent, so they can prepare by lying down comfortably in a safe space or attempting to suppress the seizure via electrical stimulation.
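The basic detection idea can be illustrated with a toy sketch in Python (my own illustrative example, not the researchers’ algorithm): slide a window over the recorded signal and raise a flag whenever its root-mean-square (RMS) amplitude crosses a threshold. The window size and threshold below are arbitrary; real seizure-prediction systems rely on far richer features and clinically validated models.

```python
import math

def rms(window):
    """Root-mean-square amplitude of a window of samples (microvolts)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def seizure_warning(samples, window_size=256, threshold_uv=80.0):
    """Scan the signal in non-overlapping windows; return the start index
    of every window whose RMS amplitude exceeds the threshold."""
    alerts = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[start:start + window_size]) > threshold_uv:
            alerts.append(start)
    return alerts

# Synthetic demo: quiet background activity followed by a high-amplitude burst
quiet = [10.0 * math.sin(2 * math.pi * 10 * t / 256) for t in range(512)]
burst = [120.0 * math.sin(2 * math.pi * 3 * t / 256) for t in range(256)]
signal = quiet + burst
print(seizure_warning(signal))  # prints [512] — only the burst is flagged
```

In practice, a flagged index would be mapped to a timestamp and used to trigger the kind of warning described above.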

Here’s a link to and a citation for the paper,

Low-cost brain computer interface for everyday use by Ildar Rakhmatulin, Andrey Parfenov, Zachary Traylor, Chang S. Nam & Mikhail Lebedev. Experimental Brain Research, volume 239, pages 3573–3583 (2021). Issue date: December 2021. Published online: 29 September 2021. DOI:

This paper is behind a paywall.

You can find BrainFlow here and this description on its homepage: “BrainFlow is a library intended to obtain, parse and analyze EEG, EMG, ECG and other kinds of data from biosensors.”

Going blind when your neural implant company flirts with bankruptcy (long read)

This story got me to thinking about what happens when any kind of implant company (pacemaker, deep brain stimulator, etc.) goes bankrupt or is acquired by another company with a different business model.

As I worked on this piece, more issues were raised and the scope expanded to include prosthetics along with implants, while the focus narrowed to ‘neuro’, as in neural implants and neuroprosthetics. At the same time, I found salient examples for this posting in other medical advances, such as gene editing.

In sum, all references to implants and prosthetics are to neural devices and some issues are illustrated with salient examples from other medical advances (specifically, gene editing).

Definitions (for those who find them useful)

The US Food and Drug Administration defines implants and prosthetics,

Medical implants are devices or tissues that are placed inside or on the surface of the body. Many implants are prosthetics, intended to replace missing body parts. Other implants deliver medication, monitor body functions, or provide support to organs and tissues.

As for what constitutes a neural implant/neuroprosthetic, there’s this from Emily Waltz’s January 20, 2020 article (How Do Neural Implants Work? Neural implants are used for deep brain stimulation, vagus nerve stimulation, and mind-controlled prostheses) for the Institute of Electrical and Electronics Engineers (IEEE) Spectrum magazine,

A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.

Now, let’s start with the recent near bankruptcy of a retinal implant company.

The company goes bust (more or less)

From a February 25, 2022 Science Friday (a National Public Radio program) posting/audio file, Note: Links have been removed,

Barbara Campbell was walking through a New York City subway station during rush hour when her world abruptly went dark. For four years, Campbell had been using a high-tech implant in her left eye that gave her a crude kind of bionic vision, partially compensating for the genetic disease that had rendered her completely blind in her 30s. “I remember exactly where I was: I was switching from the 6 train to the F train,” Campbell tells IEEE Spectrum. “I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound.”

It wasn’t her phone battery running out. It was her Argus II retinal implant system powering down. The patches of light and dark that she’d been able to see with the implant’s help vanished.

Terry Byland is the only person to have received this kind of implant in both eyes. He got the first-generation Argus I implant, made by the company Second Sight Medical Products, in his right eye in 2004, and the subsequent Argus II implant in his left 11 years later. He helped the company test the technology, spoke to the press movingly about his experiences, and even met Stevie Wonder at a conference. “[I] went from being just a person that was doing the testing to being a spokesman,” he remembers.

Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn’t know how long that will be the case. “As long as nothing goes wrong, I’m fine,” he says. “But if something does go wrong with it, well, I’m screwed. Because there’s no way of getting it fixed.”

Science Friday and the IEEE [Institute of Electrical and Electronics Engineers] Spectrum magazine collaborated to produce this story. You’ll find the audio files and the transcript of interviews with the authors and one of the implant patients in this February 25, 2022 Science Friday (a National Public Radio program) posting.

Here’s more from the February 15, 2022 IEEE Spectrum article by Eliza Strickland and Mark Harris,

Ross Doerr, another Second Sight patient, doesn’t mince words: “It is fantastic technology and a lousy company,” he says. He received an implant in one eye in 2019 and remembers seeing the shining lights of Christmas trees that holiday season. He was thrilled to learn in early 2020 that he was eligible for software upgrades that could further improve his vision. Yet in the early months of the COVID-19 pandemic, he heard troubling rumors about the company and called his Second Sight vision-rehab therapist. “She said, ‘Well, funny you should call. We all just got laid off,’ ” he remembers. “She said, ‘By the way, you’re not getting your upgrades.’ ”

These three patients, and more than 350 other blind people around the world with Second Sight’s implants in their eyes, find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.

The writers included some information about what happened to the business, from the February 15, 2022 IEEE Spectrum article, Note: Links have been removed,

After Second Sight discontinued its retinal implant in 2019 and nearly went out of business in 2020, a public offering in June 2021 raised US $57.5 million at $5 per share. The company promised to focus on its ongoing clinical trial of a brain implant, called Orion, that also provides artificial vision. But its stock price plunged to around $1.50, and in February 2022, just before this article was published, the company announced a proposed merger with an early-stage biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company, which will focus on developing NPM’s novel implant for drug delivery.

The company’s current leadership declined to be interviewed for this article but did provide an emailed statement prior to the merger announcement. It said, in part: “We are a recognized global leader in neuromodulation devices for blindness and are committed to developing new technologies to treat the broadest population of sight-impaired individuals.”

It’s unclear what Second Sight’s proposed merger means for Argus patients. The day after the merger was announced, Adam Mendelsohn, CEO of Nano Precision Medical, told Spectrum that he doesn’t yet know what contractual obligations the combined company will have to Argus and Orion patients. But, he says, NPM will try to do what’s “right from an ethical perspective.” The past, he added in an email, is “simply not relevant to the new future.”

There may be some alternatives, from the February 15, 2022 IEEE Spectrum article (Note: Links have been removed),

Second Sight may have given up on its retinal implant, but other companies still see a need—and a market—for bionic vision without brain surgery. Paris-based Pixium Vision is conducting European and U.S. feasibility trials to see if its Prima system can help patients with age-related macular degeneration, a much more common condition than retinitis pigmentosa.

Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, says the Prima implant is smaller, simpler, and cheaper than the Argus II. But he argues that Prima’s superior image resolution has the potential to make Pixium Vision a success. “If you provide excellent vision, there will be lots of patients,” he tells Spectrum. “If you provide crappy vision, there will be very few.”

Some clinicians involved in the Argus II work are trying to salvage what they can from the technology. Gislin Dagnelie, an associate professor of ophthalmology at Johns Hopkins University School of Medicine, has set up a network of clinicians who are still working with Argus II patients. The researchers are experimenting with a thermal camera to help users see faces, a stereo camera to filter out the background, and AI-powered object recognition. These upgrades are unlikely to result in commercial hardware today but could help future vision prostheses.

The writers have carefully balanced this piece so it is not an outright condemnation of the companies (Second Sight and Nano Precision), from the February 15, 2022 IEEE Spectrum article,

Failure is an inevitable part of innovation. The Argus II was an innovative technology, and progress made by Second Sight may pave the way for other companies that are developing bionic vision systems. But for people considering such an implant in the future, the cautionary tale of Argus patients left in the lurch may make a tough decision even tougher. Should they take a chance on a novel technology? If they do get an implant and find that it helps them navigate the world, should they allow themselves to depend upon it?

Abandoning the Argus II technology—and the people who use it—might have made short-term financial sense for Second Sight, but it’s a decision that could come back to bite the merged company if it does decide to commercialize a brain implant, believes Doerr.

For anyone curious about retinal implant technology (specifically the Argus II), I have a description in a June 30, 2015 posting.

Speculations and hopes for neuroprosthetics

The field of neuroprosthetics is very active. In a February 21, 2022 Nanowerk Spotlight article, Dr Arthur Saniotis and Prof Maciej Henneberg speculate about the possibilities of a neuroprosthetic that may one day merge with neurons,

For over a generation several types of medical neuroprosthetics have been developed, which have improved the lives of thousands of individuals. For instance, cochlear implants have restored functional hearing in individuals with severe hearing impairment.

Further advances in motor neuroprosthetics are attempting to restore motor functions in tetraplegic, limb loss and brain stem stroke paralysis subjects.

Currently, scientists are working on various kinds of brain/machine interfaces [BMI] in order to restore movement and partial sensory function. One such device is the ‘Ipsihand’ that enables movement of a paralyzed hand. The device works by detecting the recipient’s intention in the form of electrical signals, thereby triggering hand movement.

Another recent development is the 12 month BMI gait neurorehabilitation program that uses a visual-tactile feedback system in combination with a physical exoskeleton and EEG operated AI actuators while walking. This program has been tried on eight patients with reported improvements in lower limb movement and somatic sensation.

Surgically placed electrode implants have also reduced tremor symptoms in individuals with Parkinson’s disease.

Although neuroprosthetics have provided various benefits they do have their problems. Firstly, electrode implants to the brain are prone to degradation, necessitating new implants after a few years. Secondly, as in any kind of surgery, implanted electrodes can cause post-operative infection and glial scarring. Furthermore, one study showed that the neurobiological efficacy of an implant is dependent on the rate of speed of its insertion.

But what if humans designed a neuroprosthetic, which could bypass the medical glitches of invasive neuroprosthetics? However, instead of connecting devices to neural networks, this neuroprosthetic would directly merge with neurons – a novel step. Such a neuroprosthetic could radically optimize treatments for neurodegenerative disorders and brain injuries, and possibly cognitive enhancement [emphasis mine].

A team of three international scientists has recently designed a nanobased neuroprosthetic, which was published in Frontiers in Neuroscience (“Integration of Nanobots Into Neural Circuits As a Future Therapy for Treating Neurodegenerative Disorders“). [open access paper published in 2018]

An interesting feature of their nanobot neuroprosthetic is that it has been inspired from nature by way of endomyccorhizae – a type of plant/fungus symbiosis, which is over four hundred million years old. During endomyccorhizae, fungi use numerous threadlike projections called mycelium that penetrate plant roots, forming colossal underground networks with nearby root systems. During this process fungi take up vital nutrients while protecting plant roots from infections – a win-win relationship. Consequently, the nano-neuroprosthetic has been named ‘endomyccorhizae ligand interface’, or ‘ELI’ for short.

The Spotlight article goes on to describe how these nanobots might function. As for the possibility of cognitive enhancement, I wonder if that might come to be described as a form of ‘artificial intelligence’.

(Dr Arthur Saniotis and Prof Maciej Henneberg are both from the Department of Anthropology, Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Polish Academy of Sciences; and Biological Anthropology and Comparative Anatomy Research Unit, Adelaide Medical School, University of Adelaide. Abdul-Rahman Sawalma who’s listed as an author on the 2018 paper is from the Palestinian Neuroscience Initiative, Al-Quds University, Beit Hanina, Palestine.)

Saniotis and Henneberg’s Spotlight article presents an optimistic view of neuroprosthetics. It seems telling that they cite cochlear implants as a success story when it is viewed by many as ethically fraught (see the Cochlear implant Wikipedia entry; scroll down to ‘Criticism and controversy’).

Ethics and your implants

This is from an April 6, 2015 article by Luc Henry for Technologist,

Technologist: What are the potential consequences of accepting the “augmented human” in society?

Gregor Wolbring: There are many that we might not even envision now. But let me focus on failure and obsolescence [emphasis mine], two issues that are rarely discussed. What happens when the mechanism fails in the middle of an action? Failure has hazardous consequences, but obsolescence has psychological ones. … The constant surgical intervention needed to update the hardware may not be feasible. A person might feel obsolete if she cohabits with others using a newer version.

T. Are researchers working on prosthetics sometimes disconnected from reality?

G. W. Students engaged in the development of prosthetics have to learn how to think in societal terms and develop a broader perspective. Our education system provides them with a fascination for clever solutions to technological challenges but not with tools aiming at understanding the consequences, such as whether their product might increase or decrease social justice.

Wolbring is a professor at the University of Calgary’s Cumming School of Medicine (profile page) who writes on social issues to do with human enhancement/augmentation. As well,

Some of his areas of engagement are: ability studies, including governance of ability expectations; disability studies; governance of emerging and existing sciences and technologies (e.g., nanoscale science and technology, molecular manufacturing, aging, longevity and immortality, cognitive sciences, neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors); impact of science and technology on marginalized populations, especially people with disabilities; the governance of bodily enhancement; sustainability issues; EcoHealth; resilience; ethics issues; health policy issues; human rights; and sport.

He also maintains his own website here.

Not just startups

I’d classify Second Sight as a tech startup, and startups have a high rate of failure, which may not have been clear to the patients who received the implants. Clinical trials can present problems too, as this excerpt from my September 17, 2020 posting notes,

This October 31, 2017 article by Emily Underwood for Science was revelatory,

“In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.”

Symbiosis can be another consequence, as mentioned in my September 17, 2020 posting,

From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence. [emphasis mine]

It’s complicated

For a lot of people these devices are or could be life-changing. At the same time, there are a number of different issues related to implants/prosthetics; the following is not an exhaustive list. As Wolbring notes, issues that we can’t begin to imagine now are likely to emerge as these medical advances become more ubiquitous.


Assistive technologies are almost always portrayed as helpful. For example, a cochlear implant gives people without hearing the ability to hear. The assumption is that this is always a good thing—unless you’re a deaf person who wants to define the problem a little differently. Who gets to decide what is good and ‘normal’ and what is desirable?

While the cochlear implant is the most extreme example I can think of, there are variations of these questions throughout the ‘disability’ communities.

Also, as Wolbring notes in his interview with Technologist, the education system tends to favour technological solutions that don’t take social issues into account. Wolbring cites social justice issues when he mentions failure and obsolescence.

Technical failures and obsolescence

The IEEE Spectrum story, excerpted earlier in this posting, opened with a striking example of a technical failure at an awkward moment: a blind woman who depends on her retinal implant loses all sight as she maneuvers through a subway station in New York City.

Aside from being an awful way to find out the company supplying and supporting your implant is in serious financial trouble and can’t offer assistance or repair, the failure offers a preview of what could happen as implants and prosthetics become more commonly used.

Keeping up/fomo (fear of missing out)/obsolescence

It used to be called ‘keeping up with the Joneses’: the practice of comparing yourself and your worldly goods to someone else’s and then trying to equal or better them. Usually, people want to have more and better than the mythical Joneses.

These days, the phenomenon (which has been expanded to include social networking) is better known as ‘fomo’ or fear of missing out (see the Fear of missing out Wikipedia entry).

Whatever you want to call it, humanity’s competitive nature can be seen where technology is concerned. When I worked in technology companies, I noticed that hardware and software were sometimes purchased for features that were effectively useless to us. But, not upgrading to a newer version was unthinkable.

Call it fomo or ‘keeping up with the Joneses’, it’s a powerful force and when people (and even companies) miss out or can’t keep up, it can lead to a sense of inferiority in the same way that having an obsolete implant or prosthetic could.

Social consequences

Could there be a neural implant/neuroprosthetic divide? There is already a digital divide (from its Wikipedia entry),

The digital divide is a gap between those who have access to new technology and those who do not … people without access to the Internet and other ICTs [information and communication technologies] are at a socio-economic disadvantage because they are unable or less able to find and apply for jobs, shop and sell online, participate democratically, or research and learn.

After reading Wolbring’s comments, it’s not hard to imagine a neural implant/neuroprosthetic divide with its attendant psychological and social consequences.

What kind of human am I?

There are other issues as noted in my September 17, 2020 posting. I’ve already mentioned ‘patient 6’, the woman who developed a symbiotic relationship with her brain/computer interface. This is how the relationship ended,

… He [Frederic Gilbert, ethicist] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

Above human

The possibility that implants will not merely restore or endow someone with ‘standard’ sight or hearing or motion or … but will augment or improve on nature was broached in this May 2, 2013 posting, More than human—a bionic ear that extends hearing beyond the usual frequencies and is one of many in the ‘Human Enhancement’ category on this blog.

More recently, Hugh Herr, an Associate Professor at the Massachusetts Institute of Technology (MIT), leader of the Biomechatronics research group at MIT’s Media Lab, a double amputee, and prosthetic enthusiast, starred in the February 23, 2022 broadcast of ‘Augmented’ on the Public Broadcasting Service (PBS) science programme, Nova.

I found ‘Augmented’ a little offputting as it gave every indication of being an advertisement for Herr’s work in the form of a hero’s journey. I was not able to watch more than 10 minutes. This preview gives you a pretty good idea of what it was like, although the part in ‘Augmented’ where he says he’d like to be a cyborg hasn’t been included,

At a guess, there were a few talking heads (taking up 10%–20% of the running time) who provided some cautionary words to counterbalance the enthusiasm in the rest of the programme. It’s a standard approach designed to give the impression that both sides of a question are being recognized. The cautionary material is usually inserted past the halfway mark while leaving several minutes at the end for returning to the more optimistic material.

In a February 2, 2010 posting I have excerpts from an article featuring quotes from Herr that I still find startling,

Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses [ETA March 23, 2022: an updated version of the article is now on] delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

Herr is not the only one who favours prosthetics (also from the Hochman article),

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

My September 17, 2020 posting touches on more ethical and social issues including some of those surrounding consumer neurotechnologies or brain-computer interfaces (BCI). Unfortunately, I don’t have space for these issues here.

As for Paul Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, it has since been updated.

Money makes the world go around

Money and business practices have been indirectly referenced (for the most part) up to now in this posting. The February 15, 2022 IEEE Spectrum article and Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, cover two aspects of the money angle.

In the IEEE Spectrum article, a tech start-up company, Second Sight, ran into financial trouble and is acquired by a company that has no plans to develop Second Sight’s core technology. The people implanted with the Argus II technology have been stranded as were ‘patient 6’ and others participating in the clinical trial described in the July 24, 2019 article by Liam Drew for Nature Outlook: The brain mentioned earlier in this posting.

I don’t know anything about the business bankruptcy mentioned in the Drew article but one of the business problems described in the IEEE Spectrum article suggests that Second Sight was founded before answering a basic question, “What is the market size for this product?”

On 18 July 2019, Second Sight sent Argus patients a letter saying it would be phasing out the retinal implant technology to clear the way for the development of its next-generation brain implant for blindness, Orion, which had begun a clinical trial with six patients the previous year. …

“The leadership at the time didn’t believe they could make [the Argus retinal implant] part of the business profitable,” Greenberg [Robert Greenberg, Second Sight co-founder] says. “I understood the decision, because I think the size of the market turned out to be smaller than we had thought.”


The question of whether a medical procedure or medicine can be profitable (or should the question be sufficiently profitable?) was referenced in my April 26, 2019 posting in the context of gene editing and personalized medicine.

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs in his May 23, 2018 piece for (Note: A link has been removed),

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

The ‘Glybera’ story in my July 4, 2019 posting (scroll down about 40% of the way) highlights the issue with “recurring revenue versus chronic therapies,”

Kelly Crowe in a November 17, 2018 article for the CBC (Canadian Broadcasting Corporation) news writes about Glybera,

It is one of this country’s great scientific achievements.

The first drug ever approved that can fix a faulty gene.

It’s called Glybera, and it can treat a painful and potentially deadly genetic disorder with a single dose — a genuine made-in-Canada medical breakthrough.

But most Canadians have never heard of it.

Here’s my summary (from the July 4, 2019 posting),

It cost $1M for a single treatment and that single treatment is good for at least 10 years.

Pharmaceutical companies make their money from repeated use of their medicaments. Glybera required only one treatment, so the company priced it according to what it would have earned from repeated use: $100,000 per year over a 10-year period. The company was not able to persuade governments and/or individuals to pay the cost.

In the end, 31 people got the treatment, most of them received it for free through clinical trials.
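The pricing logic in that summary reduces to a one-line calculation (figures are from the summary above, not the company’s actual pricing model):

```python
# Glybera pricing as described in the summary: a one-dose therapy priced
# as if it were a chronic treatment billed annually over the period the
# single dose remains effective.
annual_equivalent = 100_000  # dollars per year, per the summary
effective_years = 10         # the single treatment is good for at least 10 years

one_dose_price = annual_equivalent * effective_years
print(f"${one_dose_price:,}")  # → $1,000,000
```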

For rich people only?

Megan Devlin’s March 8, 2022 article for the Daily Hive announces a major research investment into medical research (Note: A link has been removed),

Vancouver [Canada] billionaire Chip Wilson revealed Tuesday [March 8, 2022] that he has a rare genetic condition that causes his muscles to waste away, and announced he’s spending $100 million on research to find a cure.

His condition is called facio-scapulo-humeral muscular dystrophy, or FSHD for short. It progresses rapidly in some people and more slowly in others, but is characterized by progressive muscle weakness starting in the face, the neck, and shoulders, and later the lower body.

“I’m out for survival of my own life,” Wilson said.

“I also have the resources to do something about this which affects so many people in the world.”

Wilson hopes the $100 million will produce a cure or muscle-regenerating treatment by 2027.

“This could be one of the biggest discoveries of all time, for humankind,” Wilson said. “Most people lose muscle, they fall, and they die. If we can keep muscle as we age this can be a longevity drug like we’ve never seen before.”

According to, FSHD affects between four and 10 people out of every 100,000 [emphasis mine]. Right now, therapies are limited to exercise and pain management. There is no way to stall or reverse the disease’s course.

Wilson is best known for founding athleisure clothing company Lululemon. He also owns the most expensive home in British Columbia, a $73 million mansion in Vancouver’s Kitsilano neighbourhood.

Let’s see what the numbers add up to,

4 – 10 people out of 100,000

40 – 100 people out of 1M

1,200 – 3,000 people out of 30M (let’s say this is Canada’s population)

12,000 – 30,000 people out of 300M (let’s say this is the US’s population)

44,600 – 111,500 out of 1.115B (let’s say this is China’s population)

The rough total comes to 57,800 to 144,500 people across three countries with a combined population of 1.445B. Given how business currently operates, it seems unlikely that any company will want to offer Wilson’s hoped-for medical therapy, although he and possibly others may benefit from a clinical trial.
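The arithmetic can be reproduced with a short script (the prevalence range comes from the article quoted earlier; the population figures are this posting’s deliberately rough assumptions, not census data):

```python
# Rough market-size estimate for an FSHD therapy, using the quoted
# prevalence range (4-10 cases per 100,000) and round population figures.
PREVALENCE_LOW = 4 / 100_000
PREVALENCE_HIGH = 10 / 100_000

populations = {
    "Canada": 30_000_000,
    "United States": 300_000_000,
    "China": 1_115_000_000,  # the posting's round figure, not a census count
}

total_low = total_high = 0
for country, pop in populations.items():
    low, high = pop * PREVALENCE_LOW, pop * PREVALENCE_HIGH
    total_low += low
    total_high += high
    print(f"{country}: {low:,.0f} - {high:,.0f}")

print(f"Total: {total_low:,.0f} - {total_high:,.0f} "
      f"of {sum(populations.values()):,}")
# → Total: 57,800 - 144,500 of 1,445,000,000
```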

Should profit or wealth be considerations?

The stories about the patients with the implants and the patients who need Glybera are heartbreaking and point to a question not often asked when medical therapies and medications are developed. Is the profit model the best choice and, if so, how much profit?

I have no answer to that question, but I wish it were asked by medical researchers and policy makers.

As for wealthy people dictating the direction for medical research, I don’t have answers there either. I hope the research will yield applications and/or valuable information for more than Wilson’s disease.

It’s his money after all

Wilson calls his new venture SolveFSHD. It doesn’t seem to be affiliated with any university or biomedical science organization, and it’s not clear how the money will be awarded (no programmes, no application procedure, no panel of experts). There are three people on the team: Eva R. Chin, scientist and executive director; Chip Wilson, SolveFSHD founder/funder and FSHD patient; and Neil Camarta, engineer, executive (fossil fuels and clean energy), and FSHD patient. There’s also a Twitter feed (presumably for the latest updates):

Perhaps unrelated but intriguing is news about a proposed new building in Kenneth Chan’s March 31, 2022 article for the Daily Hive,

Low Tide Properties, the real estate arm of Lululemon founder Chip Wilson [emphasis mine], has submitted a new development permit application to build a 148-ft-tall, eight-storey, mixed-use commercial building in the False Creek Flats of Vancouver.

The proposal, designed by local architectural firm Musson Cattell Mackey Partnership, calls for 236,000 sq ft of total floor area, including 105,000 sq ft of general office space, 102,000 sq ft of laboratory space [emphasis mine], and 5,000 sq ft of ground-level retail space. An outdoor amenity space for building workers will be provided on the rooftop.

[next door] The 2001-built, five-storey building at 1618 Station Street immediately to the west of the development site is also owned by Low Tide Properties [emphasis mine]. The Ferguson, the name of the existing building, contains about 79,000 sq ft of total floor area, including 47,000 sq ft of laboratory space and 32,000 sq ft of general office space. Biotechnology company Stemcell technologies [STEMCELL Technologies] is the anchor tenant [emphasis mine].

I wonder if this proposed new building will house SolveFSHD and perhaps other FSHD-focused enterprises. The proximity of STEMCELL Technologies could be quite convenient. In any event, $100M will buy a lot (pun intended).

The end

Issues I’ve described here in the context of neural implants/neuroprosthetics and cutting-edge medical advances are standard problems, not specific to these technologies/treatments:

  • What happens when the technology fails (hopefully not at a critical moment)?
  • What happens when your supplier goes out of business or discontinues the products you purchase from them?
  • How much does it cost?
  • Who can afford the treatment/product? Will it only be for rich people?
  • Will this technology/procedure/etc. exacerbate or create new social tensions between social classes, cultural groups, religious groups, races, etc.?

Of course, having your neural implant fail suddenly in the middle of a New York City subway station seems a substantively different experience than having your car break down on the road.

There are, of course, the issues we can’t yet envision (as Wolbring notes), and there are issues such as symbiotic relationships with our implants and/or feeling that you are “above human.” Whether symbiosis and ‘implant/prosthetic superiority’ will affect more than a small number of people or become major issues is still to be determined.

There’s a lot to be optimistic about where new medical research and advances are concerned but I would like to see more thoughtful coverage in the media (e.g., news programmes and documentaries like ‘Augmented’) and more thoughtful comments from medical researchers.

Of course, the biggest issue I’ve raised here is about the current business models for health care products, where profit is valued over people’s health and well-being. It’s a big question and I don’t see any definitive answers, but the question put me in mind of this quote (from a September 22, 2020 obituary for US Supreme Court Justice Ruth Bader Ginsburg by Irene Monroe for Curve),

Ginsburg’s advocacy for justice was unwavering and showed it, especially with each oral dissent. In another oral dissent, Ginsburg quoted a familiar Martin Luther King Jr. line, adding her coda: “‘The arc of the universe is long, but it bends toward justice,’” but only “if there is a steadfast commitment to see the task through to completion.” …

Martin Luther King Jr. popularized and paraphrased the quote (from a January 18, 2018 article by Mychal Denzel Smith for Huffington Post),

His use of the quote is best understood by considering his source material. “The arc of the moral universe is long, but it bends toward justice” is King’s clever paraphrasing of a portion of a sermon delivered in 1853 by the abolitionist minister Theodore Parker. Born in Lexington, Massachusetts, in 1810, Parker studied at Harvard Divinity School and eventually became an influential transcendentalist and minister in the Unitarian church. In that sermon, Parker said: “I do not pretend to understand the moral universe. The arc is a long one. My eye reaches but little ways. I cannot calculate the curve and complete the figure by experience of sight. I can divine it by conscience. And from what I see I am sure it bends toward justice.”

I choose to keep faith that people will get the healthcare products they need and that all of us need to keep working at making access more fair.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”
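The waveform-to-spectrogram step Van Horn describes can be sketched with a naive short-time Fourier transform. This toy uses only the standard library and a synthetic test tone standing in for a bird recording; Merlin’s actual system is a deep image classifier trained on eBird recordings, which this does not attempt to reproduce:

```python
import cmath
import math

def spectrogram(samples, frame_size=256, hop=128):
    """Naive short-time Fourier transform: slice the waveform into
    overlapping frames and take the magnitude of each frame's DFT.
    Returns a list of rows, one per time frame, of frequency-bin magnitudes."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        # DFT magnitudes for the first half of the bins (real-valued signal)
        row = []
        for k in range(frame_size // 2):
            acc = sum(x * cmath.exp(-2j * math.pi * k * n / frame_size)
                      for n, x in enumerate(frame))
            row.append(abs(acc))
        frames.append(row)
    return frames

# A 440 Hz test tone sampled at 8 kHz stands in for a bird recording.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(2048)]
spec = spectrogram(tone)

# The energy concentrates in the bin nearest 440 Hz
# (bin k covers k * rate / frame_size Hz, so 440 Hz lands near bin 14).
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin, round(peak_bin * rate / 256))  # → 14 438
```

The resulting “picture of a bird’s sound” is what an image-classification model would then be trained on.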

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Use Gene Editing to Make Better Babies (a February 17, 2022 livestreamed debate from 05:00 PM − 06:30 PM EST)

I have high hopes for this debate on gene edited babies. Intelligence Squared US convenes good debates. (I watched their ‘de-extinction’ debate back in 2019, which coincidentally, featured George Church, one of the debaters in this event.) Not ‘good’ in that I necessarily agree or am interested in the topics but good as in thoughtful. Here’s more from the organization’s mission on their What is IQ2US? webpage,

A nonpartisan, nonprofit organization, Intelligence Squared U.S. addresses a fundamental problem in America: the extreme polarization of our nation and our politics.

Our mission is to restore critical thinking, facts, reason, and civility to American public discourse.

More about the upcoming debate can be found on the Use Gene Editing to Make Better Babies event page,

Use Gene Editing to Make Better Babies
Hosted By John Donvan

Thursday, February 17, 2022
05:00 PM − 06:30 PM EST

A genetic disease runs in your family. Your doctor tells you that, should you wish to have a child, that child is likely to also carry the disease. But a new gene-editing technology could change your fate. It could ensure that your baby is — and remains — healthy. Even more, it could potentially make sure your grandchildren are also free of the disease. What do you do? Now, imagine it’s not a rare genetic disorder, but general illness, or eye color, or cognitive ability, or athleticism. Do you opt into this new world of genetically edited humans? And what if it’s not just you. What if your friends, neighbors, and colleagues are also embracing this genetic revolution? Right now, science doesn’t give you that choice. But huge advancements in CRISPR [clustered regularly interspaced short palindromic repeats] technology are making human gene editing a reality. In fact, in 2018, a Chinese scientist announced the first genetically modified babies; twin girls made to resist HIV, smallpox, and malaria. The promise of this technology is clear. But gene editing is not without its perils. Its critics say the technology is destined to exacerbate inequality, pressure all parents (and nations) into editing their children to stay competitive, and meddle with the most basic aspect of our humanity. In this context, we ask the question: Should we use gene editing to make better babies?

Main Points

The use of gene editing allows couples to have children when that option might otherwise be unavailable to them. It also leaves less to chance during the pregnancy.

Gene editing will allow for babies to be born with reduced or eliminated chances of inheriting and passing on genes linked to diseases. We have a moral imperative to use technology that will improve the quality of life.

It is only a matter of time before gene editing becomes a widespread technology, potentially used by competitors and rivals on the international stage. If we have the technology, we should use it to our advantage to remain competitive.

The use of gene editing to create “better” outcomes in children will inherently create social stratification based on any gene editing, likely reflecting existing socioeconomic status. Additionally, the term ‘better’ is arbitrary and potentially short-sighted and dangerous.

Currently, there exist reasonable alternatives to gene editing for every condition for which gene editing can be used. 

The technology is still developing, and the long-term effects of any gene-editing could be potentially dangerous with consequences echoing throughout the gene environment. 

A February 8, 2022 Intelligence Squared U.S. news release about the upcoming debate (received via email) provides details about the debaters,


* George Church, Geneticist & Founder, Personal Genome Project 
George Church is one of the nation’s leading geneticists and scholars. He is a professor of genetics at Harvard Medical School and MIT. In 1984, he developed the first direct genomic sequencing method, which resulted in the first genome sequence. He also helped initiate the Human Genome Project in 1984 and the Personal Genome Project in 2005. Church also serves as the director of the National Institutes of Health Center of Excellence in Genomic Science.  

* Amy Webb, Futurist & Author, “The Genesis Machine”  
Amy Webb is an award-winning author and futurist. She is the founder and CEO of the Future Today Institute and was named one of five women changing the world by Forbes. Her new book, “The Genesis Machine,” explores the future of synthetic biology, including human gene editing. Webb is a professor of strategic foresight at New York University’s Stern School of Business and has been elected a life member of the Council on Foreign Relations.  


* Marcy Darnovsky, Policy Advocate & Executive Director, Center for Genetics and Society 
Marcy Darnovsky is a policy advocate and one of the most prominent voices on the politics of human biotechnology. As executive director of the Center for Genetics and Society, Darnovsky is focused on the social justice and public interest implications of gene editing. This work is informed by her background as an organizer and advocate in a range of environmental and progressive political movements.    

* Françoise Baylis, Philosopher & Author, “Altered Inheritance”  
Françoise Baylis is a philosopher whose innovative work in bioethics, at the intersection of policy and practice, has stretched the very boundaries of the field. She is the author of “Altered Inheritance: CRISPR and the Ethics of Human Genome Editing,” which explores the scientific, ethical, and political implications of human genome editing. Baylis is a research professor at Dalhousie University and a fellow of the Canadian Academy of Health Sciences. In 2017, she was awarded the Canadian Bioethics Society Lifetime Achievement Award. 

Getting back to the Use Gene Editing to Make Better Babies event page, there are a few options,

Request a Ticket

Have a question? Ask us

There’s also an option to Vote For or Against the Motion but you’ll have to go to the Use Gene Editing to Make Better Babies event page.

Two of the debaters have been mentioned on this blog before, George Church and Françoise Baylis. There are several references to Church including this mention with regard to Dr. He Jiankui and his CRISPR twins (July 28, 2020 posting). Françoise Baylis features in four 2019 postings with the most recent being this October 17, 2019 piece.

For anyone curious about the ‘de-extinction’ debate, it was described here in a January 18, 2019 posting prior to the event.

Soft, inflatable, and potentially low-cost neuroprosthetic hand?

An August 16, 2021 news item on ScienceDaily describes a new type of neuroprosthetic,

For the more than 5 million people in the world who have undergone an upper-limb amputation, prosthetics have come a long way. Beyond traditional mannequin-like appendages, there is a growing number of commercial neuroprosthetics — highly articulated bionic limbs, engineered to sense a user’s residual muscle signals and robotically mimic their intended motions.

But this high-tech dexterity comes at a price. Neuroprosthetics can cost tens of thousands of dollars and are built around metal skeletons, with electrical motors that can be heavy and rigid.

Now engineers at MIT [Massachusetts Institute of Technology] and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities, such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well as — and in some cases better than — those with more rigid neuroprosthetics.

Here’s a video demonstration,

An August 16, 2021 MIT news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers found the prosthetic, designed with a system for tactile feedback, restored some primitive sensation in a volunteer’s residual limb. The new design is also surprisingly durable, quickly recovering after being struck with a hammer or run over with a car.

The smart hand is soft and elastic, and weighs about half a pound. Its components total around $500 — a fraction of the weight and material cost associated with more rigid smart limbs.

“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” says Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering at MIT. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

Zhao and his colleagues have published their work today [August 16, 2021] in Nature Biomedical Engineering. Co-authors include MIT postdoc Shaoting Lin, along with Guoying Gu, Xiangyang Zhu, and collaborators at Shanghai Jiao Tong University in China.

Big Hero hand

The team’s pliable new design bears an uncanny resemblance to a certain inflatable robot in the animated film “Big Hero 6.” Like the squishy android, the team’s artificial hand is made from soft, stretchy material — in this case, the commercial elastomer EcoFlex. The prosthetic comprises five balloon-like fingers, each embedded with segments of fiber, similar to articulated bones in actual fingers. The bendy digits are connected to a 3-D-printed “palm,” shaped like a human hand.

Rather than controlling each finger using mounted electrical motors, as most neuroprosthetics do, the researchers used a simple pneumatic system to precisely inflate fingers and bend them in specific positions. This system, including a small pump and valves, can be worn at the waist, significantly reducing the prosthetic’s weight.

Lin developed a computer model to relate a finger’s desired position to the corresponding pressure a pump would have to apply to achieve that position. Using this model, the team developed a controller that directs the pneumatic system to inflate the fingers, in positions that mimic five common grasps, including pinching two and three fingers together, making a balled-up fist, and cupping the palm.

The pneumatic system receives signals from EMG sensors — electromyography sensors that measure electrical signals generated by motor neurons to control muscles. The sensors are fitted at the prosthetic’s opening, where it attaches to a user’s limb. In this arrangement, the sensors can pick up signals from a residual limb, such as when an amputee imagines making a fist.

The team then used an existing algorithm that “decodes” muscle signals and relates them to common grasp types. They used this algorithm to program the controller for their pneumatic system. When an amputee imagines, for instance, holding a wine glass, the sensors pick up the residual muscle signals, which the controller then translates into corresponding pressures. The pump then applies those pressures to inflate each finger and produce the amputee’s intended grasp.
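The control loop just described — EMG feature → decoded grasp → per-finger pump pressures — can be sketched as follows. Every name and number here is illustrative; the paper’s actual decoder, grasp set, and pressure values are not given in the news release:

```python
# Toy version of the control pipeline described above: an EMG feature is
# "decoded" to a grasp type, which maps to per-finger inflation pressures.
# All labels and pressure values are hypothetical, not from the MIT paper.

# Per-finger pressures (kPa), ordered (thumb, index, middle, ring, pinky).
GRASP_PRESSURES = {
    "open_palm": (0, 0, 0, 0, 0),
    "pinch":     (60, 60, 0, 0, 0),
    "tripod":    (60, 60, 60, 0, 0),
    "fist":      (80, 80, 80, 80, 80),
}

def decode_grasp(emg_feature):
    """Stand-in for the real EMG decoder: threshold a single scalar
    feature (e.g. mean rectified EMG amplitude) into a grasp label."""
    if emg_feature < 0.1:
        return "open_palm"
    elif emg_feature < 0.4:
        return "pinch"
    elif emg_feature < 0.7:
        return "tripod"
    return "fist"

def controller(emg_feature):
    """Map a decoded grasp to the pressures the pump should apply."""
    return GRASP_PRESSURES[decode_grasp(emg_feature)]

print(controller(0.9))   # strong contraction → fist: (80, 80, 80, 80, 80)
print(controller(0.05))  # relaxed → open palm: (0, 0, 0, 0, 0)
```

In the real device, the decoding step is a trained classifier over multi-channel EMG rather than a single threshold, and a model relates each target finger position to a pump pressure.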

Going a step further in their design, the researchers looked to enable tactile feedback — a feature that is not incorporated in most commercial neuroprosthetics. To do this, they stitched to each fingertip a pressure sensor, which when touched or squeezed produces an electrical signal proportional to the sensed pressure. Each sensor is wired to a specific location on an amputee’s residual limb, so the user can “feel” when the prosthetic’s thumb is pressed, for example, versus the forefinger.

Good grip

To test the inflatable hand, the researchers enlisted two volunteers, each with upper-limb amputations. Once outfitted with the neuroprosthetic, the volunteers learned to use it by repeatedly contracting the muscles in their arm while imagining making five common grasps.

After completing this 15-minute training, the volunteers were asked to perform a number of standardized tests to demonstrate manual strength and dexterity. These tasks included stacking checkers, turning pages, writing with a pen, lifting heavy balls, and picking up fragile objects like strawberries and bread. They repeated the same tests using a more rigid, commercially available bionic hand and found that the inflatable prosthetic was as good, or even better, at most tasks, compared to its rigid counterpart.

One volunteer was also able to intuitively use the soft prosthetic in daily activities, for instance to eat food like crackers, cake, and apples, and to handle objects and tools, such as laptops, bottles, hammers, and pliers. This volunteer could also safely manipulate the squishy prosthetic, for instance to shake someone’s hand, touch a flower, and pet a cat.

In a particularly exciting exercise, the researchers blindfolded the volunteer and found he could discern which prosthetic finger they poked and brushed. He was also able to “feel” bottles of different sizes that were placed in the prosthetic hand, and lifted them in response. The team sees these experiments as a promising sign that amputees can regain a form of sensation and real-time control with the inflatable hand.

The team has filed a patent on the design, through MIT, and is working to improve its sensing and range of motion.

“We now have four grasp types. There can be more,” Zhao says. “This design can be improved, with better decoding technology, higher-density myoelectric arrays, and a more compact pump that could be worn on the wrist. We also want to customize the design for mass production, so we can translate soft robotic technology to benefit society.”

Here’s a link to and a citation for the paper,

A soft neuroprosthetic hand providing simultaneous myoelectric control and tactile feedback by Guoying Gu, Ningbin Zhang, Haipeng Xu, Shaoting Lin, Yang Yu, Guohong Chai, Lisen Ge, Houle Yang, Qiwen Shao, Xinjun Sheng, Xiangyang Zhu, Xuanhe Zhao. Nature Biomedical Engineering (2021) DOI: Published: 16 August 2021

This paper is behind a paywall.

Asparagus spinal cord?

I love this picture,

Pelling in the kitchen with asparagus, the veggie that inspired his work on spinal cord injuries. Credit: Andrew Pelling

The image accompanies Cari Shane’s August 4, 2021 article for Atlas Obscura’s Gastro Obscura about Andrew Pelling and his asparagus-based scaffolds for spinal cord stem cells (Note: A link has been removed),

Around 10 years ago, Pelling [Dr. Andrew Pelling at the University of Ottawa], a biophysicist, started thinking with his team about materials that could be used to reconstruct damaged or diseased human tissues. Surrounded by a rainbow of fresh fruits and vegetables at his University of Ottawa lab, Pelling and his team dismantle biological systems, mixing and matching parts, and put them back together in new and creative ways. It’s a little bit like a hacker who takes parts from a phone, a computer, and a car to build a robotic arm. Or like Mary Shelley’s Dr. Frankenstein, who built a monster out of cadavers. Except Pelling’s team has turned an apple into an ear and, most recently, a piece of asparagus into a scaffold for spinal-cord implants.

Pelling believes the future of regenerative medicine—which uses external therapies to help the body heal, the same way a cut heals by itself or a broken bone can mend without surgery—is in the supermarket produce aisle. He calls it “augmented biology,” and it’s a lot less expensive—by thousands and thousands of dollars—than implanting organs donated by humans, taken from animals, or manmade or bioengineered from animal tissue.

Decellularization as a process for implantation is fairly new, developed in the mid 1990s primarily by Doris Taylor. By washing out the genetic materials that make an apple an apple, for example, you are left with plant tissue, or a “cellulose mesh,” explains Pelling. “What we’re doing is washing out all the plant DNA, RNA proteins, all that sort of stuff that can cause immune responses, and rejection. And we’re just leaving behind the fiber in a plant—like literally the stuff that gets stuck in your teeth.”

When Pelling noticed the resemblance between a decellularized apple slice and an ear, he saw the true potential of his lab games. If he implanted the apple scaffolding into a living animal, he wondered, would it “be accepted” and vascularize? That is, would the test animal’s body glom onto the plant cells as if they weren’t a dangerous, foreign body and instead send out signals to create a blood supply, allowing the plant tissue to become a living part of the animal’s body? The answer was yes. “Suddenly, and by accident, we developed a material that has huge therapeutic and regenerative potential,” says Pelling. The apple ear does not enable hearing, and it remains in the animal-testing phase, but it may have applications for aesthetic implantation.

Soon after his breakthrough apple experiment, which was published in 2016 and earned him the moniker of “mad scientist,” Pelling shifted his focus to asparagus. The idea hit him when he was cooking. Looking at the end of a spear, he thought, “Hey, it looks like a spinal cord. What the hell? Maybe we can do something,” he says.

… Pelling implanted decellularized asparagus tissue under the skin of a lab rat. In just a few weeks, blood vessels flowed through the asparagus scaffolding; healthy cells from the animal moved into the tissue and turned the scaffold into living tissue. “The surprise here was that the body, instead of rejecting this material, it actually integrated into the material,” says Pelling. In the bioengineering world, getting that to happen has typically been a major challenge.

And then came the biggest surprise of all. Rats with severed spinal cords that had been implanted with the asparagus tissue were able to walk again, just a few weeks after implantation. …

While using asparagus tissue as scaffolding to repair spinal cords is not a “miracle cure,” says Pelling, it’s unlike the kinds of implants that have come before. Donated or manufactured organs are historically both more complicated and more expensive. Pelling’s implants were “done without stem cells or electrical stimulation or exoskeletons, or any of the usual approaches, but rather using very low cost, accessible materials that we honestly just bought at the grocery store,” he says, “and, we achieved the same level of recovery.” (At least in animal tests.) Plus, patients usually need lifelong immunosuppressants, which can have negative side effects, to prevent their bodies from rejecting an implant; that doesn’t seem necessary with Pelling’s plant-based implants. And, so far, the plant-based implants don’t seem to break down over time like traditional spinal-cord implants. “The inertness of plant tissue is exactly why it’s so biocompatible,” says Pelling.

In October 2020, the asparagus implant was designated as a “breakthrough device” by the FDA [US Food and Drug Administration]. The designation means human trials will be fast-tracked and likely begin in a few years. …

Shane’s August 4, 2021 article is fascinating and well illustrated with a number of embedded images. If you have the time and the inclination, do read it.

More of Pelling’s work can be found here at the Pelling Lab website. He was mentioned (by name only as a participant in the second Canadian DIY Biology Summit organized by the Public Health Agency of Canada [PHAC]) here in an April 21, 2020 posting (my 10 year review of science culture in Canada). You’ll find the Pelling mention under the DIY Biology subhead about 20% of the way down the screen.

Restoring words with a neuroprosthesis

There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark (spoken: “with up to 93% accuracy at 18 words per minute” vs. written: “with median 74% accuracy at 15 words per minute”).

A July 14, 2021 news item on ScienceDaily announces the latest work on a neuroprosthetic from the University of California at San Francisco (UCSF),

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 [2021] in the New England Journal of Medicine.

A July 14, 2021 UCSF news release (also on EurekAlert), which originated the news item, delves further into the topic,

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Translating Brain Signals into Speech

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

The First 50 Words

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Translating Attempted Speech into Text

To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
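The decoding step described above, identifying which of the 50 vocabulary words a pattern of brain activity corresponds to, can be illustrated with a minimal sketch. Everything here is an illustrative assumption: the feature dimensions, trial counts, and the nearest-prototype classifier stand in for the study's actual custom neural network models, which are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WORDS, N_CHANNELS, N_TRIALS = 50, 128, 20  # illustrative sizes, not the study's

# Synthetic "neural features": each word gets a characteristic pattern
# across recording channels, plus trial-to-trial noise.
prototypes = rng.normal(size=(N_WORDS, N_CHANNELS))
X = np.repeat(prototypes, N_TRIALS, axis=0) \
    + 0.5 * rng.normal(size=(N_WORDS * N_TRIALS, N_CHANNELS))
y = np.repeat(np.arange(N_WORDS), N_TRIALS)

def classify(features, templates):
    """Nearest-prototype classifier: assign each trial to the closest word template."""
    dists = np.linalg.norm(features[:, None, :] - templates[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (classify(X, prototypes) == y).mean()
print(f"word accuracy on synthetic data: {accuracy:.2f}")
```

On clean synthetic data like this, a template matcher scores near-perfectly; the hard part in the real study is that genuine neural features are noisy, drift over sessions, and must be decoded in real time.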

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen. “I am very good,” and “No, I am not thirsty.”

Chang and Moses found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
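The “auto-correct” idea, using a language model to clean up the classifier's raw word stream, can be sketched as a simple Viterbi decode over per-word probabilities. The tiny vocabulary, the bigram transition table, and the emission probabilities below are all invented for illustration; the study's actual statistical language model was more sophisticated.

```python
import numpy as np

vocab = ["i", "am", "good", "not", "thirsty"]
V = len(vocab)

# Illustrative bigram transition probabilities P(next word | previous word).
A = np.full((V, V), 0.05)
A[0, 1] = 0.8    # "i" -> "am"
A[1, 3] = 0.5    # "am" -> "not"
A[3, 4] = 0.7    # "not" -> "thirsty"
A /= A.sum(axis=1, keepdims=True)

# Noisy per-step word probabilities from a (hypothetical) neural classifier.
# At the second step, "good" slightly outscores "am".
emissions = np.array([
    [0.90, 0.02, 0.02, 0.03, 0.03],   # clearly "i"
    [0.05, 0.40, 0.45, 0.05, 0.05],   # ambiguous: "good" vs. "am"
    [0.05, 0.05, 0.10, 0.60, 0.20],   # "not"
    [0.05, 0.05, 0.05, 0.10, 0.75],   # "thirsty"
])

def viterbi(emissions, trans):
    """Most likely word sequence under classifier scores + language model."""
    T, V = emissions.shape
    logp = np.log(emissions[0] / V)           # uniform prior over the first word
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans) + np.log(emissions[t])[None, :]
        back[t] = scores.argmax(axis=0)       # best predecessor for each word
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [vocab[i] for i in reversed(path)]

print(viterbi(emissions, A))
```

On this toy sequence the decode returns `["i", "am", "not", "thirsty"]`: the language model overrides the classifier's slight preference for “good” at the second step, which is the essence of the auto-correct behavior described above.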

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

… all of UCSF. Funding sources [emphasis mine] included National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which was completed in early 2021.

UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

Here’s a link to and a citation for the paper,

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021

This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”

Your cyborg future (brain-computer interface) is closer than you think

Researchers at the Imperial College London (ICL) are warning that brain-computer interfaces (BCIs) may pose a number of quandaries. (At the end of this post, I have a little look into some of the BCI ethical issues previously explored on this blog.)

Here’s more from a July 20, 2021 American Institute of Physics (AIP) news release (also on EurekAlert),

Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.

Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.

The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.

Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients that need help controlling wheelchairs, for example, but also potentially changes the way the brain functions.

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

Aside from these potentially bleak mental and physiological side effects, intellectual property concerns are also an issue and may allow private companies that develop eBCI technologies to own users’ neural data.

“This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”

As the availability of these platforms increases past medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs can be used for cognitive enhancement and cause extreme imbalances in academic or professional successes and educational advancements.

“This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”

She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.

“Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” [emphasis mine] Green said.

Caption: A schematic demonstrates the steps required for eBCI operation. EEG sensors acquire electrical signals from the brain, which are processed and outputted to control external devices. Credit: Portillo-Lara et al.
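The acquire-process-output loop in that schematic can be sketched in a few lines. Everything here is an illustrative assumption rather than a description of any real eBCI: the sampling rate, the use of alpha-band (8-12 Hz) power as the control signal, the uncalibrated threshold, and the binary “command.”

```python
import numpy as np

FS = 250  # Hz; a typical consumer-EEG sampling rate (illustrative)

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def decode_command(eeg, fs=FS, threshold=10_000.0):
    """Map alpha-band (8-12 Hz) power to a binary command.
    Both the band and the threshold are illustrative choices."""
    return "device_on" if band_power(eeg, fs, 8, 12) > threshold else "idle"

# Two seconds of synthetic EEG: background noise alone, versus noise plus a
# strong 10 Hz alpha rhythm (as produced, for example, by closing the eyes).
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(1)
noise = rng.normal(size=t.size)
alpha = 5.0 * np.sin(2 * np.pi * 10 * t)

print(decode_command(noise))          # background only -> "idle"
print(decode_command(noise + alpha))  # alpha present  -> "device_on"
```

Real eBCIs add artifact rejection, per-user calibration, and far richer feature extraction, but the skeleton (sense electrical activity, extract a feature, map it to a device command) is the same.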

Here’s a link to and a citation for the paper,

Mind the gap: State-of-the-art technologies and applications for EEG-based brain-computer interfaces by Roberto Portillo-Lara, Bogachan Tahirbegi, Christopher A. R. Chapman, Josef A. Goding, and Rylie A. Green. APL Bioengineering, Volume 5, Issue 3, 031507 (2021). Published Online: 20 July 2021

This paper appears to be open access.

Back on September 17, 2020, I published a post about a brain implant that included some material I’d dug up on ethics and brain-computer interfaces; I was most struck by one of the stories. Here’s the excerpt (which can be found under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead): … From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

This is from another part of the September 17, 2020 posting,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

It wasn’t my first thought when the topic of ethics and BCIs came up but as Gilbert’s research highlights: what happens if the company that made your implant and monitors it goes bankrupt?

If you have the time, do take a look at the entire entry under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead of the September 17, 2020 posting or read the July 24, 2019 article by Liam Drew.

Should you have a problem finding the July 20, 2021 American Institute of Physics news release at either of the two links I have previously supplied, there’s a July 20, 2021 copy at

University of Alberta researchers 3D print nose cartilage

A May 4, 2021 news item on ScienceDaily announced work that may result in the restoration of nasal cartilage for skin cancer patients,

A team of University of Alberta researchers has discovered a way to use 3-D bioprinting technology to create custom-shaped cartilage for use in surgical procedures. The work aims to make it easier for surgeons to safely restore the features of skin cancer patients living with nasal cartilage defects after surgery.

The researchers used a specially designed hydrogel — a material similar to Jell-O — that could be mixed with cells harvested from a patient and then printed in a specific shape captured through 3-D imaging. Over a matter of weeks, the material is cultured in a lab to become functional cartilage.

“It takes a lifetime to make cartilage in an individual, while this method takes about four weeks. So you still expect that there will be some degree of maturity that it has to go through, especially when implanted in the body. But functionally it’s able to do the things that cartilage does,” said Adetola Adesida, a professor of surgery in the Faculty of Medicine & Dentistry.

“It has to have certain mechanical properties and it has to have strength. This meets those requirements with a material that (at the outset) is 92 per cent water,” added Yaman Boluk, a professor in the Faculty of Engineering.

Who would have thought that nose cartilage would look like a worm?

Caption: 3-D printed cartilage is shaped into a curve suitable for use in surgery to rebuild a nose. The technology could eventually replace the traditional method of taking cartilage from the patient’s rib, a procedure that comes with complications. Credit: University of Alberta

A May 4, 2021 University of Alberta news release (also on EurekAlert) by Ross Neitz, which originated the news item, details why this research is important,

Adesida, Boluk and graduate student Xiaoyi Lan led the project to create the 3-D printed cartilage in hopes of providing a better solution for a clinical problem facing many patients with skin cancer.

Each year upwards of three million people in North America are diagnosed with non-melanoma skin cancer. Of those, 40 per cent will have lesions on their noses, with many requiring surgery to remove them. As part of the procedure, many patients may have cartilage removed, leaving facial disfiguration.

Traditionally, surgeons would take cartilage from one of the patient’s ribs and reshape it to fit the needed size and shape for reconstructive surgery. But the procedure comes with complications.

“When the surgeons restructure the nose, it is straight. But when it adapts to its new environment, it goes through a period of remodelling where it warps, almost like the curvature of the rib,” said Adesida. “Visually on the face, that’s a problem.

“The other issue is that you’re opening the rib compartment, which protects the lungs, just to restructure the nose. It’s a very vital anatomical location. The patient could have a collapsed lung and has a much higher risk of dying,” he added.

The researchers say their work is an example of both precision medicine and regenerative medicine. Lab-grown cartilage printed specifically for the patient can remove the risk of lung collapse, infection in the lungs and severe scarring at the site of a patient’s ribs.

“This is to the benefit of the patient. They can go on the operating table, have a small biopsy taken from their nose in about 30 minutes, and from there we can build different shapes of cartilage specifically for them,” said Adesida. “We can even bank the cells and use them later to build everything needed for the surgery. This is what this technology allows you to do.”

The team is continuing its research and is now testing whether the lab-grown cartilage retains its properties after transplantation in animal models. The team hopes to move the work to a clinical trial within the next two to three years.

Here’s a link to and a citation for the paper,

Bioprinting of human nasoseptal chondrocytes-laden collagen hydrogel for cartilage tissue engineering by Xiaoyi Lan, Yan Liang, Esra J. N. Erkut, Melanie Kunze, Aillette Mulet-Sierra, Tianxing Gong, Martin Osswald, Khalid Ansari, Hadi Seikaly, Yaman Boluk, Adetola B. Adesida. The FASEB Journal, Volume 35, Issue 3, March 2021, e21191. First published online: 17 February 2021

This paper is open access.

Synaptic transistor better than memristor when it comes to brainlike learning for computers

An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (Evanston, Illinois, US) and the University of Hong Kong,

Researchers have developed a brain-like computing device that is capable of learning by association.

Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.

The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.

With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including their energy-sapping hardware and limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.

“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”

Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.

Caption: By connecting single synaptic transistors into a neuromorphic circuit, researchers demonstrated that their device could simulate associative learning. Credit: Northwestern University

An April 30, 2021 Northwestern University news release (also on EurekAlert), which originated the news item, includes a good explanation about brainlike computing and information about how synaptic transistors work along with some suggestions for future applications,

Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.

“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”

Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and less biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.

“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”

How the synaptic transistor works

To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.

The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.

Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.

After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, or “conditioned response.”
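The conditioning sequence described above (pressure alone triggers the output, paired light-plus-pressure training strengthens the association, then light alone triggers the output) maps onto a textbook Hebbian learning rule. The weights, threshold, and learning rate below are invented for illustration and don’t model the actual device physics of the ion-trapping transistor.

```python
# Toy Hebbian model of the circuit's associative learning.
# Inputs: pressure (the unconditioned stimulus) and light (initially neutral).
w_pressure, w_light = 1.0, 0.0          # illustrative synaptic weights
THRESHOLD, LEARNING_RATE = 0.5, 0.15    # illustrative constants

def output(pressure, light):
    """The circuit fires when the weighted input exceeds the threshold."""
    return w_pressure * pressure + w_light * light > THRESHOLD

def train_pairing():
    """One training cycle: a light pulse immediately followed by pressure.
    Hebbian rule: the light weight grows when light co-occurs with firing."""
    global w_light
    if output(pressure=1, light=1):
        w_light += LEARNING_RATE

print(output(pressure=0, light=1))   # before training: light alone -> False

for cycle in range(5):               # five training cycles, as in the experiment
    train_pairing()

print(output(pressure=0, light=1))   # after training: light alone -> True
```

One cycle nudges the light weight up but leaves it below threshold (an “initial connection”); after five cycles the weight crosses threshold and light alone fires the circuit, mirroring the result reported above.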

Future applications

Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].

“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”

Here’s a link to and a citation for the paper,

Mimicking associative learning using an ion-trapping non-volatile synaptic organic electrochemical transistor by Xudong Ji, Bryan D. Paulsen, Gary K. K. Chik, Ruiheng Wu, Yuyang Yin, Paddy K. L. Chan & Jonathan Rivnay. Nature Communications, volume 12, Article number: 2480 (2021). Published: 30 April 2021

This paper is open access.

“… devices that directly interface with living tissue and even the brain,” would I be the only one thinking about cyborgs?