I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on the sense of touch in the virtual realm, originally announced in October 2022,
A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.
Here’s what you’ll need to wear for this virtual tactile experience,
“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch that can record and playback tactile sensations over space and time.”
In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.
Limitations in existing techniques
Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration on the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations in the skin at the location of the electrode by passing a local electric current through the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high-voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, to stimulate the receptors and nerves, which poses a safety concern. In addition, the tactile rendering resolution of these devices still needs improvement.
The latest electrotactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness, in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage to below 30 V, ensuring the tactile rendering is safe and comfortable.
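The press release doesn't give the team's actual waveform parameters, but the safety idea — an alternating stimulation signal whose amplitude is capped below 30 V — can be sketched in a few lines of Python. Frequency, duration, and sample rate here are made-up illustrative values, not the paper's:

```python
import math

def ac_burst(freq_hz, amp_v, duration_s, sample_rate=100_000):
    """Sample a sinusoidal stimulation burst; amplitude is clipped to 30 V
    per the safety limit described above. (Waveform shape and parameters
    are illustrative only, not the paper's exact stimulation strategy.)"""
    amp = min(amp_v, 30.0)                 # enforce the sub-30 V ceiling
    n = int(duration_s * sample_rate)      # number of samples in the burst
    return [amp * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

wave = ac_burst(freq_hz=5_000, amp_v=45.0, duration_s=0.001)
print(max(wave) <= 30.0)   # True -- the requested 45 V was clipped
```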
They also proposed a novel super-resolution strategy that can render tactile sensation at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), so the user can feel more realistic tactile perception.
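The release doesn't describe the super-resolution algorithm itself. One common way to render a "virtual" point between two physical electrodes is to split the stimulation amplitude between the nearest electrodes in inverse proportion to distance; here's a minimal one-dimensional sketch under that assumption (the paper's actual method may well differ):

```python
def virtual_point_weights(target, electrodes):
    """Split stimulation amplitude between the two electrodes nearest the
    target location, inversely proportional to distance -- a common
    interpolation scheme, assumed here for illustration only."""
    # distance from the target to each physical electrode position
    dists = sorted((abs(target - pos), i) for i, pos in enumerate(electrodes))
    (d1, i1), (d2, i2) = dists[0], dists[1]
    if d1 == 0:                       # target sits exactly on an electrode
        return {i1: 1.0}
    total = d1 + d2
    # the closer electrode gets the larger share of the current
    return {i1: d2 / total, i2: d1 / total}

electrodes = [0.0, 1.0, 2.0, 3.0, 4.0]   # a 1D row of 5 physical electrodes
print(virtual_point_weights(2.5, electrodes))   # {2: 0.5, 3: 0.5}
```

A midpoint between electrodes 2 and 3 gets equal shares, so the perceived stimulus falls between them — which is how a fixed electrode grid can render more points than it has electrodes.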
Tactile stimuli with high spatial resolution
“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm²), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.
The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.
The proposed strategy breaks down letters and numerical digits into individual strokes, ordered in the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise each letter presented by feeling the direction and the sequence of the strokes with the fingertip sensor. “This would be particularly useful for people who lose their eyesight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.
Enabling touch in the metaverse
Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.
The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer just 1 mm in radius and 0.44 mm thick based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.
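The sensor-to-stimulator relay described above amounts to mapping each pressure reading on the glove's exterior to the matching stimulator on the fingertip inside. A schematic sketch — the real system is analog and far faster, and the threshold here is a made-up value:

```python
def relay(pressure_grid, threshold=0.0):
    """Map each exterior pressure-sensor reading to its matching fingertip
    stimulator: fire (1) where pressure exceeds the threshold, else stay
    off (0). A schematic one-to-one mapping for illustration only."""
    return [[1 if p > threshold else 0 for p in row] for row in pressure_grid]

# A 2x3 patch of pressure readings; the washer presses on the middle column.
grid = [[0.0, 0.2, 0.0],
        [0.0, 0.9, 0.1]]
print(relay(grid, threshold=0.15))   # [[0, 1, 0], [0, 1, 0]]
```

Even this toy version conveys why the washer experiment works: the spatial pattern of pressure outside the glove is reproduced, point for point, on the skin inside it.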
“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.
Here’s a link to and a citation for the paper,
Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances, Vol. 8, Issue 36 (9 September 2022). DOI: 10.1126/sciadv.abp8738
Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)
A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .
It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.
Kempner Institute for the Study of Natural and Artificial Intelligence
On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.
The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.
CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”
CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”
CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”
CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.
President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”
Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”
Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”
About the Chan Zuckerberg Initiative
The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.
Principled theories, eh. I don’t see a single mention of ethicists or anyone in the social sciences or the humanities or the arts. How are scientists and engineers who have no training in, education in, or even an introduction to ethics, social impacts, or psychology going to manage this?
Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes but it took far too long to recognize the damage let alone attempt to address it.
If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,
The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.
“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.
Bernardo Sabatini and Sham Kakade [Institute co-directors]
GAZETTE: Tell me about the new institute. What is its main reason for being?
SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.
We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.
Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”
KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.
How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.
The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.
The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.
Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.
A highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.
Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.
Other topics that the centre will have a focus on will include applications of robotics in extreme environments.
For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.
Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world.
Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.
“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well the humanities and even theology. [emphases mine]
“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”
Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.
The University of Manchester’s Centre for Robotics and AI will aim to:
design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms;
develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems;
research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and
research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society.
In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at the meeting) of ‘posthumanism’. The difference is between actual engagement at the symposium (the centre) and mere mention in a news release (the institute).
If you’ve been longing for an opportunity to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),
Sept. 22-23  Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]
Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.
But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.
A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23  will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.
The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.
For those who might want a peek at the agenda before downloading it, I have listed the titles of the sessions (from my downloaded agenda; note: I’ve reformatted the information, and no breaks, discussion periods, or Q&As are included),
Sept. 22, 2022 Draft Agenda
12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology
2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement
Sept. 23, 2022 Draft Agenda
12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect
2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory
4:00 pm ET Concluding Thoughts from Workshop Planning Committee
Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.
I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.
Scientists from Skoltech, South Ural State University, and elsewhere have developed a device for recording brain activity that is more compact and affordable than the solutions currently on the market. With its high signal quality and customizable configuration, the device could help people with restricted mobility regain control of their limbs or provide advance warnings of an impending seizure to patients with epilepsy. The article presenting the device and testing results came out in Experimental Brain Research.
Researchers and medics, as well as engineers working on futuristic gadgets, need tools that measure brain activity. Among their scientific applications are research on sleep, decision-making, memory, and attention. In a clinical setting, these tools allow doctors to assess the extent of damage to an injured brain and monitor coma patients. Further down cyberpunk lane, brain signals can be translated into commands and sent to an external or implanted device, either to make up for lost functions in the body or for plain fun. The commands could range from moving the arm of an exoskeleton worn by a paralyzed person to turning on the TV.
Invented about a century ago, electroencephalographs are devices that read the electrical activity of the brain via small electrodes placed on the scalp. The recorded signals are then used for research, diagnostics, or gadgetry. The problem with the existing systems used in labs and hospitals is that they are bulky and/or expensive. And even then, the number of electrodes is limited, resulting in moderate signal quality. Amateur devices tend to be more affordable, but with even poorer sensitivity.
To fill that gap, researchers from South Ural State University, North Carolina State University, and BrainFlow — led by electronic research engineer Ildar Rakhmatulin and Skoltech neuroscientist Professor Mikhail Lebedev — created a device you can build for just $350, compared with the $1,000 or more you would need for currently available analogs. Besides being less expensive, the new electroencephalograph has 24 or more electrodes. Importantly, it also provides research-grade signal quality. At half a centimeter in diameter (about 1/5 inch), the processing unit is compact enough to be worn throughout the day or during the night. The entire device weighs about 150 grams (about 5 ounces).
The researchers have made the instructions for building the device and the accompanying documentation and software openly available on GitHub. The team hopes this will attract more enthusiasts involved in brain-computer interface development, giving an impetus to support and rehabilitation system development, cognitive research, and pushing the geek community to come up with new futuristic gizmos.
“The more convenient and affordable such devices become, the more chances there are this would drive the home lab movement, with some of the research on brain-computer interfaces migrating from large science centers to small-scale amateur projects,” Lebedev said.
“Or we could see people with limited mobility using do-it-yourself interfaces to train, say, a smartphone-based system that would electrically stimulate a biceps to flex the arm at the elbow,” the researcher went on. “That works on someone who has lost control over their arm due to spinal cord trauma or a stroke, where the commands are still generated in the brain — they just don’t reach the limb, and that’s where our little brain-computer interfacing comes in.”
According to the team, such interfaces could also help patients with epilepsy by detecting tell-tale brain activity patterns that indicate when a seizure is imminent, so they can prepare by lying down comfortably in a safe space or attempting to suppress the seizure via electrical stimulation.
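To give a feel for what “detecting tell-tale brain activity patterns” might mean in code — real detectors use spectral features and machine learning, not a bare amplitude threshold — here is a drastically simplified sketch that flags windows of unusually high signal amplitude in an EEG trace:

```python
import math

def rms_windows(signal, window):
    """Moving RMS of a signal, in non-overlapping windows."""
    out = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

def flag_abnormal(signal, window, threshold):
    """Return indices of windows whose RMS exceeds the threshold --
    a toy stand-in for the pattern detectors such systems would use."""
    return [i for i, v in enumerate(rms_windows(signal, window)) if v > threshold]

quiet = [0.1, -0.1] * 8      # low-amplitude baseline activity
burst = [2.0, -2.0] * 8      # sudden high-amplitude activity
print(flag_abnormal(quiet + burst, window=8, threshold=1.0))   # [2, 3]
```

In a real warning system, a flag like this would trigger an alert early enough for the patient to lie down somewhere safe.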
Here’s a link to and a citation for the paper,
Low-cost brain computer interface for everyday use by Ildar Rakhmatulin, Andrey Parfenov, Zachary Traylor, Chang S. Nam & Mikhail Lebedev. Experimental Brain Research, Volume 239, pages 3573–3583 (December 2021). Published online: 29 September 2021. DOI: https://doi.org/10.1007/s00221-021-06231-4
This paper is behind a paywall.
You can find Brainflow here and this description on its homepage: “BrainFlow is a library intended to obtain, parse and analyze EEG, EMG, ECG and other kinds of data from biosensors.”
This story got me to thinking about what happens when any kind of implant company (pacemaker, deep brain stimulator, etc.) goes bankrupt or is acquired by another company with a different business model.
As I worked on this piece, more issues were raised and the scope expanded to include prosthetics along with implants while the focus narrowed to neuro as in, neural implants and neuroprosthetics. At the same time, I found salient examples for this posting in other medical advances such as gene editing.
In sum, all references to implants and prosthetics are to neural devices and some issues are illustrated with salient examples from other medical advances (specifically, gene editing).
Medical implants are devices or tissues that are placed inside or on the surface of the body. Many implants are prosthetics, intended to replace missing body parts. Other implants deliver medication, monitor body functions, or provide support to organs and tissues.
As for what constitutes a neural implant/neuroprosthetic, there’s this from Emily Waltz’s January 20, 2020 article (How Do Neural Implants Work? Neural implants are used for deep brain stimulation, vagus nerve stimulation, and mind-controlled prostheses) for the Institute of Electrical and Electronics Engineers (IEEE) Spectrum magazine,
A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.
Now, let’s start with the recent near bankruptcy of a retinal implant company.
Barbara Campbell was walking through a New York City subway station during rush hour when her world abruptly went dark. For four years, Campbell had been using a high-tech implant in her left eye that gave her a crude kind of bionic vision, partially compensating for the genetic disease that had rendered her completely blind in her 30s. “I remember exactly where I was: I was switching from the 6 train to the F train,” Campbell tells IEEE Spectrum. “I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound.”
It wasn’t her phone battery running out. It was her Argus II retinal implant system powering down. The patches of light and dark that she’d been able to see with the implant’s help vanished.
Terry Byland is the only person to have received this kind of implant in both eyes. He got the first-generation Argus I implant, made by the company Second Sight Medical Products, in his right eye in 2004, and the subsequent Argus II implant in his left 11 years later. He helped the company test the technology, spoke to the press movingly about his experiences, and even met Stevie Wonder at a conference. “[I] went from being just a person that was doing the testing to being a spokesman,” he remembers.
Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn’t know how long that will be the case. “As long as nothing goes wrong, I’m fine,” he says. “But if something does go wrong with it, well, I’m screwed. Because there’s no way of getting it fixed.”
Ross Doerr, another Second Sight patient, doesn’t mince words: “It is fantastic technology and a lousy company,” he says. He received an implant in one eye in 2019 and remembers seeing the shining lights of Christmas trees that holiday season. He was thrilled to learn in early 2020 that he was eligible for software upgrades that could further improve his vision. Yet in the early months of the COVID-19 pandemic, he heard troubling rumors about the company and called his Second Sight vision-rehab therapist. “She said, ‘Well, funny you should call. We all just got laid off,’ ” he remembers. “She said, ‘By the way, you’re not getting your upgrades.’ ”
These three patients, and more than 350 other blind people around the world with Second Sight’s implants in their eyes, find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.
After Second Sight discontinued its retinal implant in 2019 and nearly went out of business in 2020, a public offering in June 2021 raised US $57.5 million at $5 per share. The company promised to focus on its ongoing clinical trial of a brain implant, called Orion, that also provides artificial vision. But its stock price plunged to around $1.50, and in February 2022, just before this article was published, the company announced a proposed merger with an early-stage biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company, which will focus on developing NPM’s novel implant for drug delivery. The company’s current leadership declined to be interviewed for this article but did provide an emailed statement prior to the merger announcement. It said, in part: “We are a recognized global leader in neuromodulation devices for blindness and are committed to developing new technologies to treat the broadest population of sight-impaired individuals.”
It’s unclear what Second Sight’s proposed merger means for Argus patients. The day after the merger was announced, Adam Mendelsohn, CEO of Nano Precision Medical, told Spectrum that he doesn’t yet know what contractual obligations the combined company will have to Argus and Orion patients. But, he says, NPM will try to do what’s “right from an ethical perspective.” The past, he added in an email, is “simply not relevant to the new future.”
Second Sight may have given up on its retinal implant, but other companies still see a need—and a market—for bionic vision without brain surgery. Paris-based Pixium Vision is conducting European and U.S. feasibility trials to see if its Prima system can help patients with age-related macular degeneration, a much more common condition than retinitis pigmentosa.
Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, says the Prima implant is smaller, simpler, and cheaper than the Argus II. But he argues that Prima’s superior image resolution has the potential to make Pixium Vision a success. “If you provide excellent vision, there will be lots of patients,” he tells Spectrum. “If you provide crappy vision, there will be very few.”
Some clinicians involved in the Argus II work are trying to salvage what they can from the technology. Gislin Dagnelie, an associate professor of ophthalmology at Johns Hopkins University School of Medicine, has set up a network of clinicians who are still working with Argus II patients. The researchers are experimenting with a thermal camera to help users see faces, a stereo camera to filter out the background, and AI-powered object recognition. These upgrades are unlikely to result in commercial hardware today but could help future vision prostheses.
Failure is an inevitable part of innovation. The Argus II was an innovative technology, and progress made by Second Sight may pave the way for other companies that are developing bionic vision systems. But for people considering such an implant in the future, the cautionary tale of Argus patients left in the lurch may make a tough decision even tougher. Should they take a chance on a novel technology? If they do get an implant and find that it helps them navigate the world, should they allow themselves to depend upon it?
Abandoning the Argus II technology—and the people who use it—might have made short-term financial sense for Second Sight, but it’s a decision that could come back to bite the merged company if it does decide to commercialize a brain implant, believes Doerr.
For anyone curious about retinal implant technology (specifically the Argus II), I have a description in a June 30, 2015 posting.
Speculations and hopes for neuroprosthetics
The field of neuroprosthetics is very active. Dr Arthur Saniotis and Prof Maciej Henneberg have written an article where they speculate about the possibilities of a neuroprosthetic that may one day merge with neurons in a February 21, 2022 Nanowerk Spotlight article,
For over a generation several types of medical neuroprosthetics have been developed, which have improved the lives of thousands of individuals. For instance, cochlear implants have restored functional hearing in individuals with severe hearing impairment.
Further advances in motor neuroprosthetics are attempting to restore motor functions in tetraplegic, limb loss and brain stem stroke paralysis subjects.
Currently, scientists are working on various kinds of brain/machine interfaces [BMI] in order to restore movement and partial sensory function. One such device is the ‘Ipsihand’ that enables movement of a paralyzed hand. The device works by detecting the recipient’s intention in the form of electrical signals, thereby triggering hand movement.
Another recent development is the 12-month BMI gait neurorehabilitation program that uses a visual-tactile feedback system in combination with a physical exoskeleton and EEG-operated AI actuators while walking. This program has been tried on eight patients, with reported improvements in lower limb movement and somatic sensation.
Surgically placed electrode implants have also reduced tremor symptoms in individuals with Parkinson’s disease.
Although neuroprosthetics have provided various benefits, they do have their problems. Firstly, electrode implants to the brain are prone to degradation, necessitating new implants after a few years. Secondly, as in any kind of surgery, implanted electrodes can cause post-operative infection and glial scarring. Furthermore, one study showed that the neurobiological efficacy of an implant depends on the speed of its insertion.
But what if humans designed a neuroprosthetic, which could bypass the medical glitches of invasive neuroprosthetics? However, instead of connecting devices to neural networks, this neuroprosthetic would directly merge with neurons – a novel step. Such a neuroprosthetic could radically optimize treatments for neurodegenerative disorders and brain injuries, and possibly cognitive enhancement [emphasis mine].
An interesting feature of their nanobot neuroprosthetic is that it has been inspired from nature by way of endomyccorhizae – a type of plant/fungus symbiosis, which is over four hundred million years old. During endomyccorhizae, fungi use numerous threadlike projections called mycelium that penetrate plant roots, forming colossal underground networks with nearby root systems. During this process fungi take up vital nutrients while protecting plant roots from infections – a win-win relationship. Consequently, the nano-neuroprosthetic has been named ‘endomyccorhizae ligand interface’, or ‘ELI’ for short.
The Spotlight article goes on to describe how these nanobots might function. As for the possibility of cognitive enhancement, I wonder if that might come to be described as a form of ‘artificial intelligence’.
(Dr Arthur Saniotis and Prof Maciej Henneberg are both from the Department of Anthropology, Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Polish Academy of Sciences; and Biological Anthropology and Comparative Anatomy Research Unit, Adelaide Medical School, University of Adelaide. Abdul-Rahman Sawalma who’s listed as an author on the 2018 paper is from the Palestinian Neuroscience Initiative, Al-Quds University, Beit Hanina, Palestine.)
Saniotis and Henneberg’s Spotlight article presents an optimistic view of neuroprosthetics. It seems telling that they cite cochlear implants as a success story when it is viewed by many as ethically fraught (see the Cochlear implant Wikipedia entry; scroll down to ‘Criticism and controversy’).
Technologist: What are the potential consequences of accepting the “augmented human” in society?
Gregor Wolbring: There are many that we might not even envision now. But let me focus on failure and obsolescence [emphasis mine], two issues that are rarely discussed. What happens when the mechanism fails in the middle of an action? Failure has hazardous consequences, but obsolescence has psychological ones. …. The constant surgical intervention needed to update the hardware may not be feasible. A person might feel obsolete if she cohabits with others using a newer version.
T. Are researchers working on prosthetics sometimes disconnected from reality?
G. W. Students engaged in the development of prosthetics have to learn how to think in societal terms and develop a broader perspective. Our education system provides them with a fascination for clever solutions to technological challenges but not with tools aiming at understanding the consequences, such as whether their product might increase or decrease social justice.
Wolbring is a professor at the University of Calgary’s Cumming School of Medicine (profile page) who writes on social issues to do with human enhancement/ augmentation. As well,
Some of his areas of engagement are: ability studies including governance of ability expectations, disability studies, governance of emerging and existing sciences and technologies (e.g. nanoscale science and technology, molecular manufacturing, aging, longevity and immortality, cognitive sciences, neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors), impact of science and technology on marginalized populations, especially people with disabilities, the governance of bodily enhancement, sustainability issues, EcoHealth, resilience, ethics issues, health policy issues, human rights and sport.
I’d classify Second Sight as a tech startup, and startups have a high rate of failure, which may not have been clear to the patients who received the implants. Clinical trials can present problems too, as this excerpt from my September 17, 2020 posting notes,
“In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.
This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.
… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.”
“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.
“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]
Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence. [emphasis mine]
For a lot of people these devices are or could be life-changing. At the same time, there are a number of different issues related to implants/prosthetics; the following is not an exhaustive list. As Wolbring notes, issues that we can’t begin to imagine now are likely to emerge as these medical advances become more ubiquitous.
Assistive technologies are almost always portrayed as helpful. For example, a cochlear implant gives people without hearing the ability to hear. The assumption is that this is always a good thing—unless you’re a deaf person who wants to define the problem a little differently. Who gets to decide what is good and ‘normal’ and what is desirable?
While the cochlear implant is the most extreme example I can think of, there are variations of these questions throughout the ‘disability’ communities.
Also, as Wolbring notes in his interview with the Technologist.eu, the education system tends to favour technological solutions which don’t take social issues into account. Wolbring cites social justice issues when he mentions failure and obsolescence.
Technical failures and obsolescence
The story, excerpted earlier in this posting, opened with a striking example of a technical failure at an awkward moment: a blind woman depending on her retinal implant loses her artificial vision as she maneuvers through a subway station in New York City.
Aside from being an awful way to find out the company supplying and supporting your implant is in serious financial trouble and can’t offer assistance or repair, the failure offers a preview of what could happen as implants and prosthetics become more commonly used.
Keeping up/fomo (fear of missing out)/obsolescence
It used to be called ‘keeping up with the Joneses’: the practice of comparing yourself and your worldly goods to someone else’s and then trying to equal or surpass what they have or do. Usually, people want to have more and better than the mythical Joneses.
These days, the phenomenon (which has been expanded to include social networking) is better known as ‘fomo’ or fear of missing out (see the Fear of missing out Wikipedia entry).
Whatever you want to call it, humanity’s competitive nature can be seen where technology is concerned. When I worked in technology companies, I noticed that hardware and software were sometimes purchased for features that were effectively useless to us. But, not upgrading to a newer version was unthinkable.
Call it fomo or ‘keeping up with the Joneses’, it’s a powerful force and when people (and even companies) miss out or can’t keep up, it can lead to a sense of inferiority in the same way that having an obsolete implant or prosthetic could.
Could there be a neural implant/neuroprosthetic divide? There is already a digital divide (from its Wikipedia entry),
The digital divide is a gap between those who have access to new technology and those who do not … people without access to the Internet and other ICTs [information and communication technologies] are at a socio-economic disadvantage because they are unable or less able to find and apply for jobs, shop and sell online, participate democratically, or research and learn.
After reading Wolbring’s comments, it’s not hard to imagine a neural implant/neuroprosthetic divide with its attendant psychological and social consequences.
What kind of human am I?
There are other issues as noted in my September 17, 2020 posting. I’ve already mentioned ‘patient 6’, the woman who developed a symbiotic relationship with her brain/computer interface. This is how the relationship ended,
… He [Frederic Gilbert, ethicist] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.
… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.
“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”
More recently, Hugh Herr, an Associate Professor at the Massachusetts Institute of Technology (MIT), leader of the Biomechatronics research group at MIT’s Media Lab, a double amputee, and prosthetic enthusiast, starred in the recent (February 23, 2022) broadcast of ‘Augmented‘ on the Public Broadcasting Service (PBS) science programme, Nova.
I found ‘Augmented’ a little off-putting, as it gave every indication of being an advertisement for Herr’s work in the form of a hero’s journey. I was not able to watch more than 10 minutes. This preview gives you a pretty good idea of what it was like, although the part in ‘Augmented’ where he says he’d like to be a cyborg hasn’t been included,
At a guess, there were a few talking heads (taking up perhaps 10%–20% of the running time) who provided some cautionary words to counterbalance the enthusiasm in the rest of the programme. It’s a standard approach designed to give the impression that both sides of a question are being recognized. The cautionary material is usually inserted past the halfway mark, while leaving several minutes at the end for returning to the more optimistic material.
Paul Hochman’s Fast Company article, Bionic Legs, iLimbs, and Other Super-Human Prostheses [ETA March 23, 2022: an updated version of the article is now on Genius.com], delves further into a world where people may be willing to trade a healthy limb for a prosthetic. From the article,
There are many advantages to having your leg amputated.
Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”
Herr is not the only one who favours prosthetics (also from the Hochman article),
This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.
“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”
But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human.[emphasis mine] It’s a very powerful thing.”
My September 17, 2020 posting touches on more ethical and social issues including some of those surrounding consumer neurotechnologies or brain-computer interfaces (BCI). Unfortunately, I don’t have space for these issues here.
In the IEEE Spectrum article, a tech startup company, Second Sight, ran into financial trouble and was acquired by a company that has no plans to develop Second Sight’s core technology. The people implanted with the Argus II technology have been stranded, as were ‘patient 6’ and others participating in the clinical trial described in the July 24, 2019 article by Liam Drew for Nature Outlook: The brain, mentioned earlier in this posting.
I don’t know anything about the business bankruptcy mentioned in the Drew article but one of the business problems described in the IEEE Spectrum article suggests that Second Sight was founded before answering a basic question, “What is the market size for this product?”
On 18 July 2019, Second Sight sent Argus patients a letter saying it would be phasing out the retinal implant technology to clear the way for the development of its next-generation brain implant for blindness, Orion, which had begun a clinical trial with six patients the previous year. …
“The leadership at the time didn’t believe they could make [the Argus retinal implant] part of the business profitable,” Greenberg [Robert Greenberg, Second Sight co-founder] says. “I understood the decision, because I think the size of the market turned out to be smaller than we had thought.”
The question of whether a medical procedure or medicine can be profitable (or should the question be sufficiently profitable?) was referenced in my April 26, 2019 posting in the context of gene editing and personalized medicine.
Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs, in his May 23, 2018 piece for statnews.com (Note: A link has been removed),
Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”
The ‘Glybera’ story in my July 4, 2019 posting (scroll down about 40% of the way) highlights the issue with “recurring revenue versus chronic therapies,”
It cost $1M for a single treatment and that single treatment is good for at least 10 years.
Pharmaceutical companies make their money from repeated use of their medicaments, and Glybera required only one treatment, so the company priced it according to how much it would have earned from repeated use: $100,000 per year over a 10-year period. The company was not able to persuade governments and/or individuals to pay the cost.
In the end, 31 people got the treatment, most of them received it for free through clinical trials.
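The Glybera pricing logic is simple enough to make explicit; here is a minimal sketch using only the figures quoted above (the per-year price a hypothetical chronic therapy would charge, multiplied by the years of benefit from the single treatment):

```python
# Glybera pricing: a one-time gene therapy priced to match the recurring
# revenue a chronic therapy would generate (figures as quoted in the text).
annual_price = 100_000   # dollars per year for an equivalent chronic therapy
years_of_benefit = 10    # the single treatment is good for at least 10 years

one_time_price = annual_price * years_of_benefit
print(f"One-time price: ${one_time_price:,}")
```

The arithmetic also shows why the model failed: payers were asked to front ten years of ‘revenue’ in a single payment.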
For rich people only?
Megan Devlin’s March 8, 2022 article for the Daily Hive announces a major investment in medical research (Note: A link has been removed),
Vancouver [Canada] billionaire Chip Wilson revealed Tuesday [March 8, 2022] that he has a rare genetic condition that causes his muscles to waste away, and announced he’s spending $100 million on research to find a cure.
His condition is called facio-scapulo-humeral muscular dystrophy, or FSHD for short. It progresses rapidly in some people and more slowly in others, but is characterized by progressive muscle weakness starting in the face, the neck and shoulders, and later the lower body.
“I’m out for survival of my own life,” Wilson said.
“I also have the resources to do something about this which affects so many people in the world.”
Wilson hopes the $100 million will produce a cure or muscle-regenerating treatment by 2027.
“This could be one of the biggest discoveries of all time, for humankind,” Wilson said. “Most people lose muscle, they fall, and they die. If we can keep muscle as we age this can be a longevity drug like we’ve never seen before.”
According to rarediseases.org, FSHD affects between four and 10 people out of every 100,000 [emphasis mine]. Right now, therapies are limited to exercise and pain management. There is no way to stall or reverse the disease’s course.
Wilson is best known for founding athleisure clothing company Lululemon. He also owns the most expensive home in British Columbia, a $73 million mansion in Vancouver’s Kitsilano neighbourhood.
Let’s see what the numbers add up to,
4 – 10 people out of 100,000
40 – 100 people out of 1M
1,200 – 3,000 people out of 30M (let’s say this is Canada’s population)
12,000 – 30,000 people out of 300M (let’s say this is the US’s population)
44,600 – 111,500 out of 1.115B (let’s say this is China’s population)
The rough total comes to 57,800 to 144,500 people across three countries with a combined population of 1.445B. Given how business currently operates, it seems unlikely that any company will want to offer Wilson’s hoped-for medical therapy, although he and possibly others may benefit from a clinical trial.
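Back-of-envelope estimates like this are easy to make reproducible; a quick script using the 4–10 per 100,000 prevalence rate and the rounded population figures above (the population numbers are the text’s assumptions, not census figures):

```python
# Recompute the FSHD prevalence estimates from a 4-10 per 100,000 rate.
RATE_LOW, RATE_HIGH = 4, 10  # affected people per 100,000 population

# Rounded populations, as assumed in the text above.
populations = {
    "Canada": 30_000_000,
    "US": 300_000_000,
    "China": 1_115_000_000,
}

total_low = total_high = 0
for country, pop in populations.items():
    low = pop * RATE_LOW // 100_000
    high = pop * RATE_HIGH // 100_000
    total_low += low
    total_high += high
    print(f"{country}: {low:,} - {high:,}")

print(f"Combined: {total_low:,} - {total_high:,} "
      f"out of {sum(populations.values()):,}")
```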
Should profit or wealth be considerations?
The stories about the patients with the implants and the patients who need Glybera are heartbreaking and point to a question not often asked when medical therapies and medications are developed. Is the profit model the best choice and, if so, how much profit?
I have no answer to that question but I wish it was asked by medical researchers and policy makers.
As for wealthy people dictating the direction for medical research, I don’t have answers there either. I hope the research will yield applications and/or valuable information for more than Wilson’s disease.
It’s his money after all
Wilson calls his new venture SolveFSHD. It doesn’t seem to be affiliated with any university or biomedical science organization, and it’s not clear how the money will be awarded (no programmes, no application procedure, no panel of experts). There are three people on the team: Eva R. Chin, scientist and executive director; Chip Wilson, SolveFSHD founder/funder and FSHD patient; and Neil Camarta, engineer, executive (fossil fuels and clean energy), and FSHD patient. There’s also a Twitter feed (presumably for the latest updates): https://twitter.com/SOLVEFSHD.
Low Tide Properties, the real estate arm of Lululemon founder Chip Wilson [emphasis mine], has submitted a new development permit application to build a 148-ft-tall, eight-storey, mixed-use commercial building in the False Creek Flats of Vancouver.
The proposal, designed by local architectural firm Musson Cattell Mackey Partnership, calls for 236,000 sq ft of total floor area, including 105,000 sq ft of general office space, 102,000 sq ft of laboratory space [emphasis mine], and 5,000 sq ft of ground-level retail space. An outdoor amenity space for building workers will be provided on the rooftop.
[next door] The 2001-built, five-storey building at 1618 Station Street immediately to the west of the development site is also owned by Low Tide Properties [emphasis mine]. The Ferguson, the name of the existing building, contains about 79,000 sq ft of total floor area, including 47,000 sq ft of laboratory space and 32,000 sq ft of general office space. Biotechnology company Stemcell Technologies [STEMCELL Technologies] is the anchor tenant [emphasis mine].
I wonder if this proposed new building will house SolveFSHD and perhaps other FSHD-focused enterprises. The proximity of STEMCELL Technologies could be quite convenient. In any event, $100M will buy a lot (pun intended).
Issues I’ve described here in the context of neural implants/neuroprosthetics and cutting edge medical advances are standard problems not specific to these technologies/treatments:
What happens when the technology fails (hopefully not at a critical moment)?
What happens when your supplier goes out of business or discontinues the products you purchase from them?
How much does it cost?
Who can afford the treatment/product? Will it only be for rich people?
Will this technology/procedure/etc. exacerbate or create new social tensions between social classes, cultural groups, religious groups, races, etc.?
Of course, having your neural implant fail suddenly in the middle of a New York City subway station seems a substantively different experience than having your car break down on the road.
Of course, there are also the issues we can’t yet envision (as Wolbring notes), and there are issues such as symbiotic relationships with our implants and/or feeling that you are “above human.” Whether symbiosis and ‘implant/prosthetic superiority’ will affect more than a small number of people or become major issues is still to be determined.
There’s a lot to be optimistic about where new medical research and advances are concerned but I would like to see more thoughtful coverage in the media (e.g., news programmes and documentaries like ‘Augmented’) and more thoughtful comments from medical researchers.
Of course, the biggest issue I’ve raised here is about the current business models for health care products, where profit is valued over people’s health and well-being. It’s a big question and I don’t see any definitive answers, but the question put me in mind of this quote (from a September 22, 2020 obituary for US Supreme Court Justice Ruth Bader Ginsburg by Irene Monroe for Curve),
Ginsburg’s advocacy for justice was unwavering and showed it, especially with each oral dissent. In another oral dissent, Ginsburg quoted a familiar Martin Luther King Jr. line, adding her coda: “‘The arc of the universe is long, but it bends toward justice,’” but only “if there is a steadfast commitment to see the task through to completion.” …
Martin Luther King Jr. popularized and paraphrased the quote (from a January 18, 2018 article by Mychal Denzel Smith for Huffington Post),
His use of the quote is best understood by considering his source material. “The arc of the moral universe is long, but it bends toward justice” is King’s clever paraphrasing of a portion of a sermon delivered in 1853 by the abolitionist minister Theodore Parker. Born in Lexington, Massachusetts, in 1810, Parker studied at Harvard Divinity School and eventually became an influential transcendentalist and minister in the Unitarian church. In that sermon, Parker said: “I do not pretend to understand the moral universe. The arc is a long one. My eye reaches but little ways. I cannot calculate the curve and complete the figure by experience of sight. I can divine it by conscience. And from what I see I am sure it bends toward justice.”
I choose to keep faith that people will get the healthcare products they need and that all of us need to keep working at making access more fair.
The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you, according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),
The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.
Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.
… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.
As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.
Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.
“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”
Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.
“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”
The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”
Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.
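For anyone curious about the waveform-to-spectrogram step Van Horn describes, here’s a minimal sketch of the idea using SciPy. This is a generic illustration, not Merlin’s actual pipeline: the sample rate, window sizes, and the synthetic “bird call” are all invented for the example.

```python
import numpy as np
from scipy import signal

def to_spectrogram(waveform, sample_rate=22050):
    """Convert a 1-D audio waveform into a spectrogram: a 2-D
    frequency-by-time array of amplitudes that can be treated
    like an image, as the article describes."""
    freqs, times, amplitudes = signal.spectrogram(
        waveform, fs=sample_rate, nperseg=512, noverlap=256
    )
    # Log-scaling makes quiet harmonics visible, much as audio
    # editors display them.
    return np.log1p(amplitudes)

# A one-second synthetic rising chirp standing in for a recorded call.
t = np.linspace(0, 1, 22050, endpoint=False)
call = np.sin(2 * np.pi * (2000 + 1500 * t) * t)

spec = to_spectrogram(call)
# 'spec' is now a 2-D array, ready for an image-classification model
# of the kind that powers Merlin's Photo ID feature.
print(spec.shape)
```

The point of the transformation is that, once the sound is a picture, the hard problem reduces to the image classification problem researchers had already solved.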
For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,
It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.
The Warblers is hosted by Andrea Gress and Andrés Jiménez.
I have high hopes for this debate on gene edited babies. Intelligence Squared US convenes good debates. (I watched their ‘de-extinction’ debate back in 2019, which, coincidentally, featured George Church, one of the debaters in this event.) Not ‘good’ in the sense that I necessarily agree with or am interested in the topics, but good as in thoughtful. Here’s more from the organization’s mission on their What is IQ2US? webpage,
A nonpartisan, nonprofit organization, Intelligence Squared U.S. addresses a fundamental problem in America: the extreme polarization of our nation and our politics.
Our mission is to restore critical thinking, facts, reason, and civility to American public discourse.
Use Gene Editing to Make Better Babies Hosted By John Donvan
Thursday, February 17, 2022 05:00 PM − 06:30 PM EST
A genetic disease runs in your family. Your doctor tells you that, should you wish to have a child, that child is likely to also carry the disease. But a new gene-editing technology could change your fate. It could ensure that your baby is — and remains — healthy. Even more, it could potentially make sure your grandchildren are also free of the disease. What do you do? Now, imagine it’s not a rare genetic disorder, but general illness, or eye color, or cognitive ability, or athleticism. Do you opt into this new world of genetically edited humans? And what if it’s not just you? What if your friends, neighbors, and colleagues are also embracing this genetic revolution? Right now, science doesn’t give you that choice. But huge advancements in CRISPR [clustered regularly interspaced short palindromic repeats] technology are making human gene editing a reality. In fact, in 2018, a Chinese scientist announced the first genetically modified babies; twin girls made to resist HIV, smallpox, and malaria. The promise of this technology is clear. But gene editing is not without its perils. Its critics say the technology is destined to exacerbate inequality, pressure all parents (and nations) into editing their children to stay competitive, and meddle with the most basic aspect of our humanity. In this context, we ask the question: Should we use gene editing to make better babies?
The use of gene editing allows couples to have children when that option might otherwise be unavailable to them. It also allows for less to be left to chance during the pregnancy.
Gene editing will allow for babies to be born with reduced or eliminated chances of inheriting and passing on genes linked to diseases. We have a moral imperative to use technology that will improve the quality of life.
It is only a matter of time before gene editing becomes a widespread technology, potentially used by competitors and rivals on the international stage. If we have the technology, we should use it to our advantage to remain competitive.
The use of gene editing to create “better” outcomes in children will inherently create social stratification based on any gene editing, likely reflecting existing socioeconomic status. Additionally, the term ‘better’ is arbitrary and potentially short-sighted and dangerous.
Currently, there exist reasonable alternatives to gene editing for every condition for which gene editing can be used.
The technology is still developing, and the long-term effects of any gene-editing could be potentially dangerous with consequences echoing throughout the gene environment.
A February 8, 2022 Intelligence Squared U.S. news release about the upcoming debate (received via email) provides details about the debaters,
FOR THE MOTION – BIOS
* George Church, Geneticist & Founder, Personal Genome Project George Church is one of the nation’s leading geneticists and scholars. He is a professor of genetics at Harvard Medical School and MIT. In 1984, he developed the first direct genomic sequencing method, which resulted in the first genome sequence. He also helped initiate the Human Genome Project in 1984 and the Personal Genome Project in 2005. Church also serves as the director of the National Institutes of Health Center of Excellence in Genomic Science.
* Amy Webb, Futurist & Author, “The Genesis Machine” Amy Webb is an award-winning author and futurist. She is the founder and CEO of the Future Today Institute and was named one of five women changing the world by Forbes. Her new book, “The Genesis Machine,” explores the future of synthetic biology, including human gene editing. Webb is a professor of strategic foresight at New York University’s Stern School of Business and has been elected a life member of the Council on Foreign Relations.
AGAINST THE MOTION – BIOS
* Marcy Darnovsky, Policy Advocate & Executive Director, Center for Genetics and Society Marcy Darnovsky is a policy advocate and one of the most prominent voices on the politics of human biotechnology. As executive director of the Center for Genetics and Society, Darnovsky is focused on the social justice and public interest implications of gene editing. This work is informed by her background as an organizer and advocate in a range of environmental and progressive political movements.
* Françoise Baylis, Philosopher & Author, “Altered Inheritance” Françoise Baylis is a philosopher whose innovative work in bioethics, at the intersection of policy and practice, has stretched the very boundaries of the field. She is the author of “Altered Inheritance: CRISPR and the Ethics of Human Genome Editing,” which explores the scientific, ethical, and political implications of human genome editing. Baylis is a research professor at Dalhousie University and a fellow of the Canadian Academy of Health Sciences. In 2017, she was awarded the Canadian Bioethics Society Lifetime Achievement Award.
Two of the debaters have been mentioned on this blog before, George Church and Françoise Baylis. There are several references to Church including this mention with regard to Dr. He Jiankui and his CRISPR twins (July 28, 2020 posting). Françoise Baylis features in four 2019 postings with the most recent being this October 17, 2019 piece.
For anyone curious about the ‘de-extinction’ debate, it was described here in a January 18, 2019 posting prior to the event.
For the more than 5 million people in the world who have undergone an upper-limb amputation, prosthetics have come a long way. Beyond traditional mannequin-like appendages, there is a growing number of commercial neuroprosthetics — highly articulated bionic limbs, engineered to sense a user’s residual muscle signals and robotically mimic their intended motions.
But this high-tech dexterity comes at a price. Neuroprosthetics can cost tens of thousands of dollars and are built around metal skeletons, with electrical motors that can be heavy and rigid.
Now engineers at MIT [Massachusetts Institute of Technology] and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities, such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well as — and in some cases better than — those with more rigid neuroprosthetics.
The researchers found the prosthetic, designed with a system for tactile feedback, restored some primitive sensation in a volunteer’s residual limb. The new design is also surprisingly durable, quickly recovering after being struck with a hammer or run over with a car.
The smart hand is soft and elastic, and weighs about half a pound. Its components total around $500 — a fraction of the weight and material cost associated with more rigid smart limbs.
“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” says Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering at MIT. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”
Zhao and his colleagues have published their work today [August 16, 2021] in Nature Biomedical Engineering. Co-authors include MIT postdoc Shaoting Lin, along with Guoying Gu, Xiangyang Zhu, and collaborators at Shanghai Jiao Tong University in China.
Big Hero hand
The team’s pliable new design bears an uncanny resemblance to a certain inflatable robot in the animated film “Big Hero 6.” Like the squishy android, the team’s artificial hand is made from soft, stretchy material — in this case, the commercial elastomer EcoFlex. The prosthetic comprises five balloon-like fingers, each embedded with segments of fiber, similar to articulated bones in actual fingers. The bendy digits are connected to a 3-D-printed “palm,” shaped like a human hand.
Rather than controlling each finger using mounted electrical motors, as most neuroprosthetics do, the researchers used a simple pneumatic system to precisely inflate fingers and bend them in specific positions. This system, including a small pump and valves, can be worn at the waist, significantly reducing the prosthetic’s weight.
Lin developed a computer model to relate a finger’s desired position to the corresponding pressure a pump would have to apply to achieve that position. Using this model, the team developed a controller that directs the pneumatic system to inflate the fingers, in positions that mimic five common grasps, including pinching two and three fingers together, making a balled-up fist, and cupping the palm.
The pneumatic system receives signals from EMG sensors — electromyography sensors that measure electrical signals generated by motor neurons to control muscles. The sensors are fitted at the prosthetic’s opening, where it attaches to a user’s limb. In this arrangement, the sensors can pick up signals from a residual limb, such as when an amputee imagines making a fist.
The team then used an existing algorithm that “decodes” muscle signals and relates them to common grasp types. They used this algorithm to program the controller for their pneumatic system. When an amputee imagines, for instance, holding a wine glass, the sensors pick up the residual muscle signals, which the controller then translates into corresponding pressures. The pump then applies those pressures to inflate each finger and produce the amputee’s intended grasp.
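The two-stage control loop the release describes — EMG features decoded to a grasp type, the grasp type mapped to per-finger pressures — can be sketched as follows. Everything here is illustrative: the nearest-template “decoder,” the pressure values, and the eight-dimensional feature vectors are my own stand-ins, not the team’s actual model or calibration data.

```python
import numpy as np

# Illustrative per-finger pressures (in kPa, invented numbers) for each
# grasp type; the real controller derives pressures from a model relating
# finger position to applied pressure.
GRASP_PRESSURES = {
    "fist":        np.array([80.0, 80.0, 80.0, 80.0, 80.0]),
    "pinch_two":   np.array([70.0, 70.0,  0.0,  0.0,  0.0]),
    "pinch_three": np.array([70.0, 70.0, 70.0,  0.0,  0.0]),
    "cupped_palm": np.array([40.0, 40.0, 40.0, 40.0, 40.0]),
    "open":        np.zeros(5),
}

def decode_grasp(emg_features, templates):
    """Toy decoder: pick the grasp whose stored EMG signature is
    closest (Euclidean distance) to the current feature vector."""
    return min(templates, key=lambda g: np.linalg.norm(emg_features - templates[g]))

def finger_pressures(emg_features, templates):
    """Map the decoded grasp to the pressures the pump should apply."""
    return GRASP_PRESSURES[decode_grasp(emg_features, templates)]

# Toy calibration: one stored 8-dimensional EMG signature per grasp.
rng = np.random.default_rng(0)
templates = {g: rng.normal(size=8) for g in GRASP_PRESSURES}

# A noisy reading near the "fist" signature decodes back to "fist",
# and the controller would inflate all five fingers accordingly.
reading = templates["fist"] + 0.05 * rng.normal(size=8)
print(decode_grasp(reading, templates))
print(finger_pressures(reading, templates))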
Going a step further in their design, the researchers looked to enable tactile feedback — a feature that is not incorporated in most commercial neuroprosthetics. To do this, they stitched to each fingertip a pressure sensor, which when touched or squeezed produces an electrical signal proportional to the sensed pressure. Each sensor is wired to a specific location on an amputee’s residual limb, so the user can “feel” when the prosthetic’s thumb is pressed, for example, versus the forefinger.
To test the inflatable hand, the researchers enlisted two volunteers, each with upper-limb amputations. Once outfitted with the neuroprosthetic, the volunteers learned to use it by repeatedly contracting the muscles in their arm while imagining making five common grasps.
After completing this 15-minute training, the volunteers were asked to perform a number of standardized tests to demonstrate manual strength and dexterity. These tasks included stacking checkers, turning pages, writing with a pen, lifting heavy balls, and picking up fragile objects like strawberries and bread. They repeated the same tests using a more rigid, commercially available bionic hand and found that the inflatable prosthetic was as good, or even better, at most tasks, compared to its rigid counterpart.
One volunteer was also able to intuitively use the soft prosthetic in daily activities, for instance to eat food like crackers, cake, and apples, and to handle objects and tools, such as laptops, bottles, hammers, and pliers. This volunteer could also safely manipulate the squishy prosthetic, for instance to shake someone’s hand, touch a flower, and pet a cat.
In a particularly exciting exercise, the researchers blindfolded the volunteer and found he could discern which prosthetic finger they poked and brushed. He was also able to “feel” bottles of different sizes that were placed in the prosthetic hand, and lifted them in response. The team sees these experiments as a promising sign that amputees can regain a form of sensation and real-time control with the inflatable hand.
The team has filed a patent on the design, through MIT, and is working to improve its sensing and range of motion.
“We now have four grasp types. There can be more,” Zhao says. “This design can be improved, with better decoding technology, higher-density myoelectric arrays, and a more compact pump that could be worn on the wrist. We also want to customize the design for mass production, so we can translate soft robotic technology to benefit society.”
The image accompanies Cari Shane’s August 4, 2021 article for Atlas Obscura’s Gastro Obscura about Andrew Pelling and his asparagus-based scaffolds for spinal cord stem cells (Note: A link has been removed),
Around 10 years ago, Pelling [Dr. Andrew Pelling at the University of Ottawa], a biophysicist, started thinking with his team about materials that could be used to reconstruct damaged or diseased human tissues. Surrounded by a rainbow of fresh fruits and vegetables at his University of Ottawa lab, Pelling and his team dismantle biological systems, mixing and matching parts, and put them back together in new and creative ways. It’s a little bit like a hacker who takes parts from a phone, a computer, and a car to build a robotic arm. Or like Mary Shelley’s Dr. Frankenstein, who built a monster out of cadavers. Except Pelling’s team has turned an apple into an ear and, most recently, a piece of asparagus into a scaffold for spinal-cord implants.
Pelling believes the future of regenerative medicine—which uses external therapies to help the body heal, the same way a cut heals by itself or a broken bone can mend without surgery—is in the supermarket produce aisle. He calls it “augmented biology,” and it’s a lot less expensive—by thousands and thousands of dollars—than implanting organs donated by humans, taken from animals, or manmade or bioengineered from animal tissue.
Decellularization as a process for implantation is fairly new, developed in the mid 1990s primarily by Doris Taylor. By washing out the genetic materials that make an apple an apple, for example, you are left with plant tissue, or a “cellulose mesh,” explains Pelling. “What we’re doing is washing out all the plant DNA, RNA proteins, all that sort of stuff that can cause immune responses, and rejection. And we’re just leaving behind the fiber in a plant—like literally the stuff that gets stuck in your teeth.”
When Pelling noticed the resemblance between a decellularized apple slice and an ear, he saw the true potential of his lab games. If he implanted the apple scaffolding into a living animal, he wondered, would it “be accepted” and vascularize? That is, would the test animal’s body glom onto the plant cells as if they weren’t a dangerous, foreign body and instead send out signals to create a blood supply, allowing the plant tissue to become a living part of the animal’s body? The answer was yes. “Suddenly, and by accident, we developed a material that has huge therapeutic and regenerative potential,” says Pelling. The apple ear does not enable hearing, and it remains in the animal-testing phase, but it may have applications for aesthetic implantation.
Soon after his breakthrough apple experiment, which was published in 2016 and earned him the moniker of “mad scientist,” Pelling shifted his focus to asparagus. The idea hit him when he was cooking. Looking at the end of a spear, he thought, “Hey, it looks like a spinal cord. What the hell? Maybe we can do something,” he says.
… Pelling implanted decellularized asparagus tissue under the skin of a lab rat. In just a few weeks, blood vessels flowed through the asparagus scaffolding; healthy cells from the animal moved into the tissue and turned the scaffold into living tissue. “The surprise here was that the body, instead of rejecting this material, it actually integrated into the material,” says Pelling. In the bioengineering world, getting that to happen has typically been a major challenge.
And then came the biggest surprise of all. Rats with severed spinal cords that had been implanted with the asparagus tissue were able to walk again, just a few weeks after implantation. …
While using asparagus tissue as scaffolding to repair spinal cords is not a “miracle cure,” says Pelling, it’s unlike the kinds of implants that have come before. Donated or manufactured organs are historically both more complicated and more expensive. Pelling’s implants were “done without stem cells or electrical stimulation or exoskeletons, or any of the usual approaches, but rather using very low cost, accessible materials that we honestly just bought at the grocery store,” he says, “and, we achieved the same level of recovery.” (At least in animal tests.) Plus, whereas patients usually need lifelong immunosuppressants, which can have negative side effects, to prevent their body from rejecting an implant, that doesn’t seem necessary with Pelling’s plant-based implants. And, so far, the plant-based implants don’t seem to break down over time like traditional spinal-cord implants. “The inertness of plant tissue is exactly why it’s so biocompatible,” says Pelling.
In October 2020, the asparagus implant was designated as a “breakthrough device” by the FDA [US Food and Drug Administration]. The designation means human trials will be fast-tracked and likely begin in a few years. …
Shane’s August 4, 2021 article is fascinating and well illustrated with a number of embedded images. If you have the time and the inclination, do read it.
More of Pelling’s work can be found here at the Pelling Lab website. He was mentioned (by name only as a participant in the second Canadian DIY Biology Summit organized by the Public Health Agency of Canada [PHAC]) here in an April 21, 2020 posting (my 10 year review of science culture in Canada). You’ll find the Pelling mention under the DIY Biology subhead about 20% of the way down the screen.
There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark ( spoken: “with up to 93% accuracy at 18 words per minute`’ vs. written “with median 74% accuracy at 15 words per minute)".
A July 14, 2021 news item on ScienceDaily announces the latest work on a a neuroprosthetic from the University of California at San Francisco (UCSF),
Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.
The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15  in the New England Journal of Medicine.
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.
Translating Brain Signals into Speech
Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”
Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.
Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.
But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”
In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.
The First 50 Words
To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.
The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.
For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.
Translating Attempted Speech into Text
To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.
Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen. “I am very good,” and “No, I am not thirsty.”
Chang and Moses found that the system was able to decode words from brain activity at rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”
Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.
Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”
… all of UCSF. Funding sources [emphasis mine] included National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which completed in early 2021.
UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.
Here’s a link to and a citation for the paper,
Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021
This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”