Tag Archives: BRAIN Initiative

Implantable brain-computer interface collaborative community (iBCI-CC) launched

That’s quite a mouthful, ‘implantable brain-computer interface collaborative community (iBCI-CC)’. I assume the organization will be popularly known by its abbreviation. A March 11, 2024 Mass General Brigham news release (also on EurekAlert) announces the iBCI-CC’s launch, Note: Mass stands for Massachusetts,

Mass General Brigham is establishing the Implantable Brain-Computer Interface Collaborative Community (iBCI-CC). This is the first Collaborative Community in the clinical neurosciences that has participation from the U.S. Food and Drug Administration (FDA).

BCIs are devices that interface with the nervous system and use software to interpret neural activity. Commonly, they are designed for improved access to communication or other technologies for people with physical disability. Implantable BCIs are investigational devices that hold the promise of unlocking new frontiers in restorative neurotechnology, offering potential breakthroughs in neurorehabilitation and in restoring function for people living with neurologic disease or injury.

The iBCI-CC (https://www.ibci-cc.org/) is a groundbreaking initiative aimed at fostering collaboration among diverse stakeholders to accelerate the development, safety and accessibility of iBCI technologies. The iBCI-CC brings together researchers, clinicians, medical device manufacturers, patient advocacy groups and individuals with lived experience of neurological conditions. This collaborative effort aims to propel the field of iBCIs forward by employing harmonized approaches that drive continuous innovation and ensure equitable access to these transformative technologies.

One of the first milestones for the iBCI-CC was to engage the participation of the FDA. “Brain-computer interfaces have the potential to restore lost function for patients suffering from a variety of neurological conditions. However, there are clinical, regulatory, coverage and payment questions that remain, which may impede patient access to this novel technology,” said David McMullen, M.D., Director of the Office of Neurological and Physical Medicine Devices in the FDA’s Center for Devices and Radiological Health (CDRH), and FDA member of the iBCI-CC. “The IBCI-CC will serve as an open venue to identify, discuss and develop approaches for overcoming these hurdles.”

The iBCI-CC will hold regular meetings open both to its members and the public to ensure inclusivity and transparency. Mass General Brigham will serve as the convener of the iBCI-CC, providing administrative support and ensuring alignment with the community’s objectives.

Over the past year, the iBCI-CC was organized by the interdisciplinary collaboration of leaders including Leigh Hochberg, MD, PhD, an internationally respected leader in BCI development and clinical testing and director of the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital; Jennifer French, MBA, executive director of the Neurotech Network and a Paralympic silver medalist; and Joe Lennerz, MD, PhD, a regulatory science expert and director of the Pathology Innovation Collaborative Community. These three organizers lead a distinguished group of Charter Signatories representing a diverse range of expertise and organizations.

“As a neurointensive care physician, I know how many patients with neurologic disorders could benefit from these devices,” said Dr. Hochberg. “Increasing discoveries in academia and the launch of multiple iBCI and related neurotech companies means that the time is right to identify common goals and metrics so that iBCIs are not only safe and effective, but also have thoroughly considered the design and function preferences of the people who hope to use them”.

Jennifer French said, “Bringing diverse perspectives together, including those with lived experience, is a critical component to help address complex issues facing this field.” French has decades of experience working in the neurotech and patient advocacy fields. Living with a spinal cord injury, she also uses an implanted neurotech device for daily functions. “This ecosystem of neuroscience is on the cusp to collectively move the field forward by addressing access to the latest groundbreaking technology, in an equitable and ethical way. We can’t wait to engage and recruit the broader BCI community.”

Joe Lennerz, MD, PhD, emphasized, “Engaging in pre-competitive initiatives offers an often-overlooked avenue to drive meaningful progress. The collaboration of numerous thought leaders plays a pivotal role, with a crucial emphasis on regulatory engagement to unlock benefits for patients.”

The iBCI-CC is supported by key stakeholders within the Mass General Brigham system. Merit Cudkowicz, MD, MSc, chair of the Neurology Department, director of the Sean M. Healey and AMG Center for ALS at Massachusetts General Hospital, and Julianne Dorn Professor of Neurology at Harvard Medical School, said, “There is tremendous excitement in the ALS [amyotrophic lateral sclerosis, or Lou Gehrig’s disease] community for new devices that could ease and improve the ability of people with advanced ALS to communicate with their family, friends, and care partners. This important collaborative community will help to speed the development of a new class of neurologic devices to help our patients.”

Bailey McGuire, program manager of strategy and operations at Mass General Brigham’s Data Science Office, said, “We are thrilled to convene the iBCI-CC at Mass General Brigham’s DSO. By providing an administrative infrastructure, we want to help the iBCI-CC advance regulatory science and accelerate the availability of iBCI solutions that incorporate novel hardware and software that can benefit individuals with neurological conditions. We’re excited to help in this incredible space.”

For more information about the iBCI-CC, please visit https://www.ibci-cc.org/.

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

About the iBCI-CC Organizers:

Leigh Hochberg, MD, PhD, is a neurointensivist at Massachusetts General Hospital’s Department of Neurology, where he directs the MGH Center for Neurotechnology and Neurorecovery. He is also the IDE Sponsor-Investigator and Director of the BrainGate clinical trials, conducted by a consortium of scientists and clinicians at Brown, Emory, MGH, VA Providence, Stanford, and UC-Davis; the L. Herbert Ballou University Professor of Engineering and Professor of Brain Science at Brown University; Senior Lecturer on Neurology at Harvard Medical School; and Associate Director, VA RR&D Center for Neurorestoration and Neurotechnology in Providence.

Jennifer French, MBA, is the Executive Director of Neurotech Network, a nonprofit organization that focuses on education and advocacy of neurotechnologies. She serves on several Boards including the IEEE Neuroethics Initiative, Institute of Neuroethics, OpenMind platform, BRAIN Initiative Multi-Council and Neuroethics Working Groups, and the American Brain Coalition. She is the author of On My Feet Again (Neurotech Press, 2013) and is co-author of Bionic Pioneers (Neurotech Press, 2014). French lives with tetraplegia due to a spinal cord injury. She is an early user of an experimental implanted neural prosthesis for paralysis and is the Past-President and Founding member of the North American SCI Consortium.

Joe Lennerz, MD, PhD, serves as the Chief Scientific Officer at BostonGene, an AI analytics and genomics startup based in Boston. Dr. Lennerz obtained a PhD in neurosciences, specializing in electrophysiology. He works on biomarker development and migraine research. Additionally, he is the co-founder and leader of the Pathology Innovation Collaborative Community, a regulatory science initiative focusing on diagnostics and software as a medical device (SaMD), convened by the Medical Device Innovation Consortium. He also serves as the co-chair of the federal Clinical Laboratory Fee Schedule (CLFS) advisory panel to the Centers for Medicare & Medicaid Services (CMS).

It’s been a while since I’ve come across BrainGate (see Leigh Hochberg’s bio in the above news release), which was last mentioned here in an April 2, 2021 posting, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI).”

Here are two of my more recent postings about brain-computer interfaces,

This next one is an older posting but perhaps the most relevant to the announcement of this collaborative community’s purpose,

There’s a lot more on brain-computer interfaces (BCI) here, just use the term in the blog search engine.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.
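An aside from me: ‘discrete encoding’ is worth unpacking. The DeWave code isn’t included in the press release, but the general idea can be sketched as a toy vector-quantization step that maps continuous EEG feature vectors onto a small codebook of discrete IDs, which a language model can then treat like tokens. Everything below (the codebook size, feature dimensions, and random data) is an illustrative assumption of mine, not the UTS implementation.

```python
import numpy as np

# Toy "discrete encoding" step: map each continuous EEG feature vector
# (one per time segment) onto the nearest entry in a codebook, yielding
# token-like integer IDs a language model could consume.  The codebook is
# random here; a real system would learn it from large amounts of EEG data.

rng = np.random.default_rng(0)
n_codes, feat_dim = 512, 64                      # assumed sizes
codebook = rng.normal(size=(n_codes, feat_dim))

def discretize(eeg_features: np.ndarray) -> np.ndarray:
    """Return the index of the nearest codebook entry for each segment."""
    dists = ((eeg_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Fake EEG: 20 time segments, each summarized as a 64-dimensional feature
segments = rng.normal(size=(20, feat_dim))
print(discretize(segments))   # a sequence of code IDs, i.e. "tokens"
```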

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word-level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
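To make that ‘roughly 40% on BLEU-1’ figure concrete, here’s a minimal sketch of how a BLEU-1 (unigram) score can be computed. It’s a generic illustration of the metric, not the researchers’ evaluation code, and the example sentences are my own.

```python
import math
from collections import Counter

def bleu_1(candidate: str, reference: str) -> float:
    """Unigram BLEU: clipped word-match precision times a brevity penalty."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Clipped matches: a candidate word can't be credited more times than
    # it appears in the reference.
    matches = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand).items())
    precision = matches / len(cand)
    # Brevity penalty discourages gaming the score with very short output.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Made-up example in the spirit of the 'the man' vs 'the author' mismatch
print(bleu_1("the man read the book", "the author read a book"))  # 0.6
```

Identical sentences would score 1.0; the example scores 0.6 because only three of the five candidate words match the reference.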

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”
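Reveley’s account skips over the mechanics, but descriptions of this style of decoder suggest a guess-and-check loop: a language model proposes candidate word sequences, an ‘encoding model’ predicts the brain response each candidate should evoke, and the candidates whose predictions best match the measured fMRI data survive. Here is a deliberately toy sketch of that idea, with a random stand-in for the encoding model and a made-up vocabulary; it is not the UT Austin code.

```python
import numpy as np

# Toy version of the guess-and-check decoding loop described above.  A real
# system would use a trained language model to propose words and a learned
# encoding model (semantic features -> voxel responses); here both are
# replaced by random stand-ins so only the control flow is illustrated.

rng = np.random.default_rng(1)
VOCAB = ["I", "am", "twenty", "without", "my", "license", "home", "back"]
N_VOXELS = 100

# Stand-in encoding model: a fixed random response pattern per word.
word_response = {w: rng.normal(size=N_VOXELS) for w in VOCAB}

def predict_response(words):
    """Predicted brain response for a candidate word sequence."""
    return np.mean([word_response[w] for w in words], axis=0)

def decode(measured, n_words=3, beam_width=3):
    """Keep the candidate sequences whose predicted responses best match
    the measured data (a miniature beam search)."""
    beams = [([w], 0.0) for w in VOCAB]
    for _ in range(n_words - 1):
        scored = []
        for words, _ in beams:
            for w in VOCAB:                       # an LM would rank these
                cand = words + [w]
                score = -np.linalg.norm(predict_response(cand) - measured)
                scored.append((cand, score))
        beams = sorted(scored, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Simulate a measurement evoked by "I am twenty" plus noise, then decode it.
measured = predict_response(["I", "am", "twenty"]) + 0.1 * rng.normal(size=N_VOXELS)
print(decode(measured))
```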

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
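The ‘blurring’ trick is conceptually simple. As a rough sketch (assuming a volumetric brain image and a Gaussian blur as the stand-in for fNIRS’s coarser spatial resolution; the grid size and sigma are made up, not the researchers’ parameters), it could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Rough sketch: simulate a coarser-resolution modality (e.g., fNIRS) from an
# fMRI volume by spatially smoothing it.

rng = np.random.default_rng(0)
fmri_volume = rng.normal(size=(64, 64, 40))        # fake single-timepoint scan

# Larger sigma -> blurrier volume -> coarser effective spatial resolution.
simulated_fnirs = gaussian_filter(fmri_volume, sigma=3.0)

print(fmri_volume.std(), simulated_fnirs.std())    # smoothing shrinks variance
```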

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

US White House’s grand computing challenge could mean a boost for research into artificial intelligence and brains

An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),

On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. See https://www.whitehouse.gov/blog/2015/10/15/nanotechnology-inspired-grand-challenge-future-computing. The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Here’s where the Oct. 20, 2015 posting by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog, which originated the news item, gets interesting,

 While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.

There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]

Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.

Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …
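For readers wondering what ‘neuromorphic’ computing looks like in the small, here’s a textbook leaky integrate-and-fire neuron, the kind of event-driven, brain-inspired unit that neuromorphic hardware implements. It’s a generic teaching example, not anything produced by the grand challenge, and all the parameter values are arbitrary.

```python
import numpy as np

# Textbook leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates its input, and emits a spike (then resets)
# whenever it crosses a threshold.

def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v + i_in) / tau      # leak + integrate
        if v >= v_thresh:                # threshold crossing -> spike
            spikes.append(t)
            v = v_reset
    return spikes

# Drive the neuron with a current pulse; it spikes only while driven.
current = np.concatenate([np.zeros(50), 1.5 * np.ones(200), np.zeros(50)])
print(lif_spikes(current))
```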

This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.

Gray Matters volume 2: Topics at the Intersection of Neuroscience, Ethics, and Society issued March 2015 by US Presidential Bioethics Commission

The second and final volume in the Gray Matters set (from the US Presidential Commission for the Study of Bioethical Issues, produced in response to a request from President Barack Obama regarding the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative) has just been released.

The formal title of the latest volume is Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society, volume two. The first was titled Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society, volume one.

According to volume 2 of the report’s executive summary,

… In its first volume on neuroscience and ethics, Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society, the Bioethics Commission emphasized the importance of integrating ethics and neuroscience throughout the research endeavor. This second volume, Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society, takes an in-depth look at three topics at the intersection of neuroscience and society that have captured the public’s attention.

The Bioethics Commission found widespread agreement that contemporary neuroscience holds great promise for relieving human suffering from a number of devastating neurological disorders. Less agreement exists on multiple other topics, and the Bioethics Commission focused on three cauldrons of controversy—cognitive enhancement, consent capacity, and neuroscience and the legal system. These topics illustrate the ethical tensions and societal implications of advancing neuroscience and technology, and bring into heightened relief many important ethical considerations.

A March 26, 2015 post by David Bruggeman on his Pasco Phronesis blog further describes the 168 pp. second volume of the report,

There are fourteen main recommendations in the report:

Prioritize Existing Strategies to Maintain and Improve Neural Health

Continue to examine and develop existing tools and techniques for brain health

Prioritize Treatment of Neurological Disorders

As with the previous recommendation, it would be valuable to focus on existing means of addressing neurological disorders and working to improve them.

Study Novel Neural Modifiers to Augment or Enhance Neural Function

Existing research in this area is limited and inconclusive.

Ensure Equitable Access to Novel Neural Modifiers to Augment or Enhance Neural Function

Access to cognitive enhancements will need to be handled carefully to avoid exacerbating societal inequities (think the stratified societies of the film Elysium or the Star Trek episode “The Cloud Minders”).

Create Guidance About the Use of Neural Modifiers

Professional societies and expert groups need to develop guidance for health care providers that receive requests for prescriptions for cognitive enhancements (something like an off-label use of attention deficit drugs, beta blockers or other medicines to boost cognition rather than address perceived deficits).

If you don’t have time to look at the 2nd volume, David’s post covers many of the important points.

Nanoparticle-based radiogenetics to control brain cells

While the title for this post sounds like an opening for a zombie-themed story, this Oct. 8, 2014 news item on Nanowerk actually concerns brain research at Rockefeller University (US), Note: A link has been removed,

A proposal to develop a new way to remotely control brain cells from Sarah Stanley, a Research Associate in Rockefeller University’s Laboratory of Molecular Genetics, headed by Jeffrey M. Friedman, is among the first to receive funding from the BRAIN initiative. The project will make use of a technique called radiogenetics that combines the use of radio waves or magnetic fields with nanoparticles to turn neurons on or off.

An Oct. 7, 2014 Rockefeller University news release, which originated the news item, further describes the BRAIN initiative and the research (Note: Links have been removed),

The NIH [National Institutes of Health]  is one of four federal agencies involved in the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative. Following in the ambitious footsteps of the Human Genome Project, the BRAIN initiative seeks to create a dynamic map of the brain in action, a goal that requires the development of new technologies. The BRAIN initiative working group, which outlined the broad scope of the ambitious project, was co-chaired by Rockefeller’s Cori Bargmann, head of the Laboratory of Neural Circuits and Behavior.

Stanley’s grant, for $1.26 million over three years, is one of 58 projects to get BRAIN grants, the NIH announced. The NIH’s plan for its part of this national project, which has been pitched as “America’s next moonshot,” calls for $4.5 billion in federal funds over 12 years.

The technology Stanley is developing would enable researchers to manipulate the activity of neurons, as well as other cell types, in freely moving animals in order to better understand what these cells do. Other techniques for controlling selected groups of neurons exist, but her new nanoparticle-based technique has a unique combination of features that may enable new types of experimentation. For instance, it would allow researchers to rapidly activate or silence neurons within a small area of the brain or dispersed across a larger region, including those in difficult-to-access locations. Stanley also plans to explore the potential this method has for use treating patients.

“Francis Collins, director of the NIH, has discussed the need for studying the circuitry of the brain, which is formed by interconnected neurons. Our remote-control technology may provide a tool with which researchers can ask new questions about the roles of complex circuits in regulating behavior,” Stanley says.

Here’s an image that Rockefeller University has used to illustrate the concept of radio-controlled brain cells,

 

BRAIN control: The new technology uses radio waves to activate or silence cells remotely. The bright spots above represent cells with increased calcium after treatment with radio waves, a change that would allow neurons to fire. [downloaded from: http://newswire.rockefeller.edu/2014/10/07/rockefeller-neurobiology-lab-is-awarded-first-round-brain-initiative-grant/]

You can find out more about the US BRAIN initiative here.

US military wants you to remember

While this July 10, 2014 news item on ScienceDaily concerns DARPA, an implantable neural device, and the Lawrence Livermore National Laboratory (LLNL), it is a new project and not the one featured here in a June 18, 2014 posting titled: ‘DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface’.

The new project as per the July 10, 2014 news item on ScienceDaily concerns memory,

The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) awarded Lawrence Livermore National Laboratory (LLNL) up to $2.5 million to develop an implantable neural device with the ability to record and stimulate neurons within the brain to help restore memory, DARPA officials announced this week.

The research builds on the understanding that memory is a process in which neurons in certain regions of the brain encode information, store it and retrieve it. Certain types of illnesses and injuries, including Traumatic Brain Injury (TBI), Alzheimer’s disease and epilepsy, disrupt this process and cause memory loss. TBI, in particular, has affected 270,000 military service members since 2000.

A July 2, 2014 LLNL news release, which originated the news item, provides more detail,

The goal of LLNL’s work — driven by LLNL’s Neural Technology group and undertaken in collaboration with the University of California, Los Angeles (UCLA) and Medtronic — is to develop a device that uses real-time recording and closed-loop stimulation of neural tissues to bridge gaps in the injured brain and restore individuals’ ability to form new memories and access previously formed ones.

Specifically, the Neural Technology group will seek to develop a neuromodulation system — a sophisticated electronics system to modulate neurons — that will investigate areas of the brain associated with memory to understand how new memories are formed. The device will be developed at LLNL’s Center for Bioengineering.

“Currently, there is no effective treatment for memory loss resulting from conditions like TBI,” said LLNL’s project leader Satinderpall Pannu, director of LLNL’s Center for Bioengineering, a unique facility dedicated to fabricating biocompatible neural interfaces. …

LLNL will develop a miniature, wireless and chronically implantable neural device that will incorporate both single neuron and local field potential recordings into a closed-loop system to implant into TBI patients’ brains. The device — implanted into the entorhinal cortex and hippocampus — will allow for stimulation and recording from 64 channels located on a pair of high-density electrode arrays. The entorhinal cortex and hippocampus are regions of the brain associated with memory.

The arrays will connect to an implantable electronics package capable of wireless data and power telemetry. An external electronic system worn around the ear will store digital information associated with memory storage and retrieval and provide power telemetry to the implantable package using a custom RF-coil system.

Designed to last throughout the duration of treatment, the device’s electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.
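To give a sense of what ‘closed-loop’ means in this context, here is a purely conceptual record-detect-stimulate loop. It is not based on the LLNL hardware or firmware; the 64-channel count echoes the news release, but the window size, the feature, and the threshold are all assumptions I’ve made for illustration.

```python
import numpy as np

# Purely conceptual closed-loop neuromodulation sketch: read a window of
# neural data, test for a (hypothetical) signature, and trigger stimulation
# when it appears.

N_CHANNELS = 64              # channel count mentioned in the news release
WINDOW = 256                 # samples per decision window (assumed)
POWER_THRESHOLD = 1.0        # made-up detection threshold

def read_window(rng):
    """Stand-in for acquiring one window of data from the electrode arrays."""
    return rng.normal(size=(N_CHANNELS, WINDOW))

def mean_power(window):
    """Crude stand-in for a band-power feature averaged across channels."""
    return float(np.mean(window ** 2))

def stimulate():
    """Stand-in for commanding a stimulation pulse."""
    print("stimulation pulse delivered")

rng = np.random.default_rng(0)
for _ in range(5):                       # a real device loops continuously
    if mean_power(read_window(rng)) > POWER_THRESHOLD:
        stimulate()
```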

Using the Center for Bioengineering’s capabilities, Pannu and his team of engineers have achieved 25 patents and many publications during the last decade. The team’s goal is to build the new prototype device for clinical testing by 2017.

Lawrence Livermore’s collaborators, UCLA and Medtronic, will focus on conducting clinical trials and fabricating parts and components, respectively.

“The RAM [Restoring Active Memory] program poses a formidable challenge reaching across multiple disciplines from basic brain research to medicine, computing and engineering,” said Itzhak Fried, lead investigator for UCLA on this project and professor of neurosurgery and psychiatry and biobehavioral sciences at the David Geffen School of Medicine at UCLA and the Semel Institute for Neuroscience and Human Behavior. “But at the end of the day, it is the suffering individual, whether an injured member of the armed forces or a patient with Alzheimer’s disease, who is at the center of our thoughts and efforts.”

LLNL’s work on the Restoring Active Memory program supports [US] President [Barack] Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.

Obama’s BRAIN is picking up speed.

DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface

I’m not a huge fan of implantable neural devices (at least not the ones that facilitate phone calls directly to and from the brain as per my April 30, 2010 posting; scroll down about 40% of the way), but they are important from a therapeutic perspective. On that note, the Lawrence Livermore National Laboratory (LLNL) has received an award of $5.6M from the US Defense Advanced Research Projects Agency (DARPA) to advance its work on implantable neural interfaces. From a June 13, 2014 news item on Azonano,

Lawrence Livermore National Laboratory recently received $5.6 million from the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) to develop an implantable neural interface with the ability to record and stimulate neurons within the brain for treating neuropsychiatric disorders.

The technology will help doctors to better understand and treat post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), chronic pain and other conditions.

Several years ago, researchers at Lawrence Livermore in conjunction with Second Sight Medical Products developed the world’s first neural interface (an artificial retina) that was successfully implanted into blind patients to help partially restore their vision. The new neural device is based on similar technology used to create the artificial retina.

An LLNL June 11, 2014 news release, which originated the news item, provides some fascinating insight into the interrelations between various US programs focused on the brain and neural implants,

“DARPA is an organization that advances technology by leaps and bounds,” said LLNL’s project leader Satinderpall Pannu, director of the Lab’s Center for Micro- and Nanotechnology and Center for Bioengineering, a facility dedicated to fabricating biocompatible neural interfaces. “This DARPA program will allow us to develop a revolutionary device to help patients suffering from neuropsychiatric disorders and other neural conditions.”

The project is part of DARPA’s SUBNETS (Systems-Based Neurotechnology for Emerging Therapies) program. The agency is launching new programs to support President Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, a new research effort aimed to revolutionize our understanding of the human mind and uncover ways to treat, prevent and cure brain disorders.

LLNL and Medtronic are collaborating with UCSF, UC Berkeley, Cornell University, New York University, PositScience Inc. and Cortera Neurotechnologies on the DARPA SUBNETS project. Some collaborators will be developing the electronic components of the device, while others will be validating and characterizing it.

As part of its collaboration with LLNL, Medtronic will consult on the development of new technologies and provide its investigational Activa PC+S deep brain stimulation (DBS) system, which is the first to enable the sensing and recording of brain signals while simultaneously providing targeted DBS. This system has recently been made available to leading researchers for early-stage research and could lead to a better understanding of how various devastating neurological conditions develop and progress. The knowledge gained as part of this collaboration could lead to the next generation of advanced systems for treating neural disease.

As for what LLNL will contribute (from the news release),

The LLNL Neural Technology group will develop an implantable neural device with hundreds of electrodes by leveraging their thin-film neural interface technology, a more than tenfold increase over current Deep Brain Stimulation (DBS) devices. The electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The goal is to seal the electronic components in miniaturized, self-contained, wireless neural hardware. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Surgically implanted into the brain, the neural device is designed to help researchers understand the underlying dynamics of neuropsychiatric disorders and re-train neural networks to unlearn these disorders and restore proper function. The goal is for the device eventually to be removed from the patient, rather than leaving the patient dependent on it.

This image from LLNL illustrates their next generation neural implant,

This rendering shows the next generation neural device capable of recording and stimulating the human central nervous system being developed at Lawrence Livermore National Laboratory. The implantable neural interface will record from and stimulate neurons within the brain for treating neuropsychiatric disorders.


I expect there will be many more ‘brain’ projects to come with the advent of the US BRAIN initiative (funds of $100M in 2014 and $200M in 2015) and the European Union’s Human Brain Project (1B Euros to be spent on research over a 10-year period).

Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (part three of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’, which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Because my language skills extend only to English and, more tenuously, French, I can’t provide a more ‘global’ perspective. Part five features a summary.

A May 14, 2014 news release on EurekAlert announced the release of volume 1 (in a projected 2-volume series) from the US Presidential Commission for the Study of Bioethical Issues in response to a request from President Barack Obama regarding the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative,

Bioethics commission plays early role in BRAIN Initiative
Calls for integrating ethics explicitly throughout neuroscience research
‘Everyone benefits when the emphasis is on integration, not intervention’

Washington, DC— Calling for the integration of ethics across the life of neuroscientific research endeavors, the Presidential Commission for the Study of Bioethical Issues (Bioethics Commission) released volume one of its two-part response to President Obama’s request related to the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The report, Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society, includes four recommendations for institutions and individuals engaged in neuroscience research including government agencies and other funders.

You can find volume one: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society here. For those who prefer the short story, here’s more from the news release,

“Neurological conditions—which include addiction, chronic pain, dementia, depression, epilepsy, multiple sclerosis, Parkinson’s disease, schizophrenia, stroke, and traumatic brain injury, among other conditions—affect more than one billion people globally. Neuroscience has begun to make important breakthroughs, but given the complexity of the brain, we must better understand it in order to make desired progress,” said Amy Gutmann, Ph.D., Bioethics Commission Chair. “But because research on our brains strikes at the very core of who we are, the ethical stakes of neuroscience research could not be higher. Ethicists and scientists should be together at the table in the earliest stages of research planning fostering a fluent two-way conversation. Too often in our nation’s past, ethical lapses in research have had tragic consequences and derailed scientific progress.”

President Obama asked the Bioethics Commission to play a critical role in ensuring that neuroscientific investigational methods and protocols are consistent with sound ethical principles and practices. Specifically the President asked the Bioethics Commission to “identify proactively a set of core ethical standards – both to guide neuroscience research and to address some of the ethical dilemmas that may be raised by the application of neuroscience research findings.”

“Our rapidly advancing knowledge of the nervous system – and ability to detect disease sometimes even before symptoms begin – has not yet led to much needed breakthroughs in treatment, repair, and prevention; the BRAIN initiative will hopefully accelerate the trajectory of discoveries against terrible neurologic maladies,” Commission Member and neuroimmunologist Stephen Hauser, M.D., said.

In its report the Bioethics Commission noted that when facing the promise of neuroscience, we are compelled to consider carefully scientific advances that have the potential to alter our conception of the very private and autonomous nature of self. Our understanding of the mind, our private thoughts, and our volition necessitates careful reflection about the scientific, societal, and ethical aspects of neuroscience endeavors. Integrating ethics explicitly and systematically into the relatively new field of contemporary neuroscience allows us to incorporate ethical insights into the scientific process and to consider societal implications of neuroscience research from the start. Early ethics integration can prevent the need for corrective interventions resulting from ethical mishaps that erode public trust in science.

“In short, everyone benefits when the emphasis is on integration, not intervention,” Gutmann said. “Ethics in science must not come to the fore for the first time after something has gone wrong. An essential step is to include expert ethicists in the BRAIN Initiative advisory and review bodies.”

Recommendations

In its report the Bioethics Commission noted that although ethics is already integrated into science in various ways, more explicit and systematic integration serves to elucidate implicit ethical judgments and allows their merits to be assessed more thoughtfully. The Commission offered four recommendations.

  1. Integrate ethics early and explicitly throughout research: Institutions and individuals engaged in neuroscience research should integrate ethics across the life of a research endeavor, identifying the key ethical questions associated with their research and taking immediate steps to make explicit their systems for addressing those questions. Sufficient resources should be dedicated to support ethics integration. Approaches to ethics integration discussed by the Bioethics Commission include:
    a. Implementing ethics education at all levels
    b. Developing institutional infrastructure to facilitate integration
    c. Researching the ethical, legal, and social implications of scientific research
    d. Providing research ethics consultation services
    e. Engaging with stakeholders
    f. Including an ethics perspective on the research team
  2. Evaluate existing and innovative approaches to ethics integration: Government agencies and other research funders should initiate and support research that evaluates existing as well as innovative approaches to ethics integration. Institutions and individuals engaged in neuroscience research should take into account the best available evidence for what works when implementing, modifying, or improving systems for ethics integration.
  3. Integrate ethics and science through education at all levels: Government agencies and other research funders should initiate and support research that develops innovative models and evaluates existing and new models for integrating ethics and science through education at all levels.
  4. Explicitly include ethical perspectives on advisory and review bodies: BRAIN Initiative-related scientific advisory and funding review bodies should include substantive participation by persons with relevant expertise in the ethical and societal implications of the neuroscience research under consideration.

Next the Bioethics Commission will consider the ethical and societal implications of neuroscience research and its applications more broadly – ethical implications that a strongly integrated research and ethics infrastructure will be well equipped to address, and that myriad stakeholders, including scientists, ethicists, educators, public and private funders, advocacy organizations, and the public should be prepared to handle.

Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society is the Bioethics Commission’s seventh report. The Commission seeks to identify and promote policies and practices that ensure that scientific research, health care delivery, and technological innovation are conducted by the United States in a socially and ethically responsible manner. The Commission is an independent, deliberative panel of thoughtful experts that advises the President and the Administration, and, in so doing, educates the nation on bioethical issues. To date the Commission has:

  • Advised the White House on the benefits and risks of synthetic biology;
  • Completed an independent historical overview and ethical analysis of the U.S. Public Health Service STD experiments in Guatemala in the 1940s;
  • Assessed the rules that currently protect human participants in research;
  • Examined the pressing privacy concerns raised by the emergence and increasing use of whole genome sequencing;
  • Conducted a thorough review of the ethical considerations of conducting clinical trials of medical countermeasures with children, including the ethical considerations involved in conducting a pre-and post-event study of anthrax vaccine adsorbed for post-exposure prophylaxis with children; and
  • Offered ethical analysis and recommendations for clinicians, researchers, and direct-to-consumer testing companies on how to manage the increasingly common issue of incidental and secondary findings.

David Bruggeman offers a few thoughts on this volume of the series in a May 14, 2014 posting on his Pasco Phronesis blog,

Of specific application to the BRAIN Initiative is the need to include professionals with expertise in ethics in advisory boards and similar entities conducting research in this area.

Volume Two will focus more on the social and ethical implications of neuroscience research,  …

While it’s not mentioned in the news release, human enhancement is part of the discussion, as per the hearing in February 2014. Perhaps it will be mentioned in volume two? Here’s an early post (July 27, 2009) I wrote on human enhancement, which provides some information about a then-recent European Parliament report on the subject. The post was part of a series.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

BRAIN and ethics in the US with some Canucks (not the hockey team) participating (part two of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’, which brings together a number of developments in the worlds of neuroscience*, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Because my language skills extend only to English and, more tenuously, French, I can’t provide a more ‘global’ perspective. Part five features a summary.

Before further discussing the US Presidential Commission for the Study of Bioethical Issues ‘brain’ meetings mentioned in part one, I have some background information.

The US launched its self-explanatory BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative (originally called BAM; Brain Activity Map) in 2013. (You can find more about the history and details in this Wikipedia entry.)

From the beginning there has been discussion about how nanotechnology will be of fundamental use in the US BRAIN initiative and the European Union’s 10-year Human Brain Project (there’s more about that in my Jan. 28, 2013 posting). There’s also a 2013 book (Nanotechnology, the Brain, and the Future) from Springer, which, according to the table of contents, presents an exciting (to me) range of ideas about nanotechnology and brain research,

I. Introduction and key resources

1. Nanotechnology, the brain, and the future: Anticipatory governance via end-to-end real-time technology assessment by Jason Scott Robert, Ira Bennett, and Clark A. Miller
2. The complex cognitive systems manifesto by Richard P. W. Loosemore
3. Analysis of bibliometric data for research at the intersection of nanotechnology and neuroscience by Christina Nulle, Clark A. Miller, Harmeet Singh, and Alan Porter
4. Public attitudes toward nanotechnology-enabled human enhancement in the United States by Sean Hays, Michael Cobb, and Clark A. Miller
5. U.S. news coverage of neuroscience nanotechnology: How U.S. newspapers have covered neuroscience nanotechnology during the last decade by Doo-Hun Choi, Anthony Dudo, and Dietram Scheufele
6. Nanoethics and the brain by Valerye Milleson
7. Nanotechnology and religion: A dialogue by Tobie Milford

II. Brain repair

8. The age of neuroelectronics by Adam Keiper
9. Cochlear implants and Deaf culture by Derrick Anderson
10. Healing the blind: Attitudes of blind people toward technologies to cure blindness by Arielle Silverman
11. Ethical, legal and social aspects of brain-implants using nano-scale materials and techniques by Francois Berger et al.
12. Nanotechnology, the brain, and personal identity by Stephanie Naufel

III. Brain enhancement

13. Narratives of intelligence: the sociotechnical context of cognitive enhancement by Sean Hays
14. Towards responsible use of cognitive-enhancing drugs by the healthy by Henry T. Greely et al.
15. The opposite of human enhancement: Nanotechnology and the blind chicken debate by Paul B. Thompson
16. Anticipatory governance of human enhancement: The National Citizens’ Technology Forum by Patrick Hamlett, Michael Cobb, and David Guston
a. Arizona site report
b. California site report
c. Colorado site report
d. Georgia site report
e. New Hampshire site report
f. Wisconsin site report

IV. Brain damage

17. A review of nanoparticle functionality and toxicity on the central nervous system by Yang et al.
18. Recommendations for a municipal health and safety policy for nanomaterials: A Report to the City of Cambridge City Manager by Sam Lipson
19. Museum of Science Nanotechnology Forum lets participants be the judge by Mark Griffin
20. Nanotechnology policy and citizen engagement in Cambridge, Massachusetts: Local reflexive governance by Shannon Conley

Thanks to David Bruggeman’s May 13, 2014 posting on his Pasco Phronesis blog, I stumbled across both a future meeting notice and documentation of the Feb. 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (Note: Links have been removed),

Continuing from its last meeting (in February 2014), the Presidential Commission for the Study of Bioethical Issues will continue working on the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in its June 9-10 meeting in Atlanta, Georgia.  An agenda is still forthcoming, …

In other developments, Commission staff are apparently going to examine some efforts to engage bioethical issues through plays. I’d be very excited to see some of this happen during a Commission meeting, but any little bit is interesting. The authors of these plays, Karen H. Rothenberg and Lynn W. Bush, have published excerpts in their book The Drama of DNA: Narrative Genomics.  …

The Commission also has a YouTube channel …

Integrating a theatrical experience into the reams of public engagement exercises that technologies such as stem cells, GMOs (genetically modified organisms), and nanotechnology tend to spawn seems a delightful idea.

Interestingly, the meeting in June 2014 will coincide with the book’s release date. I dug further and found these snippets of information. The book is being published by Oxford University Press and is available in both paperback and e-book formats. The authors are not playwrights, as one might assume. From the Author Information page,

Lynn Bush, PhD, MS, MA is on the faculty of Pediatric Clinical Genetics at Columbia University Medical Center, a faculty associate at their Center for Bioethics, and serves as an ethicist on pediatric and genomic advisory committees for numerous academic medical centers and professional organizations. Dr. Bush has an interdisciplinary graduate background in clinical and developmental psychology, bioethics, genomics, public health, and neuroscience that informs her research, writing, and teaching on the ethical, psychological, and policy challenges of genomic medicine and clinical research with children, and prenatal-newborn screening and sequencing.

Karen H. Rothenberg, JD, MPA serves as Senior Advisor on Genomics and Society to the Director, National Human Genome Research Institute and Visiting Scholar, Department of Bioethics, Clinical Center, National Institutes of Health. She is the Marjorie Cook Professor of Law, Founding Director, Law & Health Care Program and former Dean at the University of Maryland Francis King Carey School of Law and Visiting Professor, Johns Hopkins Berman Institute of Bioethics. Professor Rothenberg has served as Chair of the Maryland Stem Cell Research Commission, President of the American Society of Law, Medicine and Ethics, and has been on many NIH expert committees, including the NIH Recombinant DNA Advisory Committee.

It is possible to get a table of contents for the book, but I notice not a single playwright is mentioned in any of the promotional material. While I like the idea in principle, it seems a bit odd and suggests that these are purpose-written plays. I have not had good experiences with purpose-written plays, which tend to be didactic and dull, especially when they’re not devised by a professional storyteller.

You can find out more about the upcoming ‘bioethics’ June 9 – 10, 2014 meeting here. As for the Feb. 10 – 11, 2014 meeting, the Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post featured only Barbara Herr Harthorn’s participation (she is director of the Center for Nanotechnology in Society at the University of California at Santa Barbara).

It turns out there are some Canadian tidbits. From the Meeting Sixteen: Feb. 10-11, 2014 webcasts page (each presenter is featured in their own webcast of approximately 11 minutes),

Timothy Caulfield, LL.M., F.R.S.C., F.C.A.H.S.

Canada Research Chair in Health Law and Policy
Professor in the Faculty of Law
and the School of Public Health
University of Alberta

Eric Racine, Ph.D.

Director, Neuroethics Research Unit
Associate Research Professor
Institut de Recherches Cliniques de Montréal
Associate Research Professor,
Department of Medicine
Université de Montréal
Adjunct Professor, Department of Medicine and Department of Neurology and Neurosurgery,
McGill University

It was a surprise to see a couple of Canucks listed as presenters, and I’m grateful that the Presidential Commission for the Study of Bioethical Issues is so generous with information. In addition to the webcasts, there is the Federal Register Notice of the meeting, an agenda, transcripts, and presentation materials. By the way, Caulfield discussed hype and Racine discussed public understanding of science with regard to neuroscience, both fitting into the overall theme of communication. I’ll have to look more thoroughly, but it seems to me there’s no mention of pop culture as a means of communicating about science and technology.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series:

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

* ‘neursocience’ corrected to ‘neuroscience’ on May 20, 2014.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing the topic up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].
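
As an aside, the PyNN language mentioned above is a real, still-available Python package, and a short sketch may help show what “simulator-independent” means in practice. The example below is my own illustration, not code from Berger’s article or the FACETS project; it assumes PyNN (0.8+ API) with the NEST backend installed, and the population sizes, connection probability, and weights are arbitrary placeholder values.

# Minimal PyNN sketch: Poisson spike sources driving integrate-and-fire cells.
# Swapping "pyNN.nest" for another backend (e.g. pyNN.neuron) is the point of
# simulator independence: the model description stays the same.
import pyNN.nest as sim

sim.setup(timestep=0.1)  # simulation resolution in ms

# 100 Poisson spike sources and 50 conductance-based integrate-and-fire cells
inputs = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(50, sim.IF_cond_exp())

# Connect them sparsely with fixed (static) synapses
sim.Projection(inputs, neurons,
               sim.FixedProbabilityConnector(p_connect=0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0))

neurons.record('spikes')
sim.run(1000.0)                  # one second of biological time
spike_data = neurons.get_data()  # Neo data structure holding the spike trains
sim.end()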

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single-element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
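
For readers wondering what “emulating synaptic plasticity” means computationally, here is a deliberately simplified Python/NumPy sketch of the pair-based spike-timing-dependent plasticity (STDP) rule that devices like these aim to reproduce in hardware. It is my own toy example, not code from the Stanford or NOMFET work, and all parameter values are illustrative.

import numpy as np

# Pair-based STDP: a synapse is potentiated when a presynaptic spike precedes
# the postsynaptic spike, and depressed when the order is reversed.
A_plus, A_minus = 0.010, 0.012      # learning rates (illustrative)
tau_plus, tau_minus = 20.0, 20.0    # decay time constants in ms (illustrative)

def stdp_weight_change(dt_ms):
    """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:   # pre before post -> strengthen (long-term potentiation)
        return A_plus * np.exp(-dt_ms / tau_plus)
    else:           # post before pre -> weaken (long-term depression)
        return -A_minus * np.exp(dt_ms / tau_minus)

# Sweep the pairing interval to trace out the classic asymmetric STDP curve
for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+3d} ms -> dw = {stdp_weight_change(dt):+.5f}")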

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
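
To make the “programmed resistance that remains stored” idea concrete, here is a small Python sketch of the linear ion-drift memristor model popularized by the 2008 HP Labs description. It is a toy illustration of the general behaviour rather than code from any of the cited studies, and the device parameters are rough, illustrative values.

import numpy as np

# Linear ion-drift memristor model (after the 2008 HP Labs description).
R_on, R_off = 100.0, 16e3   # ohms: resistance of fully doped / undoped device
D = 10e-9                   # m: device thickness
mu_v = 1e-14                # m^2 s^-1 V^-1: dopant mobility
x = 0.1                     # normalized width of the doped region (the state)

dt = 1e-6                             # s: integration time step
t = np.arange(0.0, 0.02, dt)
v = np.sin(2 * np.pi * 100.0 * t)     # 1 V, 100 Hz sinusoidal drive

i = np.empty_like(t)
for k, vk in enumerate(v):
    R = R_on * x + R_off * (1.0 - x)  # resistance set by the current state
    i[k] = vk / R
    # The state drifts in proportion to the charge flowing through the device,
    # so the resistance is "programmed" by past current and then retained.
    x += mu_v * (R_on / D**2) * i[k] * dt
    x = min(max(x, 0.0), 1.0)

# Plotting i against v would show the pinched hysteresis loop that signals
# memristive behaviour: the device "remembers" the charge that has passed.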

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach, and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.