Tag Archives: Stanford University

A graphene ‘camera’ and your beating heart: say cheese

Comparing it to a ‘camera’, even with the quotes, is a bit of a stretch for my taste but I can’t come up with a better comparison. Here’s a video so you can judge for yourself,

Caption: This video repeats three times the graphene camera images of a single beat of an embryonic chicken heart. The images, separated by 5 milliseconds, were measured by a laser bouncing off a graphene sheet lying beneath the heart. The images are about 2 millimeters on a side. Credit: UC Berkeley images by Halleh Balch, Allister McGuire and Jason Horng

A June 16, 2021 news item on ScienceDaily announces the research,

Bay Area [San Francisco, California] scientists have captured the real-time electrical activity of a beating heart, using a sheet of graphene to record an optical image — almost like a video camera — of the faint electric fields generated by the rhythmic firing of the heart’s muscle cells.

A University of California at Berkeley (UC Berkeley) June 16, 2021 news release (also on EurekAlert) by Robert Sanders, which originated the news item, provides more detail,

The graphene camera represents a new type of sensor useful for studying cells and tissues that generate electrical voltages, including groups of neurons or cardiac muscle cells. To date, electrodes or chemical dyes have been used to measure electrical firing in these cells. But electrodes and dyes measure the voltage at one point only; a graphene sheet measures the voltage continuously over all the tissue it touches.

The development, published online last week in the journal Nano Letters, comes from a collaboration between two teams of quantum physicists at the University of California, Berkeley, and physical chemists at Stanford University.

“Because we are imaging all cells simultaneously onto a camera, we don’t have to scan, and we don’t have just a point measurement. We can image the entire network of cells at the same time,” said Halleh Balch, one of three first authors of the paper and a recent Ph.D. recipient in UC Berkeley’s Department of Physics.

While the graphene sensor works without having to label cells with dyes or tracers, it can easily be combined with standard microscopy to image fluorescently labeled nerve or muscle tissue while simultaneously recording the electrical signals the cells use to communicate.

“The ease with which you can image an entire region of a sample could be especially useful in the study of neural networks that have all sorts of cell types involved,” said another first author of the study, Allister McGuire, who recently received a Ph.D. from Stanford. “If you have a fluorescently labeled cell system, you might only be targeting a certain type of neuron. Our system would allow you to capture electrical activity in all neurons and their support cells with very high integrity, which could really impact the way that people do these network level studies.”

Graphene is a one-atom thick sheet of carbon atoms arranged in a two-dimensional hexagonal pattern reminiscent of honeycomb. The 2D structure has captured the interest of physicists for several decades because of its unique electrical properties and robustness and its interesting optical and optoelectronic properties.

“This is maybe the first example where you can use an optical readout of 2D materials to measure biological electrical fields,” said senior author Feng Wang, UC Berkeley professor of physics. “People have used 2D materials to do some sensing with pure electrical readout before, but this is unique in that it works with microscopy so that you can do parallel detection.”

The team calls the tool a critically coupled waveguide-amplified graphene electric field sensor, or CAGE sensor.

“This study is just a preliminary one; we want to showcase to biologists that there is such a tool you can use, and you can do great imaging. It has fast time resolution and great electric field sensitivity,” said the third first author, Jason Horng, a UC Berkeley Ph.D. recipient who is now a postdoctoral fellow at the National Institute of Standards and Technology. “Right now, it is just a prototype, but in the future, I think we can improve the device.”

Graphene is sensitive to electric fields

Ten years ago, Wang discovered that an electric field affects how graphene reflects or absorbs light. Balch and Horng exploited this discovery in designing the graphene camera. They obtained a sheet of graphene about 1 centimeter on a side produced by chemical vapor deposition in the lab of UC Berkeley physics professor Michael Crommie and placed on it a live heart from a chicken embryo, freshly extracted from a fertilized egg. These experiments were performed in the Stanford lab of Bianxiao Cui, who develops nanoscale tools to study electrical signaling in neurons and cardiac cells.

The team showed that when the graphene was tuned properly, the electrical signals that flowed along the surface of the heart during a beat were sufficient to change the reflectance of the graphene sheet.

“When cells contract, they fire action potentials that generate a small electric field outside of the cell,” Balch said. “The absorption of graphene right under that cell is modified, so we will see a change in the amount of light that comes back from that position on the large area of graphene.”

In initial studies, however, Horng found that the change in reflectance was too small to detect easily. An electric field reduces the reflectance of graphene by at most 2%; the effect was much less from changes in the electric field when the heart muscle cells fired an action potential.

Together, Balch, Horng and Wang found a way to amplify this signal by adding a thin waveguide below graphene, forcing the reflected laser light to bounce internally about 100 times before escaping. This made the change in reflectance detectable by a normal optical video camera.

“One way of thinking about it is that the more times that light bounces off of graphene as it propagates through this little cavity, the more effects that light feels from graphene’s response, and that allows us to obtain very, very high sensitivity to electric fields and voltages down to microvolts,” Balch said.
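To get a feel for why the repeated bouncing matters, here’s a quick back-of-envelope sketch in Python. This is my own illustration, not the paper’s critically coupled waveguide model: I’ve simply assumed each bounce applies the same small reflectance change, using the 2% upper bound quoted above and roughly 100 bounces (the real cardiac signal produces a much smaller single-pass change).

```python
# Crude multiplicative picture of how a tiny per-pass change accumulates over
# many internal reflections. Illustrative numbers only; not the paper's
# critically coupled waveguide physics.

delta = 0.02   # ~2% single-pass reflectance change (upper bound quoted above)
N = 100        # roughly 100 internal bounces in the waveguide

single_pass = delta
multi_pass = 1 - (1 - delta) ** N   # cumulative fractional change after N passes

print(f"single pass change: {single_pass:.1%}")   # 2.0%
print(f"after {N} bounces:  {multi_pass:.1%}")    # ~86.7%
```

The point is simply that a per-pass change too small for a camera to register becomes a large cumulative change after enough reflections.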

The increased amplification necessarily lowers the resolution of the image, but at 10 microns, it is more than enough to study cardiac cells that are several tens of microns across, she said.

Another application, McGuire said, is to test the effect of drug candidates on heart muscle before these drugs go into clinical trials to see whether, for example, they induce an unwanted arrhythmia. To demonstrate this, he and his colleagues observed the beating chicken heart with CAGE and an optical microscope while infusing it with a drug, blebbistatin, that inhibits the muscle protein myosin. They observed the heart stop beating, but CAGE showed that the electrical signals were unaffected.

Because graphene sheets are mechanically tough, they could also be placed directly on the surface of the brain to get a continuous measure of electrical activity — for example, to monitor neuron firing in the brains of those with epilepsy or to study fundamental brain activity. Today’s electrode arrays measure activity at a few hundred points, not continuously over the brain surface.

“One of the things that is amazing to me about this project is that electric fields mediate chemical interactions, mediate biophysical interactions — they mediate all sorts of processes in the natural world — but we never measure them. We measure current, and we measure voltage,” Balch said. “The ability to actually image electric fields gives you a look at a modality that you previously had little insight into.”

Here’s a link to and a citation for the paper,

Graphene Electric Field Sensor Enables Single Shot Label-Free Imaging of Bioelectric Potentials by Halleh B. Balch, Allister F. McGuire, Jason Horng, Hsin-Zon Tsai, Kevin K. Qi, Yi-Shiou Duh, Patrick R. Forrester, Michael F. Crommie, Bianxiao Cui, and Feng Wang. Nano Lett. 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.nanolett.1c00543 Publication Date: June 8, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

An algorithm for modern quilting

Caption: Each of the blocks in this quilt was designed using an algorithm-based tool developed by Stanford researchers. Credit: Mackenzie Leake

I love the colours. This research into quilting and artificial intelligence (AI) was presented at SIGGRAPH 2021 in August. (SIGGRAPH is also known as ACM SIGGRAPH, or the Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques.)

A June 3, 2021 news item on ScienceDaily announced the presentation,

Stanford University computer science graduate student Mackenzie Leake has been quilting since age 10, but she never imagined the craft would be the focus of her doctoral dissertation. Included in that work is new prototype software that can facilitate pattern-making for a form of quilting called foundation paper piecing, which involves using a backing made of foundation paper to lay out and sew a quilted design.

Developing a foundation paper piece quilt pattern — which looks similar to a paint-by-numbers outline — is often non-intuitive. There are few formal guidelines for patterning and those that do exist are insufficient to assure a successful result.

“Quilting has this rich tradition and people make these very personal, cherished heirlooms but paper piece quilting often requires that people work from patterns that other people designed,” said Leake, who is a member of the lab of Maneesh Agrawala, the Forest Baskett Professor of Computer Science and director of the Brown Institute for Media Innovation at Stanford. “So, we wanted to produce a digital tool that lets people design the patterns that they want to design without having to think through all of the geometry, ordering and constraints.”

A paper describing this work is published and will be presented at the computer graphics conference SIGGRAPH 2021 in August.

A June 2, 2021 Stanford University news release (also on EurekAlert), which originated the news item, provides more detail,

Respecting the craft

In describing the allure of paper piece quilts, Leake cites the modern aesthetic and high level of control and precision. The seams of the quilt are sewn through the paper pattern and, as the seaming process proceeds, the individual pieces of fabric are flipped over to form the final design. All of this “sew and flip” action means the pattern must be produced in a careful order.

Poorly executed patterns can lead to loose pieces, holes, misplaced seams and designs that are simply impossible to complete. When quilters create their own paper piecing designs, figuring out the order of the seams can take considerable time – and still lead to unsatisfactory results.

“The biggest challenge that we’re tackling is letting people focus on the creative part and offload the mental energy of figuring out whether they can use this technique or not,” said Leake, who is lead author of the SIGGRAPH paper. “It’s important to me that we’re really aware and respectful of the way that people like to create and that we aren’t over-automating that process.”

This isn’t Leake’s first foray into computer-aided quilting. She previously designed a tool for improvisational quilting, which she presented [PatchProv: Supporting Improvisational Design Practices for Modern Quilting by Mackenzie Leake, Frances Lai, Tovi Grossman, Daniel Wigdor, and Ben Lafreniere] at the human-computer interaction conference CHI in May [2021]. [Note: Links to the May 2021 conference and paper added by me.]

Quilting theory

Developing the algorithm at the heart of this latest quilting software required a substantial theoretical foundation. With few existing guidelines to go on, the researchers had to first gain a more formal understanding of what makes a quilt paper piece-able, and then represent that mathematically.

They eventually found what they needed in a particular graph structure, called a hypergraph. While so-called “simple” graphs can only connect data points by lines, a hypergraph can accommodate overlapping relationships between many data points. (A Venn diagram is a type of hypergraph.) The researchers found that a pattern will be paper piece-able if it can be depicted by a hypergraph whose edges can be removed one at a time in a specific order – which would correspond to how the seams are sewn in the pattern.
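Out of curiosity, I sketched what ‘removing hyperedges one at a time’ might look like in code. To be clear, this is my own toy, not the researchers’ algorithm: the removal rule below (an edge may be peeled off once it has a vertex that no other remaining edge uses) is a hypothetical stand-in for the paper’s actual paper-pieceability criterion.

```python
# Toy "peeling" of a hypergraph: remove edges one at a time under a
# hypothetical rule (an edge is removable when it has at least one vertex
# shared with no other remaining edge). Stand-in only; not the paper's test.

def peel_order(edges):
    """edges: list of sets of vertices. Returns a removal order, or None."""
    remaining = list(edges)
    order = []
    while remaining:
        for e in remaining:
            others = [f for f in remaining if f is not e]
            if any(all(v not in f for f in others) for v in e):  # has a "free" vertex
                order.append(e)
                remaining.remove(e)
                break
        else:
            return None  # nothing removable -> not peelable under this rule
    return order

# Toy example: three overlapping patches, described as sets of region labels
print(peel_order([{1, 2}, {2, 3}, {3, 4}]))
```

In the real tool, a valid removal order is what corresponds to a workable sewing order for the seams; if no order exists, the sketch can’t be foundation paper pieced as drawn.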

The prototype software allows users to sketch out a design and the underlying hypergraph-based algorithm determines what paper foundation patterns could make it possible – if any. Many designs result in multiple pattern options and users can adjust their sketch until they get a pattern they like. The researchers hope to make a version of their software publicly available this summer.

“I didn’t expect to be writing my computer science dissertation on quilting when I started,” said Leake. “But I found this really rich space of problems involving design and computation and traditional crafts, so there have been lots of different pieces we’ve been able to pull off and examine in that space.”

###

Researchers from University of California, Berkeley and Cornell University are co-authors of this paper. Agrawala is also an affiliate of the Institute for Human-Centered Artificial Intelligence (HAI).

An abstract for the paper “A Mathematical Foundation for Foundation Paper Pieceable Quilts” by Mackenzie Leake, Gilbert Bernstein, Abe Davis and Maneesh Agrawala can be found here along with links to a PDF of the full paper and video on YouTube.

Afterthought: I noticed that all of the co-authors for the May 2021 paper are from the University of Toronto and most of them including Mackenzie Leake are associated with that university’s Chatham Labs.

BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI)

I wrote about some brain-computer interface (BCI) work out of Stanford University (California, US) in a Sept. 17, 2020 posting (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points); that work may have contributed to what is now the first demonstration of a wireless brain-computer interface for people with tetraplegia (also known as quadriplegia).

From an April 1, 2021 news item on ScienceDaily,

In an important step toward a fully implantable intracortical brain-computer interface system, BrainGate researchers demonstrated human use of a wireless transmitter capable of delivering high-bandwidth neural signals.

Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their own bodies. For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.

Now, for the first time, BrainGate clinical trial participants with tetraplegia have demonstrated use of an intracortical wireless BCI with an external wireless transmitter. The system is capable of transmitting brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user’s head and connects to an electrode array within the brain’s motor cortex using the same port used by wired systems.

For a study published in IEEE Transactions on Biomedical Engineering, two clinical trial participants with paralysis used the BrainGate system with a wireless transmitter to point, click and type on a standard tablet computer. The study showed that the wireless system transmitted signals with virtually the same fidelity as wired systems, and participants achieved similar point-and-click accuracy and typing speeds.

A March 31, 2021 Brown University news release (also on EurekAlert but published April 1, 2021), which originated the news item, provides more detail,

“We’ve demonstrated that this wireless system is functionally equivalent to the wired systems that have been the gold standard in BCI performance for years,” said John Simeral, an assistant professor of engineering (research) at Brown University, a member of the BrainGate research consortium and the study’s lead author. “The signals are recorded and transmitted with appropriately similar fidelity, which means we can use the same decoding algorithms we used with wired equipment. The only difference is that people no longer need to be physically tethered to our equipment, which opens up new possibilities in terms of how the system can be used.”

The researchers say the study represents an early but important step toward a major objective in BCI research: a fully implantable intracortical system that aids in restoring independence for people who have lost the ability to move. While wireless devices with lower bandwidth have been reported previously, this is the first device to transmit the full spectrum of signals recorded by an intracortical sensor. That high-broadband wireless signal enables clinical research and basic human neuroscience that is much more difficult to perform with wired BCIs.

The new study demonstrated some of those new possibilities. The trial participants — a 35-year-old man and a 63-year-old man, both paralyzed by spinal cord injuries — were able to use the system in their homes, as opposed to the lab setting where most BCI research takes place. Unencumbered by cables, the participants were able to use the BCI continuously for up to 24 hours, giving the researchers long-duration data including while participants slept.

“We want to understand how neural signals evolve over time,” said Leigh Hochberg, an engineering professor at Brown, a researcher at Brown’s Carney Institute for Brain Science and leader of the BrainGate clinical trial. “With this system, we’re able to look at brain activity, at home, over long periods in a way that was nearly impossible before. This will help us to design decoding algorithms that provide for the seamless, intuitive, reliable restoration of communication and mobility for people with paralysis.”

The device used in the study was first developed at Brown in the lab of Arto Nurmikko, a professor in Brown’s School of Engineering. Dubbed the Brown Wireless Device (BWD), it was designed to transmit high-fidelity signals while drawing minimal power. In the current study, two devices used together recorded neural signals at 48 megabits per second from 200 electrodes with a battery life of over 36 hours.
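A quick bit of arithmetic on those figures, for the curious. The per-electrode number follows directly from the quoted totals; the sample width and rate in the second half are my own guesses for illustration, since the news release doesn’t give them.

```python
# Rough arithmetic on the quoted numbers: 48 Mb/s across ~200 electrodes.
# The 12-bit / 20 kS/s pairing below is an assumption for illustration only.

total_rate_bps = 48e6
electrodes = 200

per_electrode = total_rate_bps / electrodes
print(f"per-electrode rate: {per_electrode / 1e3:.0f} kb/s")   # 240 kb/s

assumed_bits, assumed_rate_ksps = 12, 20
print(f"e.g. {assumed_bits}-bit samples at {assumed_rate_ksps} kS/s = "
      f"{assumed_bits * assumed_rate_ksps} kb/s per channel")
```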

While the BWD has been used successfully for several years in basic neuroscience research, additional testing and regulatory permission were required prior to using the system in the BrainGate trial. Nurmikko says the step to human use marks a key moment in the development of BCI technology.

“I am privileged to be part of a team pushing the frontiers of brain-machine interfaces for human use,” Nurmikko said. “Importantly, the wireless technology described in our paper has helped us to gain crucial insight for the road ahead in pursuit of next generation of neurotechnologies, such as fully implanted high-density wireless electronic interfaces for the brain.”

The new study marks another significant advance by researchers with the BrainGate consortium, an interdisciplinary group of researchers from Brown, Stanford and Case Western Reserve universities, as well as the Providence Veterans Affairs Medical Center and Massachusetts General Hospital. In 2012, the team published landmark research in which clinical trial participants were able, for the first time, to operate multidimensional robotic prosthetics using a BCI. That work has been followed by a steady stream of refinements to the system, as well as new clinical breakthroughs that have enabled people to type on computers, use tablet apps and even move their own paralyzed limbs.

“The evolution of intracortical BCIs from requiring a wire cable to instead using a miniature wireless transmitter is a major step toward functional use of fully implanted, high-performance neural interfaces,” said study co-author Sharlene Flesher, who was a postdoctoral fellow at Stanford and is now a hardware engineer at Apple. “As the field heads toward reducing transmitted bandwidth while preserving the accuracy of assistive device control, this study may be one of few that captures the full breadth of cortical signals for extended periods of time, including during practical BCI use.”

The new wireless technology is already paying dividends in unexpected ways, the researchers say. Because participants are able to use the wireless device in their homes without a technician on hand to maintain the wired connection, the BrainGate team has been able to continue their work during the COVID-19 pandemic.

“In March 2020, it became clear that we would not be able to visit our research participants’ homes,” said Hochberg, who is also a critical care neurologist at Massachusetts General Hospital and director of the V.A. Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology. “But by training caregivers how to establish the wireless connection, a trial participant was able to use the BCI without members of our team physically being there. So not only were we able to continue our research, this technology allowed us to continue with the full bandwidth and fidelity that we had before.”

Simeral noted that, “Multiple companies have wonderfully entered the BCI field, and some have already demonstrated human use of low-bandwidth wireless systems, including some that are fully implanted. In this report, we’re excited to have used a high-bandwidth wireless system that advances the scientific and clinical capabilities for future systems.”

Brown has a licensing agreement with Blackrock Microsystems to make the device available to neuroscience researchers around the world. The BrainGate team plans to continue to use the device in ongoing clinical trials.

Here’s a link to and a citation for the paper,

Home Use of a Percutaneous Wireless Intracortical Brain-Computer Interface by Individuals With Tetraplegia by John D Simeral, Thomas Hosman, Jad Saab, Sharlene N Flesher, Marco Vilela, Brian Franco, Jessica Kelemen, David M Brandman, John G Ciancibello, Paymon G Rezaii, Emad N. Eskandar, David M Rosler, Krishna V Shenoy, Jaimie M. Henderson, Arto V Nurmikko, Leigh R. Hochberg. IEEE Transactions on Biomedical Engineering, 2021; 1 DOI: 10.1109/TBME.2021.3069119 Date of Publication: 30 March 2021

This paper is open access.

If you don’t happen to be familiar with the IEEE, it’s the Institute of Electrical and Electronics Engineers. BrainGate can be found here, and Blackrock Microsystems can be found here.

The first story here to feature BrainGate was in a May 17, 2012 posting. (Unfortunately, the video featuring a participant picking up a cup of coffee is no longer embedded in the post.) There’s also an October 31, 2016 posting and an April 24, 2017 posting, both of which mention BrainGate. As for my Sept. 17, 2020 posting (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points), you may want to look at those ethical points.

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike computing (neuromorphic computing or neuromorphic engineering). (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
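For anyone who wants to play with the idea, here’s a toy relaxation-oscillator model of that heat-up/switch/cool-down cycle: a threshold-switching element in series with a load resistor. It’s my own minimal sketch with arbitrary parameters chosen so the toy oscillates; it is not the device model from the Nature paper.

```python
# Toy model of an NbO2-like threshold switch in series with a load resistor.
# All parameter values are arbitrary illustrations, not fitted to the device.

V_in, R_series = 5.0, 10.0      # drive voltage and series (load) resistance
R_ins, R_met = 20.0, 0.5        # device resistance: insulating vs metallic
C_th, G_cool = 1.0, 0.3         # thermal capacitance and cooling rate
T_hot, T_cold = 1.0, 0.6        # switching temperatures (hysteresis window)

T, metallic, dt, spikes = 0.0, False, 1e-3, 0
currents = []
for _ in range(20000):
    R_dev = R_met if metallic else R_ins
    I = V_in / (R_series + R_dev)
    T += dt * (I * I * R_dev - G_cool * T) / C_th   # Joule heating vs cooling
    if not metallic and T > T_hot:
        metallic, spikes = True, spikes + 1         # insulator -> conductor: a current spike
    elif metallic and T < T_cold:
        metallic = False                            # cooled down, back to insulating
    currents.append(I)

print(f"{spikes} spikes; current swings {min(currents):.2f} to {max(currents):.2f} (arb. units)")
```

In the toy, changing the drive voltage changes how often it fires, which loosely echoes the voltage-tuned firing profiles described above.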

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we prevent “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi; first, here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics, or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a TV series, ‘Biohackers’, has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
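Here’s that mapping spelled out in a few lines of Python. It’s my own sketch of the example in the interview; a real DNA storage pipeline layers sequence constraints and error correction on top of this.

```python
# Minimal sketch of the 2-bits-per-nucleotide mapping from the interview:
# 00 -> A, 01 -> C, 10 -> G, 11 -> T.

TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    return "".join(TO_BITS[base] for base in dna)

print(bits_to_dna("01011100"))   # CCTA, matching the interview's example
print(dna_to_bits("CCTA"))       # 01011100
```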

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
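To make the redundancy idea concrete, here’s a toy example using a three-fold repetition code. I should stress this is only a stand-in: the channel code Heckel describes is far more efficient than simple repetition, but the principle (extra symbols let you vote away errors) is the same.

```python
# Toy channel coding: add redundancy so corrupted symbols can be recovered.
# A 3x repetition code with majority voting, purely for illustration.

from collections import Counter

def encode(seq: str, reps: int = 3) -> str:
    return "".join(base * reps for base in seq)

def decode(seq: str, reps: int = 3) -> str:
    chunks = [seq[i:i + reps] for i in range(0, len(seq), reps)]
    return "".join(Counter(chunk).most_common(1)[0][0] for chunk in chunks)

message = "CCTAGGAT"
sent = encode(message)                  # CCCCCCTTTAAAGGGGGGAAATTT
corrupted = list(sent)
corrupted[1], corrupted[10] = "T", "G"  # two errors, in different blocks
received = "".join(corrupted)

print(decode(received) == message)      # True: majority voting fixes both
```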

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a trillionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant, that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
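For a rough sense of why transmitting only task-relevant signals saves power, here’s a back-of-envelope comparison. The rates and bit widths are my own illustrative assumptions, not figures from the paper; the general point is just that less raw data over the radio means less transmit power, and therefore less heat.

```python
# Back-of-envelope: raw broadband streaming vs transmitting a low-rate feature.
# All rates and bit widths are illustrative assumptions.

channels = 100

# Option A: stream raw broadband neural data off the implant
broadband_bps = channels * 30_000 * 16     # e.g. 30 kS/s, 16-bit samples

# Option B: transmit only a low-rate, task-relevant feature per channel
feature_bps = channels * 1_000 * 8         # e.g. 1 kHz feature rate, 8-bit values

print(f"broadband: {broadband_bps / 1e6:.1f} Mb/s")
print(f"features:  {feature_bps / 1e6:.1f} Mb/s "
      f"({broadband_bps / feature_bps:.0f}x less data to radio)")
```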

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.
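The power argument is easy to make concrete with a little arithmetic. Here is a rough, back-of-the-envelope sketch (my own illustration, not the Stanford team’s numbers) comparing the data rate of streaming raw broadband neural recordings with transmitting only a small, action-specific feature per channel; the channel count, sampling rate, and feature sizes are all illustrative assumptions,

```python
# Back-of-the-envelope comparison (illustrative numbers, not from the paper):
# streaming raw broadband neural data vs. transmitting one small decoded
# feature per channel, as a wireless implant might.

CHANNELS = 96                  # common intracortical array size (assumption)
RAW_SAMPLE_RATE_HZ = 30_000    # broadband sampling rate (assumption)
RAW_BITS_PER_SAMPLE = 16       # raw sample resolution (assumption)

FEATURE_RATE_HZ = 50           # e.g., binned spike counts every 20 ms (assumption)
FEATURE_BITS = 8               # one small integer per channel per bin (assumption)

raw_bits_per_s = CHANNELS * RAW_SAMPLE_RATE_HZ * RAW_BITS_PER_SAMPLE
feature_bits_per_s = CHANNELS * FEATURE_RATE_HZ * FEATURE_BITS

print(f"raw stream:     {raw_bits_per_s / 1e6:.1f} Mbit/s")
print(f"feature stream: {feature_bits_per_s / 1e3:.1f} kbit/s")
print(f"reduction:      ~{raw_bits_per_s // feature_bits_per_s}x less data to send")
```

Since a wireless transmitter’s power consumption scales roughly with how much data it has to send, even this crude comparison (about a thousandfold reduction with these made-up numbers) suggests why isolating the action-specific signals on the implant itself is the key to staying within a safe heat budget.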

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent from him on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues raised by another human enhancement technology, gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces and human enhancement generally (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research; the papers can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg’ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human’, featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions, with a few experts and commercial interests deciding how the rest of us (however you define ‘us’, as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news that also raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.
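Going back to the mechanism described under “How neurons learn,” the device’s behaviour can be caricatured in a few lines of code. This is strictly my own toy model, not the authors’ simulation: each burst of neurotransmitter release drives an irreversible reaction at one electrode, nudging the device’s conductance to a new value that persists afterward, which is the ‘memory’ the researchers are talking about. The parameter values are hypothetical,

```python
# Toy model (my own sketch, not the authors' code) of a non-volatile synaptic
# weight: each dopamine release event irreversibly shifts the device's
# conductance, and the new state persists after the chemistry stops.

def update_conductance(g, release_events, delta_per_event=0.02, g_max=1.0):
    """Return the conductance after a burst of neurotransmitter release.

    The change is cumulative and never decays, mimicking the permanent
    state change the researchers observed with dopamine.
    """
    g += delta_per_event * release_events
    return min(g, g_max)  # a real device saturates at some physical limit

g = 0.10  # arbitrary starting conductance (normalized units)
for burst in [5, 0, 3, 0, 8]:
    g = update_conductance(g, burst)
    print(f"after a burst of {burst} events: conductance = {g:.2f}")
```

The point of the caricature is the one Salleo and Keene make above: the same chemical event both processes the incoming signal and stores its trace, so computing and memory happen in a single step rather than in separate processing and storage stages.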

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

Brain scan variations

The Scientist is a magazine I do not feature here often enough. The latest issue (June 2020) features a May 20, 2020 opinion piece by Ruth Williams on a recent study about interpreting brain scans—70 different teams of neuroimaging experts were involved (Note: Links have been removed),

In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.

Problems with reproducibility plague all areas of science, and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).

Neuroimaging, specifically functional magnetic resonance imaging (fMRI), which produces pictures of blood flow patterns in the brain that are thought to relate to neuronal activity, has been criticized in the past for problems such as poor study design and statistical methods, and specifying hypotheses after results are known (SHARKing), says neurologist Alain Dagher of McGill University who was not involved in the study. A particularly memorable criticism of the technique was a paper demonstrating that, without needed statistical corrections, it could identify apparent brain activity in a dead fish.

Perhaps because of such criticisms, nowadays fMRI “is a field that is known to have a lot of cautiousness about statistics and . . . about the sample sizes,” says neuroscientist Tom Schonberg of Tel Aviv University, an author of the paper and co-coordinator of NARPS. Also, unlike in many areas of biology, he adds, the image analysis is computational, not manual, so fewer biases might be expected to creep in.

Schonberg was therefore a little surprised to see the NARPS results, admitting, “it wasn’t easy seeing this variability, but it was what it was.”

The study, led by Schonberg together with psychologist Russell Poldrack of Stanford University and neuroimaging statistician Thomas Nichols of the University of Oxford, recruited independent teams of researchers around the globe to analyze and interpret the same raw neuroimaging data—brain scans of 108 healthy adults taken while the subjects were at rest and while they performed a simple decision-making task about whether to gamble a sum of money.

Each of the 70 research teams taking part used one of three different image analysis software packages. But variations in the final results didn’t depend on these software choices, says Nichols. Instead, they came down to numerous steps in the analysis that each require a human’s decision, such as how to correct for motion of the subjects’ heads, how signal-to-noise ratios are enhanced, how much image smoothing to apply—that is, how strictly the anatomical regions of the brain are defined—and which statistical approaches and thresholds to use.
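To get a feel for how those human decision points can change a result, here is a small synthetic illustration (my own, nothing to do with the actual NARPS pipelines): two hypothetical ‘teams’ analyze exactly the same noisy activation map but choose different smoothing kernels and statistical thresholds, and end up reporting different sets of ‘active’ voxels,

```python
# Synthetic illustration (not the NARPS analysis): the same data, two
# defensible pipelines, two different "active" maps.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[28:36, 28:36] = 1.0                        # a genuine patch of activation
data = truth + rng.normal(0, 0.8, truth.shape)   # the shared noisy "scan"

def analyze(data, smoothing_sigma, z_threshold):
    """One team's pipeline: smooth, z-score, then threshold."""
    smoothed = gaussian_filter(data, sigma=smoothing_sigma)
    z = (smoothed - smoothed.mean()) / smoothed.std()
    return z > z_threshold

team_a = analyze(data, smoothing_sigma=1.0, z_threshold=3.0)  # light smoothing, strict cutoff
team_b = analyze(data, smoothing_sigma=3.0, z_threshold=2.0)  # heavy smoothing, looser cutoff

print("Team A voxels above threshold:", int(team_a.sum()))
print("Team B voxels above threshold:", int(team_b.sum()))
print("Voxels the teams disagree on: ", int(np.logical_xor(team_a, team_b).sum()))
```

Neither choice is obviously wrong, which is exactly the problem the NARPS authors highlight: defensible analytic decisions, multiplied across dozens of steps, are enough to push teams toward different conclusions.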

If this topic interests you, I strongly suggest you read Williams’ article in its entirety.

Here are two links to the paper,

Variability in the analysis of a single neuroimaging dataset by many teams. Nature DOI: https://doi.org/10.1038/s41586-020-2314-9 Published online: 20 May 2020

This first one seems to be a free version of the paper.

Variability in the analysis of a single neuroimaging dataset by many teams by R. Botvinik-Nezer, F. Holzmeister, C. F. Camerer, et al. (at least 70 authors in total) Nature 582, 84–88 (2020). DOI: https://doi.org/10.1038/s41586-020-2314-9 Published 20 May 2020 Issue Date 04 June 2020

This version is behind a paywall.

Are nano electronics as good as gold?

“As good as gold” was a behavioural goal when I was a child. It turns out the same can be said of gold in electronic devices, according to the headline for a March 26, 2020 news item on Nanowerk (Note: Links have been removed),

As electronics shrink to nanoscale, will they still be good as gold?

Deep inside computer chips, tiny wires made of gold and other conductive metals carry the electricity used to process data.

But as these interconnected circuits shrink to nanoscale, engineers worry that pressure, such as that caused by thermal expansion when current flows through these wires, might cause gold to behave more like a liquid than a solid, making nanoelectronics unreliable. That, in turn, could force chip designers to hunt for new materials to make these critical wires.

But according to a new paper in Physical Review Letters (“Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure”), chip designers can rest easy. “Gold still behaves like a solid at these small scales,” says Stanford mechanical engineer Wendy Gu, who led a team that figured out how to pressurize gold particles just 4 nanometers in length — the smallest particles ever measured — to assess whether current flows might cause the metal’s atomic structure to collapse.

I have seen the issue about gold as a metal or liquid before but I can’t find it here (search engines, sigh). However, I found this somewhat related story from almost five years ago. In my April 14, 2015 posting (Gold atoms: sometimes they’re a metal and sometimes they’re a molecule), there was news that the number of gold atoms present means the difference between being a metal and being a molecule. This could have implications as circuit elements (which include some gold in their fabrication) shrink down past a certain point.

A March 24, 2020 Stanford University news release (also on EurekAlert but published on March 25, 2020) by Andrew Myers, which originated the news item, provides details about research designed to investigate a similar question, i.e., can we still use gold as we shrink the scale?*,

To conduct the experiment, Gu’s team first had to devise a way to put tiny gold particles under extreme pressure, while simultaneously measuring how much that pressure damaged gold’s atomic structure.

To solve the first problem, they turned to the field of high-pressure physics to borrow a device known as a diamond anvil cell. As the name implies, both hammer and anvil are diamonds that are used to compress the gold. As Gu explained, a nanoparticle of gold is built like a skyscraper with atoms forming a crystalline lattice of neat rows and columns. She knew that pressure from the anvil would dislodge some atoms from the crystal and create tiny defects in the gold.

The next challenge was to detect these defects in nanoscale gold. The scientists shined X-rays through the diamond onto the gold. Defects in the crystal caused the X-rays to reflect at different angles than they would on uncompressed gold. By measuring variations in the angles at which the X-rays bounced off the particles before and after pressure was applied, the team was able to tell whether the particles retained the deformations or reverted to their original state when pressure was lifted.

In practical terms, her findings mean that chipmakers can know with certainty that they’ll be able to design stable nanodevices using gold — a material they have known and trusted for decades — for years to come.

“For the foreseeable future, gold’s luster will not fade,” Gu says.
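For readers who like to see the physics, the reason a squeezed or defect-ridden crystal shows up in the X-ray data at all comes down to Bragg’s law, nλ = 2d sin θ: change the spacing d between atomic planes and the angle θ at which the X-rays reflect shifts measurably. Here is a back-of-the-envelope sketch using nominal textbook values (a Cu K-alpha wavelength and the Au(111) plane spacing), which are illustrative choices on my part, not the actual synchrotron settings used in the experiment,

```python
# Bragg's law, n*lambda = 2*d*sin(theta): a small change in lattice spacing
# produces a measurable shift in the X-ray reflection angle. Wavelength and
# d-spacing are nominal textbook values chosen for illustration.

from math import asin, degrees

wavelength_nm = 0.15406       # Cu K-alpha X-ray wavelength (illustrative)
d_au_111_nm = 0.2355          # Au(111) plane spacing at ambient pressure

def bragg_angle_deg(d_nm, lam_nm, n=1):
    return degrees(asin(n * lam_nm / (2 * d_nm)))

theta_ambient = bragg_angle_deg(d_au_111_nm, wavelength_nm)
theta_squeezed = bragg_angle_deg(d_au_111_nm * 0.99, wavelength_nm)  # 1% compression

print(f"ambient lattice:    theta = {theta_ambient:.2f} degrees")
print(f"compressed lattice: theta = {theta_squeezed:.2f} degrees")
print(f"shift:              {theta_squeezed - theta_ambient:.2f} degrees")
```

A fraction of a degree does not sound like much, but diffraction measurements resolve it easily, which is why comparing the patterns before and after pressure tells the team whether the crystal sprang back or kept its defects.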

*The 2015 research measured the gold nanoclusters by the number of atoms within the cluster, with the changes occurring somewhere between 102 atoms and 144 atoms. This 2020 work measures the amount of gold in nanometers, as in 3.9 nm gold nanocrystals. So, how many gold atoms in a nanometer? Cathy Murphy provides the answer and the way to calculate it for yourself in a July 26, 2016 posting on the Sustainable Nano blog (a blog by the Center for Sustainable Nanotechnology),

Two years ago, I wrote a blog post called Two Ways to Make Nanoparticles, describing the difference between top-down and bottom-up methods for making nanoparticles. In the post I commented, “we can estimate, knowing how gold atoms pack into crystals, that there are about 2000 gold atoms in one 4 nm diameter gold nanoparticle.” Recently, a Sustainable Nano reader wrote in to ask about how this calculation is done. It’s a great question!
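Murphy’s estimate is easy to reproduce. The sketch below (my own quick version of the calculation she describes) approximates the nanocrystal as a sphere of face-centred-cubic gold and uses the standard bulk lattice constant to count the atoms inside,

```python
# Rough count of gold atoms in a nanocrystal: treat it as a sphere of
# face-centred-cubic (FCC) gold and divide its volume by the volume per atom.

from math import pi

lattice_constant_nm = 0.40782   # bulk gold FCC lattice constant
atoms_per_unit_cell = 4         # FCC packing
diameter_nm = 3.9               # nanocrystal size from the 2020 paper

atoms_per_nm3 = atoms_per_unit_cell / lattice_constant_nm**3
sphere_volume_nm3 = (4 / 3) * pi * (diameter_nm / 2) ** 3
atom_count = atoms_per_nm3 * sphere_volume_nm3

print(f"atom density:  {atoms_per_nm3:.1f} atoms per cubic nanometre")
print(f"sphere volume: {sphere_volume_nm3:.1f} cubic nanometres")
print(f"estimated atoms in a {diameter_nm} nm nanocrystal: ~{atom_count:.0f}")
```

With these numbers the count comes out around 1,800 atoms, comfortably within the ‘about 2000’ range Murphy quotes for a 4 nm particle.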

So, a 3.9 nm gold nanocrystal contains approximately 2000 gold atoms. (If you have time, do read Murphy’s description of how to determine the number of gold atoms in a gold nanoparticle.) So, this research does not answer the question posed by the 2015 research.

It may take years before researchers can devise tests for gold nanoclusters consisting of 102 atoms as opposed to nanoparticles consisting of 2000 atoms. In the meantime, here’s a link to and a citation for the latest on how gold reacts as we shrink the size of our electronics,

Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure by Abhinav Parakh, Sangryun Lee, K. Anika Harkins, Mehrdad T. Kiani, David Doan, Martin Kunz, Andrew Doran, Lindsey A. Hanson, Seunghwa Ryu, and X. Wendy Gu. Phys. Rev. Lett. 124, 106104 DOI: https://doi.org/10.1103/PhysRevLett.124.106104 Published 13 March 2020 © 2020 American Physical Society

This paper is behind a paywall.