
Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting), according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
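As a rough, hedged illustration of the hypothesis the team validated (decoding movement from a small subset of action-specific channels rather than from bulk recordings), here is a toy sketch. The channel counts, noise levels, and simple linear decoder below are all invented for illustration; this is not the team's actual dataset or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 96 recording channels; only 12 are tuned to cursor velocity.
n_samples, n_channels, n_informative = 2000, 96, 12
velocity = rng.normal(size=(n_samples, 2))  # (vx, vy) targets
tuning = np.zeros((2, n_channels))
informative = rng.choice(n_channels, n_informative, replace=False)
tuning[:, informative] = rng.normal(size=(2, n_informative))
firing = velocity @ tuning + 0.5 * rng.normal(size=(n_samples, n_channels))

def decode_r2(channels):
    """Fit a least-squares linear decoder on the chosen channels."""
    X = firing[:, channels]
    W, *_ = np.linalg.lstsq(X, velocity, rcond=None)
    resid = velocity - X @ W
    return 1 - resid.var() / velocity.var()

r2_all = decode_r2(np.arange(n_channels))  # "wired": stream every channel
r2_subset = decode_r2(informative)         # "wireless": stream 12 channels

print(f"all {n_channels} channels: R^2 = {r2_all:.2f}")
print(f"{n_informative} selected channels: R^2 = {r2_subset:.2f}")
```

The point of the toy: decoding from the small informative subset recovers nearly the same accuracy as decoding from everything, and since only the selected channels would need to be digitized and transmitted, the radio and processing power budget scales down with the channel count. That is the intuition behind the reported power savings.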

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on technology and human enhancement ethical issues where the technology is gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement). Note: Links have been removed,

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained on and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news, which raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Brain scan variations

The Scientist is a magazine I do not feature here often enough. The latest issue (June 2020) features a May 20, 2020 opinion piece by Ruth Williams on a recent study about interpreting brain scans—70 different teams of neuroimaging experts were involved (Note: Links have been removed),

In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.

Problems with reproducibility plague all areas of science, and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).

Neuroimaging, specifically functional magnetic resonance imaging (fMRI), which produces pictures of blood flow patterns in the brain that are thought to relate to neuronal activity, has been criticized in the past for problems such as poor study design and statistical methods, and specifying hypotheses after results are known (SHARKing), says neurologist Alain Dagher of McGill University who was not involved in the study. A particularly memorable criticism of the technique was a paper demonstrating that, without needed statistical corrections, it could identify apparent brain activity in a dead fish.
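The dead-fish result is a textbook multiple-comparisons problem: test enough voxels and some will look “active” by chance. Here is a minimal sketch of why a correction such as Bonferroni matters; it uses simulated p-values, not real fMRI data, relying on the fact that under the null hypothesis p-values are uniformly distributed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Under the null hypothesis (no real activation anywhere -- e.g. a dead
# fish), each voxel's p-value is uniformly distributed on [0, 1].
n_voxels = 10_000
p_values = rng.uniform(size=n_voxels)

alpha = 0.05
uncorrected = np.sum(p_values < alpha)            # spurious "activations"
bonferroni = np.sum(p_values < alpha / n_voxels)  # family-wise correction

print(f"uncorrected 'active' voxels: {uncorrected}")  # roughly 5% of 10,000
print(f"Bonferroni 'active' voxels:  {bonferroni}")
```

With no correction, roughly five percent of the voxels cross the threshold by chance alone; the Bonferroni-corrected threshold almost always flags none, which is the behaviour the dead-fish paper showed was missing from some fMRI analyses.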

Perhaps because of such criticisms, nowadays fMRI “is a field that is known to have a lot of cautiousness about statistics and . . . about the sample sizes,” says neuroscientist Tom Schonberg of Tel Aviv University, an author of the paper and co-coordinator of NARPS. Also, unlike in many areas of biology, he adds, the image analysis is computational, not manual, so fewer biases might be expected to creep in.

Schonberg was therefore a little surprised to see the NARPS results, admitting, “it wasn’t easy seeing this variability, but it was what it was.”

The study, led by Schonberg together with psychologist Russell Poldrack of Stanford University and neuroimaging statistician Thomas Nichols of the University of Oxford, recruited independent teams of researchers around the globe to analyze and interpret the same raw neuroimaging data—brain scans of 108 healthy adults taken while the subjects were at rest and while they performed a simple decision-making task about whether to gamble a sum of money.

Each of the 70 research teams taking part used one of three different image analysis software packages. But variations in the final results didn’t depend on these software choices, says Nichols. Instead, they came down to numerous steps in the analysis that each require a human’s decision, such as how to correct for motion of the subjects’ heads, how signal-to-noise ratios are enhanced, how much image smoothing to apply—that is, how strictly the anatomical regions of the brain are defined—and which statistical approaches and thresholds to use.
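Those analyst degrees of freedom can be sketched with a toy example: the same simulated statistic map, run through two equally defensible pipelines that differ only in smoothing width and threshold, yields different counts of “significant” voxels. All numbers below are invented for illustration and have nothing to do with the NARPS dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy 1-D "statistic map": weak true signal in voxels 40-59 plus noise.
n = 200
stat_map = rng.normal(size=n)
stat_map[40:60] += 1.5

def pipeline(smooth_width, threshold):
    """One analyst's choices: boxcar smoothing width and z-threshold."""
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(stat_map, kernel, mode="same")
    # Boxcar smoothing shrinks the noise variance by the kernel width,
    # so rescale to keep the noise at roughly unit variance.
    smoothed *= np.sqrt(smooth_width)
    return np.sum(smoothed > threshold)

# Two defensible pipelines, two different answers from the same data.
team_a = pipeline(smooth_width=3, threshold=2.0)
team_b = pipeline(smooth_width=9, threshold=3.0)
print(f"team A finds {team_a} significant voxels")
print(f"team B finds {team_b} significant voxels")
```

Neither choice is wrong, which is exactly the point of the NARPS study: reasonable, undocumented decisions at each step of the pipeline can move the final answer.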

If this topic interests you, I strongly suggest you read Williams’ article in its entirety.

Here are two links to the paper,

Variability in the analysis of a single neuroimaging dataset by many teams. Nature DOI: https://doi.org/10.1038/s41586-020-2314-9 Published online: 20 May 2020

This first one seems to be a free version of the paper.

Variability in the analysis of a single neuroimaging dataset by many teams by R. Botvinik-Nezer, F. Holzmeister, C. F. Camerer, et al. (at least 70 authors in total) Nature 582, 84–88 (2020). DOI: https://doi.org/10.1038/s41586-020-2314-9 Published 20 May 2020 Issue Date 04 June 2020

This version is behind a paywall.

Nanotechnology book suggestions for 2020

A January 23, 2020 news item on Nanowerk features a number of new books. Here are summaries of a couple of them from the news item (Note: Links have been removed),

The main goal of “Nanotechnology in Skin, Soft Tissue, and Bone Infections” is to deal with the role of nanobiotechnology in skin, soft tissue and bone infections since it is difficult to treat the infections due to the development of resistance in them against existing antibiotics.

The present interdisciplinary book is very useful for a diverse group of readers including nanotechnologists, medical microbiologists, dermatologists, osteologists, biotechnologists, bioengineers.

“Nanotechnology in Skin, Soft-Tissue, and Bone Infections” is divided into four sections: Section I includes the role of nanotechnology in skin infections such as atopic dermatitis, and nanomaterials for combating infections caused by bacteria and fungi; Section II covers how nanotechnology can be used for soft-tissue infections such as diabetic foot ulcer and other wound infections; Section III discusses nanomaterials in artificial scaffolds, bone engineering and bone infections caused by bacteria and fungi, and also the toxicity issues generated by nanomaterials in general and nanoparticles in particular.

“Advanced Materials for Defense: Development, Analysis and Applications” is a collection of high-quality research and review papers submitted to the 1st World Conference on Advanced Materials for Defense (AUXDEFENSE 2018).

A wide range of topics related to the defense area such as ballistic protection, impact and energy absorption, composite materials, smart materials and structures, nanomaterials and nano structures, CBRN protection, thermoregulation, camouflage, auxetic materials, and monitoring systems is covered.

Written by the leading experts in these subjects, this work discusses technological advances in materials as well as product design, analysis, and case studies.

This volume will prove to be a valuable resource for researchers and scientists from different engineering disciplines such as materials science, chemical engineering, biological sciences, textile engineering, mechanical engineering, environmental science, and nanotechnology.

Nanoengineering is a branch of engineering that exploits the unique properties of nanomaterials—their size and quantum effects—and the interaction between these materials, in order to design and manufacture novel structures and devices that possess entirely new functionality and capabilities, which are not obtainable by macroscale engineering.

While the term nanoengineering is often used synonymously with the general term nanotechnology, the former technically focuses more closely on the engineering aspects of the field, as opposed to the broader science and general technology aspects that are encompassed by the latter.

“Nanoengineering: The Skills and Tools Making Technology Invisible” puts a spotlight on some of the scientists who are pushing the boundaries of technology, and it gives examples of their work and how they are advancing knowledge one little step at a time.

This book is a collection of essays about researchers involved in nanoengineering and many other facets of nanotechnologies. This research involves truly multidisciplinary and international efforts, covering a wide range of scientific disciplines such as medicine, materials sciences, chemistry, toxicology, biology and biotechnology, physics and electronics.

The book showcases 176 very specific research projects and you will meet the scientists who develop the theories, conduct the experiments, and build the new materials and devices that will make nanoengineering a core technology platform for many future products and applications.

On January 28, 2020, Azonano featured a book review for “Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology.” The review by Rebecca Megson-Smith, marketing lead, was originally published on the NuNano company blog,

Covering science’s ‘greatest hits’ since we have been able to look at the world on the nanoscale, as well as where it is taking our understanding of life, Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology is an inspiring and joyful read.

As author Sonia Contera writes, biology is an area of intense interest and study. With the advent of nanotechnology, a more diverse range of scientists from across the disciplines are now coming together to solve some of the biggest issues of our time.

The ability to visualise, interact with, manipulate and create matter at the nanometer scale – the level of molecules, proteins and DNA – combined with the physicist’s quantitative and mathematical approach is revolutionising our understanding of the complexity which underpins life.

I particularly enjoyed the section that discussed the history of scanning tools. Here Contera highlights how profoundly the development of the STM [scanning tunneling microscope] transformed human interaction with matter.

Not only did it image at the atomic level with ‘unprecedented accuracy using a relatively simple, cheap tool’, but the STM was able to pick up and move the atoms around one by one. And what it couldn’t do effectively – work within the biological environments – was and is achievable through the introduction of the AFM [atomic force microscope].

She [Contera] writes:

“Physics urges us to consider life as a whole emergent from the greater whole – emanating from the same rules that govern the entire cosmos.”

I leave you with another bold declaration from Sonia about the good that the merging of the sciences has offered and, on behalf of everyone at NuNano, would like to wish you all a very Merry Christmas and Happy New Year – see you in 2020!

“As physics, engineering, computer science and materials science merge with biology, they are actually helping to reconnect science and technology with the deep questions that humans have asked themselves from the beginning of civilization: What is life? What does it mean to be human when we can manipulate and even exploit our own biology?”

Sonia Contera is professor of biological physics in the Department of Physics at the University of Oxford. She is a leading pioneer in the field of nanotechnology.

Megson-Smith certainly seems enthused about the book and she reminded me of how interested I was in STMs and AFMs when I first started investigating and writing about nanotechnology. Given the review but not having seen the book myself, it seems this might be a good introduction.

My introductory book was the 2009 Soft Machines: Nanotechnology and Life by Richard Jones, a professor of physics and astronomy at the University of Sheffield. I have great affection for the book and, if memory serves, it hasn’t really aged. One more thing, Jones can be very funny. It’s not many people who can successfully combine humour and nanotechnology.

You can find Megson-Smith’s original posting here.

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong but I think this is the first ‘memristor’ type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured here on this blog. In other words, it’s not, technically speaking, a memristor, but it does have the same properties, so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.
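To get a feel for how such a phase-change synapse works, here’s a toy sketch of my own (not code from the study): the synaptic “weight” is simply how much light the material lets through, which shifts as laser pulses switch the cell between its crystalline and amorphous states.

```python
# Toy model of a phase-change photonic synapse (my own illustrative sketch,
# not code from the Feldmann et al. paper). The synaptic "weight" is the
# optical transmittance, set by the crystalline fraction of the material:
# crystalline material absorbs more light than amorphous material.

class PhaseChangeSynapse:
    def __init__(self, crystalline_fraction=1.0):
        self.c = crystalline_fraction  # 1.0 = fully crystalline, 0.0 = amorphous

    def transmittance(self):
        # Amorphous material transmits more; interpolate between the states.
        t_amorphous, t_crystalline = 0.9, 0.1
        return t_crystalline * self.c + t_amorphous * (1 - self.c)

    def write_pulse(self, energy):
        # A strong laser pulse melt-quenches (partially amorphizes) the cell,
        # raising its transmittance; a weaker pulse re-crystallizes it.
        if energy > 0.5:
            self.c = max(0.0, self.c - 0.25)   # partial amorphization
        else:
            self.c = min(1.0, self.c + 0.25)   # partial crystallization

    def propagate(self, light_in):
        return light_in * self.transmittance()

s = PhaseChangeSynapse()
before = s.propagate(1.0)   # weak transmission while fully crystalline
s.write_pulse(energy=1.0)   # "strengthen" the synapse with a strong pulse
after = s.propagate(1.0)    # more light now gets through
```

The specific numbers (transmittances, step sizes) are made up for illustration; the real device physics is far richer.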

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
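The supervised learning described above can be sketched, very loosely, in a few lines (my own toy model, not the authors’ code): input light pulses are weighted by synaptic transmittances, summed at a neuron that fires past a threshold, and the transmittances are nudged after each labelled example.

```python
# Minimal perceptron-style sketch of the supervised-learning idea (my own
# toy stand-in, not the optical implementation). Weights play the role of
# synaptic transmittances and are clipped to the physical range [0, 1].

def fire(w, x):
    # the neuron fires when the total collected power crosses a threshold
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > len(w) / 4 else 0

def train(patterns, labels, epochs=20, lr=0.1):
    w = [0.5] * len(patterns[0])       # transmittances start half-open
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            err = y - fire(w, x)
            # nudge each weight toward reducing the error, clipped to [0, 1]
            w = [min(1.0, max(0.0, wi + lr * err * xi))
                 for wi, xi in zip(w, x)]
    return w

# recognize the "pattern being sought" (1,1,0,0) among four-pulse inputs
patterns = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [1, 1, 0, 0]]
labels = [1, 0, 0, 1]
w = train(patterns, labels)
```

After training, the neuron fires only for the target pulse pattern; the real chip does this with light and four hardware neurons rather than a software loop.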

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example: with the aid of such hardware, cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature, volume 569, pages 208–214 (2019). Issue date: 09 May 2019. DOI: https://doi.org/10.1038/s41586-019-1157-8

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For information with details such as the total cost, contribution from the EC, the list of partnerships and more there is the Fun-COMP webpage on fabiodisconzi.com.

How to get people to trust artificial intelligence

Vyacheslav Polonski (a University of Oxford researcher) wrote a January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org that isn’t a gossip article, although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation‘ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting doctors to admit that their approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change, and it’s not always about trust. In this instance, you’re asking doctors to back someone else’s opinion after they have rendered their own. This is difficult even when the other party is another human doctor, let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated (accepting the agent’s plan when it was correct and rejecting it when it was incorrect) when the agent had a higher level of transparency.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design the user interfaces, which can include visual, auditory, and other modalities, that can support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.
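To make the three SAT levels concrete, here’s a toy encoding of my own (not ARL code, and all the field names and example values are mine) of what an agent might disclose to its operator at each transparency level:

```python
# Illustrative sketch of the three SAT (Situation awareness-based Agent
# Transparency) levels described above -- my own toy encoding, not ARL's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SATReport:
    # Level 1: current state, goals, intentions, plans
    current_state: str
    goals: List[str]
    plan: List[str]
    # Level 2: reasoning process and the constraints/affordances considered
    reasoning: str = ""
    constraints: List[str] = field(default_factory=list)
    # Level 3: projected outcomes, likelihood of success, uncertainty
    projected_outcome: str = ""
    success_probability: float = 0.0
    uncertainty_notes: str = ""

    def render(self, level: int) -> dict:
        """Return only the fields an operator sees at the given SAT level."""
        out = {"state": self.current_state, "goals": self.goals, "plan": self.plan}
        if level >= 2:
            out.update(reasoning=self.reasoning, constraints=self.constraints)
        if level >= 3:
            out.update(outcome=self.projected_outcome,
                       p_success=self.success_probability,
                       uncertainty=self.uncertainty_notes)
        return out

report = SATReport(
    current_state="en route to waypoint B",
    goals=["reach waypoint B"],
    plan=["follow road", "cross bridge"],
    reasoning="bridge route is shortest",
    constraints=["fuel limit", "no-go zone to the east"],
    projected_outcome="arrival in 12 min",
    success_probability=0.85,
    uncertainty_notes="bridge status unverified",
)
```

The point of the structure is the one Chen makes: each level adds information the operator can use to calibrate trust, without forcing every detail on them at once.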

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: 10.1080/1463922X.2017.1315750

This paper is behind a paywall.

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add any new experts if the authors wanted the paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or at least acknowledge (e.g., ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a project in New Zealand, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.
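The inheritance arithmetic in Yong’s explanation, a 50-50 coin toss turned into a near-guarantee, is easy to see in a short simulation. What follows is my own toy model of random mating with drive conversion, a sketch of the general idea rather than anything from the article or the report:

```python
import random

def next_generation(pop, drive=False):
    """Random mating: each offspring gets one allele from each of two parents.
    With a gene drive, heterozygotes ('Bw' or 'wB') are converted to
    homozygous drive carriers ('BB') before they reproduce -- the
    'cut its counterpart and replace it' step Yong describes."""
    if drive:
        pop = ['BB' if set(g) == {'B', 'w'} else g for g in pop]
    offspring = []
    for _ in range(len(pop)):
        mom, dad = random.sample(pop, 2)           # two distinct parents
        offspring.append(random.choice(mom) + random.choice(dad))
    return offspring

def drive_frequency(pop):
    """Fraction of all alleles in the population carrying the drive ('B')."""
    return sum(g.count('B') for g in pop) / (2 * len(pop))

random.seed(1)
pop = ['BB'] * 50 + ['ww'] * 50  # start 50-50, as in the fur example
for _ in range(10):
    pop = next_generation(pop, drive=True)
print(round(drive_frequency(pop), 2))  # the drive allele sweeps toward 1.0
```

Without the `drive=True` flag the brown allele hovers around 50 percent indefinitely; with it, the white allele can only persist in white-white pairings, so its share of the population collapses roughly quadratically each generation.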

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety as it provides some insight into gene drives and, with some imagination on the reader’s part, into the potential of the other technologies discussed in the report.

One last comment: I notice that Eric Drexler is cited as one of the report’s authors. He’s familiar to me as K. Eric Drexler, author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).

Revisiting the scientific past for new breakthroughs

A March 2, 2017 article on phys.org features a thought-provoking (and, for some of us, confirming) take on scientific progress (Note: Links have been removed),

The idea that science isn’t a process of constant progress might make some modern scientists feel a bit twitchy. Surely we know more now than we did 100 years ago? We’ve sequenced the genome, explored space and considerably lengthened the average human lifespan. We’ve invented aircraft, computers and nuclear energy. We’ve developed theories of relativity and quantum mechanics to explain how the universe works.

However, treating the history of science as a linear story of progression doesn’t reflect wholly how ideas emerge and are adapted, forgotten, rediscovered or ignored. While we are happy with the notion that the arts can return to old ideas, for example in neoclassicism, this idea is not commonly recognised in science. Is this constraint really present in principle? Or is it more a comment on received practice or, worse, on the general ignorance of the scientific community of its own intellectual history?

For one thing, not all lines of scientific enquiry are pursued to conclusion. For example, a few years ago, historian of science Hasok Chang undertook a careful examination of notebooks from scientists working in the 19th century. He unearthed notes from experiments in electrochemistry whose results received no explanation at the time. After repeating the experiments himself, Chang showed the results still don’t have a full explanation today. These research programmes had not been completed, simply put to one side and forgotten.

A March 1, 2017 essay by Giles Gasper (Durham University), Hannah Smithson (University of Oxford) and Tom McLeish (Durham University) for The Conversation, which originated the article, expands on the theme (Note: Links have been removed),

… looping back into forgotten scientific history might also provide an alternative, regenerative way of thinking that doesn’t rely on what has come immediately before it.

Collaborating with an international team of colleagues, we have taken this hypothesis further by bringing scientists into close contact with scientific treatises from the early 13th century. The treatises were composed by the English polymath Robert Grosseteste – who later became Bishop of Lincoln – between 1195 and 1230. They cover a wide range of topics we would recognise as key to modern physics, including sound, light, colour, comets, the planets, the origin of the cosmos and more.

We have worked with paleographers (handwriting experts) and Latinists to decipher Grosseteste’s manuscripts, and with philosophers, theologians, historians and scientists to provide intellectual interpretation and context to his work. As a result, we’ve discovered that scientific and mathematical minds today still resonate with Grosseteste’s deeply physical and structured thinking.

Our first intuition and hope was that the scientists might bring a new analytic perspective to these very technical texts. And so it proved: the deep mathematical structure of a small treatise on colour, the De colore, was shown to describe what we would now call a three-dimensional abstract co-ordinate space for colour.

But more was true. During the examination of each treatise, at some point one of the group would say: “Did anyone ever try doing …?” or “What would happen if we followed through with this calculation, supposing he meant …”. Responding to this thinker from eight centuries ago has, to our delight and surprise, inspired new scientific work of a rather fresh cut. It isn’t connected in a linear way to current research programmes, but sheds light on them from new directions.

I encourage you to read the essay in its entirety.

Brown recluse spider, one of the world’s most venomous spiders, shows off unique spinning technique

Caption: American Brown Recluse Spider is pictured. Credit: Oxford University

According to scientists from Oxford University this deadly spider could teach us a thing or two about strength. From a Feb. 15, 2017 news item on ScienceDaily,

Brown recluse spiders use a unique micro looping technique to make their threads stronger than those of any other spider, a newly published UK-US collaboration has discovered.

One of the most feared and venomous arachnids in the world, the American brown recluse spider has long been known for its signature necrotoxic venom, as well as its unusual silk. Now, new research offers an explanation for how the spider is able to make its silk uncommonly strong.

Researchers suggest that if applied to synthetic materials, the technique could inspire scientific developments and improve impact absorbing structures used in space travel.

The study, published in the journal Materials Horizons, was produced by scientists from Oxford University’s Department of Zoology, together with a team from the Applied Science Department at Virginia’s College of William & Mary. Their surveillance of the brown recluse spider’s spinning behaviour shows how, and to what extent, the spider manages to strengthen the silk it makes.

A Feb. 15, 2017 University of Oxford press release, which originated the news item, provides more detail about the research,

From observing the arachnid, the team discovered that unlike other spiders, who produce round ribbons of thread, recluse silk is thin and flat. This structural difference is key to the thread’s strength, providing the flexibility needed to prevent premature breakage and withstand the knots created during spinning, which give each strand additional strength.

Professor Hannes Schniepp from William & Mary explains: “The theory of knots adding strength is well proven. But adding loops to synthetic filaments always seems to lead to premature fibre failure. Observation of the recluse spider provided the breakthrough solution; unlike all spiders its silk is not round, but a thin, nano-scale flat ribbon. The ribbon shape adds the flexibility needed to prevent premature failure, so that all the microloops can provide additional strength to the strand.”

By using computer simulations to apply this technique to synthetic fibres, the team were able to test and prove that adding even a single loop significantly enhances the strength of the material.

William & Mary PhD student Sean Koebley adds: “We were able to prove that adding even a single loop significantly enhances the toughness of a simple synthetic sticky tape. Our observations open the door to new fibre technology inspired by the brown recluse.”

Speaking on how the recluse’s technique could be applied more broadly in the future, Professor Fritz Vollrath, of the Department of Zoology at Oxford University, expands: “Computer simulations demonstrate that fibres with many loops would be much, much tougher than those without loops. This right away suggests possible applications. For example carbon filaments could be looped to make them less brittle, and thus allow their use in novel impact absorbing structures. One example would be spider-like webs of carbon-filaments floating in outer space, to capture the drifting space debris that endangers astronauts’ lives and satellite integrity.”
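To make the “loops add toughness” idea concrete, here is a deliberately crude back-of-the-envelope sketch. This is my own toy energy model with made-up parameters, not the team’s simulation: toughness is treated as energy absorbed before failure, and each microloop contributes sacrificial length that feeds out at the breaking force before the strand itself fails.

```python
def toughness(strand_length, loops, loop_length=0.5, breaking_force=1.0):
    """Toy model: total energy absorbed before a looped fibre fails.

    base: elastic energy of stretching the strand to its breaking point
    (taken here, arbitrarily, as 10% extension at the breaking force).
    sacrificial: each loop releases loop_length of extra silk at the
    breaking force as it unravels, absorbing energy without snapping
    the strand -- the role the flat ribbon geometry makes possible.
    All parameters are illustrative, not measured values."""
    base = breaking_force * strand_length * 0.1
    sacrificial = breaking_force * loop_length * loops
    return base + sacrificial

print(toughness(10.0, 0))   # → 1.0 (no loops)
print(toughness(10.0, 1))   # → 1.5 (a single loop already adds 50%)
print(toughness(10.0, 10))  # → 6.0 (many loops: much tougher)
```

Even this one-line energy budget reproduces the qualitative claims in the press release: a single loop gives a measurable boost, and many loops multiply toughness several-fold.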

Here’s a link to and a citation for the paper,

Toughness-enhancing metastructure in the recluse spider’s looped ribbon silk by S. R. Koebley, F. Vollrath, and H. C. Schniepp. Mater. Horiz., 2017, Advance Article. DOI: 10.1039/C6MH00473C. First published online 15 Feb 2017

This paper is open access although you may need to register with the Royal Society of Chemistry’s publishing site to get access.

Developing cortical implants for future speech neural prostheses

I’m guessing that graphene will feature in these proposed cortical implants since the project leader is a member of the Graphene Flagship’s Biomedical Technologies Work Package. (For those who don’t know, the Graphene Flagship is one of two major funding initiatives, each receiving 1B Euros over 10 years from the European Commission as part of its FET [Future and Emerging Technologies] Initiative.) A Jan. 12, 2017 news item on Nanowerk announces the new project (Note: A link has been removed),

BrainCom is a FET Proactive project, funded by the European Commission with 8.35M€ [8.35 million Euros] for the next 5 years, holding its Kick-off meeting on January 12-13 at ICN2 (Catalan Institute of Nanoscience and Nanotechnology) and the UAB [Universitat Autònoma de Barcelona]. This project, coordinated by ICREA [Catalan Institution for Research and Advanced Studies] Research Prof. Jose A. Garrido from ICN2, will permit significant advances in the understanding of cortical speech networks and the development of speech rehabilitation solutions using innovative brain-computer interfaces.

A Jan. 12, 2017 ICN2 press release, which originated the news item, expands on the theme (it is a bit repetitive),

More than 5 million people worldwide suffer annually from aphasia, an extremely debilitating condition in which patients lose the ability to comprehend and formulate language after brain damage or in the course of neurodegenerative disorders. Brain-computer interfaces (BCIs), enabled by forefront technologies and materials, are a promising approach to treat patients with aphasia. The principle of BCIs is to collect neural activity at its source and decode it by means of electrodes implanted directly in the brain. However, neurorehabilitation of higher cognitive functions such as language raises serious issues. The current challenge is to design neural implants that cover sufficiently large areas of the brain to allow for reliable decoding of detailed neuronal activity distributed in various brain regions that are key for language processing.

BrainCom is a FET Proactive project funded by the European Commission with 8.35M€ for the next 5 years. This interdisciplinary initiative involves 10 partners including technologists, engineers, biologists, clinicians, and ethics experts. They aim to develop a new generation of neuroprosthetic cortical devices enabling large-scale recordings and stimulation of cortical activity to study high level cognitive functions. Ultimately, the BrainCom project will seed a novel line of knowledge and technologies aimed at developing the future generation of speech neural prostheses. It will cover different levels of the value chain: from technology and engineering to basic and language neuroscience, and from preclinical research in animals to clinical studies in humans.

This recently funded project is coordinated by ICREA Prof. Jose A. Garrido, Group Leader of the Advanced Electronic Materials and Devices Group at the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology – ICN2) and deputy leader of the Biomedical Technologies Work Package presented last year in Barcelona by the Graphene Flagship. The BrainCom Kick-Off meeting is held on January 12-13 at ICN2 and the Universitat Autònoma de Barcelona (UAB).

Recent developments show that it is possible to record cortical signals from a small region of the motor cortex and decode them to allow tetraplegic [also known as, quadriplegic] people to activate a robotic arm to perform everyday life actions. Brain-computer interfaces have also been successfully used to help tetraplegic patients unable to speak to communicate their thoughts by selecting letters on a computer screen using non-invasive electroencephalographic (EEG) recordings. The performance of such technologies can be dramatically increased using more detailed cortical neural information.

The BrainCom project proposes a radically new electrocorticography technology that takes advantage of the unique mechanical and electrical properties of novel nanomaterials such as graphene, other 2D materials and organic semiconductors. The consortium members will fabricate ultra-flexible cortical and intracortical implants, to be placed directly on the surface of the brain, enabling high-density recording and stimulation sites over a large area. This approach will allow the parallel stimulation and decoding of cortical activity with unprecedented spatial and temporal resolution.

These technologies will help to advance the basic understanding of cortical speech networks and to develop rehabilitation solutions to restore speech using innovative brain-computer paradigms. The technology innovations developed in the project will also find applications in the study of other higher cognitive functions of the brain such as learning and memory, as well as other clinical applications such as epilepsy monitoring.

The BrainCom project Consortium members are:

  • Catalan Institute of Nanoscience and Nanotechnology (ICN2) – Spain (Coordinator)
  • Institute of Microelectronics of Barcelona (CNM-IMB-CSIC) – Spain
  • University Grenoble Alpes – France
  • ARMINES/ Ecole des Mines de St. Etienne – France
  • Centre Hospitalier Universitaire de Grenoble – France
  • Multichannel Systems – Germany
  • University of Geneva – Switzerland
  • University of Oxford – United Kingdom
  • Ludwig-Maximilians-Universität München – Germany
  • Wavestone – Luxembourg

There doesn’t seem to be a website for the project but there is a BrainCom webpage on the European Commission’s CORDIS (Community Research and Development Information Service) website.

Epic Scottish poetry and social network science

It’s been a while since I’ve run a social network story here, and this research into a 250-year-old controversy piqued my interest anew. From an Oct. 20, 2016 Coventry University (UK) press release (also on EurekAlert; Note: A link has been removed),

The social networks behind one of the most famous literary controversies of all time have been uncovered using modern networks science.

Since James Macpherson published what he claimed were translations of ancient Scottish Gaelic poetry by a third-century bard named Ossian, scholars have questioned the authenticity of the works and whether they were misappropriated from Irish mythology or, as heralded at the time, authored by a Scottish equivalent to Homer.

Now, in a joint study by Coventry University, the National University of Ireland, Galway and the University of Oxford, published today in the journal Advances in Complex Systems, researchers have revealed the structures of the social networks underlying Ossian’s works and their similarities to Irish mythology.

The researchers mapped the characters at the heart of the works and the relationships between them to compare the social networks found in the Scottish epics with classical Greek literature and Irish mythology.

The study revealed that the networks in the Scottish poems bore no resemblance to epics by Homer, but strongly resembled those in mythological stories from Ireland.

The Ossianic poems are considered to be some of the most important literary works ever to have emerged from Britain or Ireland, given their influence over the Romantic period in literature and the arts. Figures from Brahms to Wordsworth reacted enthusiastically; Napoleon took a copy on his military campaigns and US President Thomas Jefferson believed that Ossian was the greatest poet to have ever existed.

The poems launched the romantic portrayal of the Scottish Highlands which persists, in many forms, to the present day and inspired Romantic nationalism all across Europe.

Professor Ralph Kenna, a statistical physicist based at Coventry University, said:

“By working together, it shows how science can open up new avenues of research in the humanities. The opposite also applies, as social structures discovered in Ossian inspire new questions in mathematics.”

Dr Justin Tonra, a digital humanities expert from the National University of Ireland, Galway said:

“From a humanities point of view, while it cannot fully resolve the debate about Ossian, this scientific analysis does reveal an insightful statistical picture: close similarity to the Irish texts which Macpherson explicitly rejected, and distance from the Greek sources which he sought to emulate.”

A statistical physicist, eh? I find that specialty quite an unexpected addition to the team, one that stretches my ideas about social networks in new directions.

Getting back to the research, the scientists have supplied this image to illustrate their work,

Caption: In the social network underlying the Ossianic epic, the 325 nodes represent characters appearing in the narratives and the 748 links represent interactions between them. Credit: Coventry University
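For readers curious what “comparing social networks” means in practice: studies like this one reduce each narrative to a graph of characters and interactions, then compare summary statistics (mean degree, density, clustering and so on) across corpora. Here is a minimal sketch using a tiny invented set of Ossianic character pairs, not the researchers’ actual data or code:

```python
from collections import defaultdict

def network_stats(edges):
    """Node count, link count, mean degree and density of an
    undirected character network (edges are pairs of characters)."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n, m = len(degree), len(edges)
    return n, m, round(2 * m / n, 2), round(2 * m / (n * (n - 1)), 4)

# Tiny invented example; the real Ossianic network has 325 nodes and
# 748 links, which works out to a mean degree of 2 * 748 / 325 ≈ 4.6.
edges = [("Fingal", "Ossian"), ("Ossian", "Oscar"),
         ("Fingal", "Oscar"), ("Ossian", "Malvina")]
print(network_stats(edges))  # → (4, 4, 2.0, 0.6667)
```

Numbers like these, computed for the Ossianic poems, the Homeric epics and the Irish myths, are what allow a quantitative “resemblance” claim of the kind the press release reports.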

Here’s a link to and a citation for the paper,

A networks-science investigation into the epic poems of Ossian by Joseph Yose, Ralph Kenna, Pádraig MacCarron, Thierry Platini, Justin Tonra. Adv. Complex Syst. DOI: http://dx.doi.org/10.1142/S0219525916500089 Published: 21 October 2016

This paper is behind a paywall.