What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in Biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),
The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.
A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”
Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”
Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and computers share a common language: electricity.
The authors explain their comment that brains and computers share the common language of electricity (Note: Links have been removed),
In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.
Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”
Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.
Ethics issues arise (Note: Links have been removed),
… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?
People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?
… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.
Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?
Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …
This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.
*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” …
I checked; the excerpt is still on the Oprah Winfrey site.
At the simplest of levels, nanopores are (nanometre-sized) holes in an insulating membrane. The hole allows ions to pass through the membrane when a voltage is applied, resulting in a measurable current. When a molecule passes through a nanopore, it causes a change in the current that can be used to characterize and even identify individual molecules. Nanopores are extremely powerful single-molecule biosensing devices and can be used to detect and sequence DNA, RNA, and even proteins. Recently, they have been used to sequence the SARS-CoV-2 virus.
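As a rough illustration of the sensing principle just described, a molecule transiting the pore shows up as a transient drop in the ionic current, which even a simple threshold detector can pick out. The sketch below runs on synthetic data; the current levels, noise, and threshold are all invented for illustration and do not describe any real device.

```python
import numpy as np

def detect_blockades(current, baseline, threshold_frac=0.7):
    """Return (start, end) sample indices where the current drops below
    a fraction of the open-pore baseline (a candidate translocation event)."""
    below = current < baseline * threshold_frac
    # Indices where the trace crosses the threshold in either direction
    edges = np.flatnonzero(np.diff(below.astype(int)))
    if below[0]:
        edges = np.insert(edges, 0, -1)
    if below[-1]:
        edges = np.append(edges, len(current) - 1)
    return [(s + 1, e) for s, e in zip(edges[::2], edges[1::2])]

# Synthetic trace: a 1 nA open-pore current with two deep blockades
rng = np.random.default_rng(0)
trace = 1.0 + 0.02 * rng.standard_normal(1000)
trace[200:260] = 0.4   # first event
trace[700:730] = 0.4   # second event

events = detect_blockades(trace, baseline=1.0)
print(events)  # two events, near samples 200-259 and 700-729
```

Real analysis pipelines also extract the depth and duration of each blockade, since those features help characterize which molecule passed through.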
Solid-state nanopores are an extremely versatile type of nanopore formed in ultrathin membranes (less than 50 nanometres), made from materials such as silicon nitride (SiNx). Solid-state nanopores can be created with a range of diameters and can withstand a multitude of conditions (discover more about solid-state nanopore fabrication techniques here). One of the most appealing techniques with which to fabricate nanopores is Controlled Breakdown (CBD). This technique is quick, reduces fabrication costs, does not require specialized equipment, and can be automated.
CBD is a technique in which an electric field is applied across the membrane to induce a current. At some point, a spike in the current is observed, signifying pore formation. The voltage is then quickly reduced to ensure the fabrication of a single, small nanopore.
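In effect, CBD is a small feedback loop: hold a voltage across the membrane, poll the current, and cut the voltage the instant the breakdown spike appears so that only one small pore forms. This toy simulation sketches that loop; the membrane model, voltages, and threshold are made-up numbers, not parameters from the study.

```python
def run_cbd(read_current, voltage=10.0, spike_threshold=50.0, max_steps=10000):
    """Apply a constant voltage and poll the current; as soon as the current
    spikes past the threshold (pore formation), drop the voltage to stop
    further pore growth. Returns the step at which breakdown occurred."""
    for step in range(max_steps):
        i = read_current(voltage, step)
        if i > spike_threshold:
            voltage = 0.1  # cut the field quickly to keep the pore single and small
            return step
    return None

# Stand-in membrane: slowly rising leakage current, then dielectric
# breakdown (a large current spike) at step 500
def fake_membrane(voltage, step):
    leakage = 0.01 * step * voltage / 10.0
    return leakage + (1000.0 if step >= 500 else 0.0)

print(run_cbd(fake_membrane))  # breakdown detected at step 500
```

In a real setup the polling, thresholding, and voltage cut happen in hardware or firmware on millisecond timescales, which is part of why CBD needs no specialized equipment and is easy to automate.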
The mechanisms underlying this process have not been fully elucidated, so an international team involving ITQB NOVA decided to further investigate how electrical conduction through the membrane occurs during breakdown, namely how oxidation and reduction reactions (also called redox reactions; they imply electron loss or gain, respectively) influence the process. To do this, the team created three devices in which the electric field is applied to the membrane (a silicon-rich SiNx membrane) in different ways: via metal electrodes on both sides of the membrane; via electrolyte solutions on both sides of the membrane; and via a mixed device with a metal electrode on one side and an electrolyte solution on the other.
Results showed that redox reactions must occur at the membrane-electrolyte interface, whilst the metal electrodes circumvent this need. The team also demonstrated that, because of this phenomenon, nanopore fabrication could be localized to certain regions by performing CBD with metal microelectrodes on the membrane surface. Finally, by varying the content of silicon in the membrane, the investigators demonstrated that conduction and nanopore formation are highly dependent on the membrane material, since it limits the electrical current in the membrane.
“Controlling the location of nanopores has been of interest to us for a number of years”, says James Yates. Pedro Sousa adds that “our findings suggest that CBD can be used to integrate pores with complementary micro or nanostructures, such as tunneling electrodes or field-effect sensors, across a range of different membrane materials.” These devices may then be used for the detection of specific molecules, such as proteins, DNA, or antibodies, and applied to a wide array of scenarios, including pandemic surveillance or food safety.
This project was developed by a research team led by ITQB NOVA’s James Yates and has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 724300 and 875525). Co-author Pedro Miguel Sousa is also from ITQB NOVA. The other consortium members are from the University of Oxford, Oak Ridge National Laboratory, Imperial College London and Queen Mary University of London. The authors would like to thank Andrew Briggs for providing financial support.
Ai-Da was invented by gallerist Aidan Meller, in collaboration with Engineered Arts, a Cornish robotics company. Her drawing intelligence was developed by computer AI researchers at the University of Oxford, and her drawing arm is the work of engineers based in Leeds.
Ai-Da is the world’s first ultra-realistic artist robot. She draws using cameras in her eyes, her AI algorithms, and her robotic arm. Created in February 2019, she had her first solo show at the University of Oxford, ‘Unsecured Futures’, where her [visual] art encouraged viewers to think about our rapidly changing world. She has since travelled and exhibited work internationally, and had her first show in a major museum, the Design Museum, in 2021. She continues to create art that challenges our notions of creativity in a post-humanist era.
Ai-Da – is it art?
The role and definition of art changes over time. Ai-Da’s work is art, because it reflects the enormous integration of technology in today’s society. We recognise ‘art’ means different things to different people.
Today, a dominant opinion is that art is created by the human, for other humans. This has not always been the case. The ancient Greeks felt art and creativity came from the Gods. Inspiration was divine inspiration. Today, a dominant mind-set is that of humanism, where art is an entirely human affair, stemming from human agency. However, current thinking suggests we are edging away from humanism, into a time where machines and algorithms influence our behaviour to a point where our ‘agency’ isn’t just our own. It is starting to get outsourced to the decisions and suggestions of algorithms, and complete human autonomy starts to look less robust. Ai-Da creates art, because art no longer has to be restrained by the requirement of human agency alone.
It seems that Ai-Da has branched out from visual art into poetry. (I wonder how many of the arts Ai-Da can produce and/or perform?)
A divine comedy? Dante and Ai-Da
The 700th anniversary of poet Dante Alighieri’s death has occasioned an exhibition, DANTE: THE INVENTION OF CELEBRITY, 17 September 2021–9 January 2022, at Oxford’s Ashmolean Museum.
Ai-Da, the world’s most modern humanoid artist, is involved in an exhibition about the poet and philosopher, Dante Alighieri, writer of the Divine Comedy, whose 700th anniversary is this year. A major exhibition, ‘Dante and the Invention of Celebrity’, opens at Oxford’s Ashmolean Museum this month, and includes an intervention by this most up-to-date robot artist.
Honours are being paid around the world to the author of what he called a Comedy because, unlike a tragedy, it began badly but ended well. From the darkness of hell, the work sees Dante journey through purgatory, before eventually arriving at the eternal light of paradise. What hold does a poem about the spiritual redemption of humanity, written so long ago, have on us today?
One challenge to both spirit and humanity in the 21st century is the power of artificial intelligence, created and unleashed by human ingenuity. The scientists who introduced this term, AI, in the 1950s announced that ‘every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it’.
Over the course of a human lifetime, that prophecy has almost been realised. Artificial intelligence has already taken the place of human thought, often in ways that are not apparent. In medicine, AI promises to become both irreplaceable and inestimable.
But to an extent which we are, perhaps, frightened to acknowledge, AI monitors our consumption patterns, our taste in everything from food to culture, our perception of ourselves, even our political views. If we want to re-orientate ourselves and take a critical view of this, before it is too late to regain control, how can we do so?
Creative fiction offers a field in which our values and aspirations can be questioned. This year has seen the publication of Klara and the Sun, by Kazuo Ishiguro, which evokes a world, not many years into the future, in which humanoid AI robots have become the domestic servants and companions of all prosperous families.
One of the book’s characters asks a fundamental question about the human heart, ‘Do you think there is such a thing? Something that makes each of us special and individual?’
Art can make two things possible: through it, artificial intelligence, which remains largely unseen, can be made visible and tangible and it can be given a prophetic voice, which we can choose to heed or ignore.
These aims have motivated the creators of Ai-Da, the artist robot which, through a series of exhibitions, is currently provoking questions around the globe (from the United Nations headquarters in Geneva to Cairo, and from the Design Museum in London [UK] to Abu Dhabi) about the nature of human creativity, originality, and authenticity.
In the Ashmolean Museum’s Gallery 8, Dante meets artificial intelligence, in a staged encounter deliberately designed to invite reflection on what it means to see the world; on the nature of creativity; and on the value of human relationships.
The juxtaposition of AI with the Divine Comedy, in a year in which the poem is being celebrated as a supreme achievement of the human spirit, is timely. The encounter, however, is not presented as a clash of incompatible opposites, but as a conversation.
This is the spirit in which Ai-Da has been developed by her inventors, Aidan Meller and Lucy Seal, in collaboration with technical teams in Oxford University and elsewhere. Significantly, she takes her name from Ada Lovelace [emphasis mine], a mathematician and writer who was belatedly recognised as the first programmer. At the time of her early death in 1852, at the age of 36, she was considering writing a visionary kind of mathematical poetry, and wrote about her idea of ‘poetical philosophy, poetical science’.
For the Ashmolean exhibition, Ai-Da has made works in response to the Comedy. The first focuses on one of the circles of Dante’s Purgatory. Here, the souls of the envious compensate for their lives on earth, which were partially, but not irredeemably, marred by their frustrated desire for the possessions of others.
My first thought on seeing the inventor’s name, Aidan Meller, was that he named the robot after himself; I did not pick up on the Ada Lovelace connection. I appreciate how smart this is especially as the name also references AI.
Finally, the excerpts don’t do justice to Rosser’s essay; I recommend reading it if you have the time.
What I find most exciting about this conference is the range of countries being represented. At first glance, I’ve found Argentina, Thailand, Senegal, Ivory Coast, Costa Rica and more in a science meeting being held in Canada. Thank you to the organizers and to the organization International Network for Government Science Advice (INGSA).
As I’ve noted many times here in discussing the science advice we (Canadians) get through the Council of Canadian Academies (CCA), there’s far too much dependence on the same old, same old countries for international expertise. Let’s hope this meeting changes things.
The conference (with the theme Build Back Wiser: Knowledge, Policy and Publics in Dialogue) started on Monday, August 30, 2021 and is set to run for four days in Montréal, Québec, and as an online event. The Premier of Québec, François Legault, and the Mayor of Montréal, Valérie Plante (along with Peter Gluckman, Chair of INGSA, and Rémi Quirion, Chief Scientist of Québec; this is the only province with a chief scientist), are there to welcome those who are present in person.
You can find a PDF of the four-day programme here or go to the INGSA 2021 website for the programme and more. Here’s a sample from the programme of what excited me, from Day 1 (August 30, 2021),
8:45 | Plenary | Roundtable: Reflections from Covid-19: Where to from here?
Moderator: Mona Nemer – Chief Science Advisor of Canada
Speakers: Joanne Liu – Professor, School of Population and Global Health, McGill University, Quebec, Canada
Chor Pharn Lee – Principal Foresight Strategist at Centre for Strategic Futures, Prime Minister’s Office, Singapore
Andrea Ammon – Director of the European Centre for Disease Prevention and Control, Sweden
Rafael Radi – President of the National Academy of Sciences; Coordinator of Scientific Honorary Advisory Group to the President on Covid-19, Uruguay
9:45 | Panel: Science advice during COVID-19: What factors made the difference?
Romain Murenzi – Executive Director, The World Academy of Sciences (TWAS), Italy
Stephen Quest – Director-General, European Commission’s Joint Research Centre (JRC), Belgium
Yuxi Zhang – Postdoctoral Research Fellow, Blavatnik School of Government, University of Oxford, United Kingdom
Amadou Sall – Director, Pasteur Institute of Dakar, Senegal
Inaya Rakhmani – Director, Asia Research Centre, Universitas Indonesia
One last excerpt, from Day 2 (August 31, 2021),
Studio Session | Panel: Science advice for complex risk assessment: dealing with complex, new, and interacting threats
Moderator: Eeva Hellström – Senior Lead, Strategy and Foresight, Finnish Innovation Fund Sitra, Finland
Speakers: Albert van Jaarsveld – Director General and Chief Executive Officer, International Institute for Applied Systems Analysis, Austria
Abdoulaye Gounou – Head, Benin’s Office for the Evaluation of Public Policies and Analysis of Government Action
Catherine Mei Ling Wong – Sociologist, LRF Institute for the Public Understanding of Risk, National University of Singapore
Andria Grosvenor – Deputy Executive Director (Ag), Caribbean Disaster Emergency Management Agency, Barbados
Studio Session | Innovations in Science Advice – Science Diplomacy driving evidence for policymaking
Moderator: Mehrdad Hariri – CEO and President of the Canadian Science Policy Centre, Canada
Speakers: Primal Silva – Canadian Food Inspection Agency’s Chief Science Operating Officer, Canada
Zakri bin Abdul Hamid – Chair of the South-East Asia Science Advice Network (SEA SAN); Pro-Chancellor of Multimedia University in Malaysia
Christian Arnault Emini – Senior Economic Adviser to the Prime Minister’s Office in Cameroon
Florence Gauzy Krieger and Sebastian Goers – RLS-Sciences Network [See more about RLS-Sciences below]
Elke Dall and Angela Schindler-Daniels – European Union Science Diplomacy Alliance
Alexis Roig – CEO, SciTech DiploHub – Barcelona Science and Technology Diplomacy Hub, Spain
RLS-Sciences works under the framework of the Regional Leaders Summit. The Regional Leaders Summit (RLS) is a forum comprising seven regional governments (state, federal state, or provincial), which together represent approximately one hundred eighty million people across five continents, and a collective GDP of three trillion USD. The regions are: Bavaria (Germany), Georgia (USA), Québec (Canada), São Paulo (Brazil), Shandong (China), Upper Austria (Austria), and Western Cape (South Africa). Since 2002, the heads of government for these regions have met every two years for a political summit. These summits offer the RLS regions an opportunity for political dialogue.
Getting back to the main topic of this post, INGSA has some satellite events on offer, including this on Open Science,
Open Science: Science for the 21st century |
Science ouverte : la science au XXIe siècle
Thursday September 9, 2021; 11am-2pm EST | Jeudi 9 septembre 2021, 11 h à 14 h (HNE).
This event will be in English and French (using simultaneous translation) | Cet événement se déroulera en anglais et en français (traduction simultanée)
In the past 18 months we have seen an unprecedented level of sharing as medical scientists worked collaboratively and shared data to find solutions to the COVID-19 pandemic. The pandemic has accelerated the ongoing cultural shift in research practices towards open science.
This acceleration of the discovery/research process presents opportunities for institutions and governments to develop infrastructure, tools, funding, policies, and training to support, promote, and reward open science efforts. It also presents new opportunities to accelerate progress towards the UN Agenda 2030 Sustainable Development Goals through international scientific cooperation.
At the same time, it presents new challenges: rapid developments in open science often outpace national open science policies, funding, and infrastructure frameworks. Moreover, the development of international standard setting instruments, such as the future UNESCO Recommendation on Open Science, requires international harmonization of national policies, the establishment of frameworks to ensure equitable participation, and education, training, and professional development.
This 3-hour satellite event brings together international and national policy makers, funders, and experts in open science infrastructure to discuss these issues.
The outcome of the satellite event will be a summary report with recommendations for open science policy alignment at institutional, national, and international levels.
The event will be hosted on an events platform, with simultaneous interpretation in English and French. Participants will be able to choose which concurrent session they participate in upon registration. Registration is free but will be closed when capacity is reached.
This satellite event takes place in time for an interesting anniversary. The Montreal Neurological Institute (MNI), also known as Montreal Neuro, declared itself as Open Science in 2016, the first academic research institute (as far as we know) to do so in the world (see my January 22, 2016 posting for details about their open science initiative and my December 19, 2016 posting for more about their open science and their decision to not pursue patents for a five year period).
Stumbling across an entry from National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory from Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality: A journal of self-discovery, healing, growth, and growing pains,
Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.
—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]
Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.
Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.
About the experience
Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.
About the creators
Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.
Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB).
About Transitional Forms
Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.
David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.
The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,
Now for the second computer simulation (the feint within the feint).
Are we living in a computer simulation?
According to some thinkers in the field, the chances are about 50/50 that we are computer simulations, which makes “Agence” a particularly piquant experience.
It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”
Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)
In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.
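A back-of-the-envelope calculation, in the spirit of (but much simpler than) the analyses mentioned above, shows where near-even odds can come from: give equal prior weight to the simulation hypothesis and its negation, and note that even if simulations abound, one reality, the base one, is never simulated. The numbers below are purely illustrative.

```python
def p_we_are_simulated(prior_sim_hypothesis=0.5, n_simulations=1_000_000):
    """Toy version of the odds argument: under the simulation hypothesis,
    n simulated realities coexist with exactly 1 base reality, and a
    random observer could find themselves in any of them."""
    frac_simulated = n_simulations / (n_simulations + 1)
    return prior_sim_hypothesis * frac_simulated

p = p_we_are_simulated()
print(p)  # prints a value just below 0.5
```

Because the simulated fraction can never quite reach one, the probability of being simulated stays just below one half, which is the “pretty much even” result in cartoon form.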
Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.
For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to start thinking about the implications (the feint within the feint within the feint). Still, it seems that the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe, as a way to have a laugh at our own absurdity.
Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,
The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.
Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.
The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.
The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.
Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.
Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.
The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.
As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.
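The team’s core power-saving idea lends itself to a toy software sketch. To be clear, nothing below is the Stanford team’s actual pipeline: the channel counts, noise levels, channel-ranking rule, and linear decoder are all invented for illustration. The point is simply that a small, well-chosen subset of channels can decode movement nearly as well as transmitting the full array:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 96 electrode channels, 2,000 time samples.
# Only 8 channels actually carry movement-related signal.
n_channels, n_samples, n_informative = 96, 2000, 8
velocity = rng.standard_normal(n_samples)        # cursor velocity to decode
informative = rng.choice(n_channels, n_informative, replace=False)
signals = 0.5 * rng.standard_normal((n_channels, n_samples))
signals[informative] += velocity                 # add signal to the chosen channels

# Rank channels by correlation with the behaviour and keep the top few --
# transmitting 8 channels instead of 96 cuts the data rate twelvefold.
corr = np.abs([np.corrcoef(ch, velocity)[0, 1] for ch in signals])
top = np.argsort(corr)[-n_informative:]

def train_r2(x, y):
    """Fit a least-squares linear decoder and return its training R^2."""
    w, *_ = np.linalg.lstsq(x.T, y, rcond=None)
    resid = y - x.T @ w
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_all = train_r2(signals, velocity)
r2_subset = train_r2(signals[top], velocity)
print(f"R^2, all 96 channels: {r2_all:.3f}")
print(f"R^2, top  8 channels: {r2_subset:.3f}")
```

In this toy version the two decoders perform almost identically, which is the shape of the trade-off the researchers describe: the wireless device gives up bulk recording in exchange for a power budget that is safe inside the skull.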
As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.
My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.
I can’t find anything more recent from him on this particular topic, but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of technology and human enhancement, in that case gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),
Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.
A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?
Which abilities are seen as more important than others?
The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.
And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.
One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.
Ethics of clinical trials for testing brain implants
In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.
This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.
… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.
… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.
There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”
Brain-computer interfaces, symbiosis, and ethical issues
This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,
“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.
“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]
Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.
Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.
Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.
Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]
Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.
Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.
Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.
To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.
If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]
But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.
Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.
Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers; they can be found here. I did not see any research papers concerning safety issues.
It’s easy to forget in all the excitement over technologies ‘making our lives better’ that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.
What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.
Interestingly, no one seems to care much when insects are turned into cyborgs (can’t remember who pointed this out) but it is a popular area of research especially for military applications and search and rescue applications.
I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here under whichever tags or categories I’ve used.
Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.
Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.
Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.
This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis being that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials or entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or my May 15, 2020 posting here, with all the sources listed.)
As for this new research at Stanford, it’s exciting news that raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).
The Scientist is a magazine I do not feature here often enough. The latest issue (June 2020) features a May 20, 2020 opinion piece by Ruth Williams on a recent study about interpreting brain scans—70 different teams of neuroimaging experts were involved (Note: Links have been removed),
In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.
Problems with reproducibility plague all areas of science, and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).
Neuroimaging, specifically functional magnetic resonance imaging (fMRI), which produces pictures of blood flow patterns in the brain that are thought to relate to neuronal activity, has been criticized in the past for problems such as poor study design and statistical methods, and specifying hypotheses after results are known (SHARKing), says neurologist Alain Dagher of McGill University who was not involved in the study. A particularly memorable criticism of the technique was a paper demonstrating that, without needed statistical corrections, it could identify apparent brain activity in a dead fish.
Perhaps because of such criticisms, nowadays fMRI “is a field that is known to have a lot of cautiousness about statistics and . . . about the sample sizes,” says neuroscientist Tom Schonberg of Tel Aviv University, an author of the paper and co-coordinator of NARPS. Also, unlike in many areas of biology, he adds, the image analysis is computational, not manual, so fewer biases might be expected to creep in.
Schonberg was therefore a little surprised to see the NARPS results, admitting, “it wasn’t easy seeing this variability, but it was what it was.”
The study, led by Schonberg together with psychologist Russell Poldrack of Stanford University and neuroimaging statistician Thomas Nichols of the University of Oxford, recruited independent teams of researchers around the globe to analyze and interpret the same raw neuroimaging data—brain scans of 108 healthy adults taken while the subjects were at rest and while they performed a simple decision-making task about whether to gamble a sum of money.
Each of the 70 research teams taking part used one of three different image analysis software packages. But variations in the final results didn’t depend on these software choices, says Nichols. Instead, they came down to numerous steps in the analysis that each require a human’s decision, such as how to correct for motion of the subjects’ heads, how signal-to-noise ratios are enhanced, how much image smoothing to apply—that is, how strictly the anatomical regions of the brain are defined—and which statistical approaches and thresholds to use.
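The role of those human decisions can be illustrated with a toy sketch. This is not an fMRI pipeline (the NARPS teams used real packages such as SPM, FSL, and AFNI on 3-D volumes over time); the 1-D “brain map,” the boxcar smoothing, and the peak-z statistic below are all stand-ins I invented to show how one analyst’s choice changes the number that gets compared against a significance threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D "brain map": a weak, narrow activation buried in noise.
n_voxels = 200
truth = np.zeros(n_voxels)
truth[95:100] = 1.2                      # a 5-voxel activation, modest amplitude
data = truth + rng.standard_normal(n_voxels)

def smooth(x, width):
    """Boxcar smoothing -- a crude stand-in for the Gaussian spatial
    smoothing step in an fMRI pipeline; 'width' is the analyst's choice."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def peak_z(x):
    """Peak of the z-scored map -- one of many possible test statistics."""
    z = (x - x.mean()) / x.std()
    return z.max()

# Three defensible smoothing choices yield three different statistics;
# near a significance threshold they can yield different conclusions.
results = {width: peak_z(smooth(data, width)) for width in (3, 7, 15)}
for width, z in results.items():
    print(f"smoothing width {width:2d}: peak z = {z:.2f}")
```

Multiply that one knob by the dozens of other decisions Nichols lists (motion correction, signal-to-noise enhancement, region definitions, statistical thresholds) and the variability NARPS found across 70 teams becomes less surprising.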
If this topic interests you, I strongly suggest you read Williams’ article in its entirety.
A January 23, 2020 news item on Nanowerk features a number of new books. Here are summaries of a few of them from the news item (Note: Links have been removed),
The main goal of “Nanotechnology in Skin, Soft Tissue, and Bone Infections” is to deal with the role of nanobiotechnology in skin, soft tissue and bone infections since it is difficult to treat the infections due to the development of resistance in them against existing antibiotics.
The present interdisciplinary book is very useful for a diverse group of readers including nanotechnologists, medical microbiologists, dermatologists, osteologists, biotechnologists, bioengineers.
“Nanotechnology in Skin, Soft-Tissue, and Bone Infections” is divided into four sections: Section I includes the role of nanotechnology in skin infections such as atopic dermatitis, and nanomaterials for combating infections caused by bacteria and fungi. Section II covers how nanotechnology can be used for soft-tissue infections such as diabetic foot ulcer and other wound infections; Section III discusses the nanomaterials in artificial scaffolds, bone engineering, and bone infections caused by bacteria and fungi; and also the toxicity issues generated by nanomaterials in general and nanoparticles in particular.
“Advanced Materials for Defense: Development, Analysis and Applications” is a collection of high quality research and review papers submitted to the 1st World Conference on Advanced Materials for Defense (AUXDEFENSE 2018).
A wide range of topics related to the defense area such as ballistic protection, impact and energy absorption, composite materials, smart materials and structures, nanomaterials and nano structures, CBRN protection, thermoregulation, camouflage, auxetic materials, and monitoring systems is covered.
Written by the leading experts in these subjects, this work discusses both technological advances in terms of materials as well as product designing, analysis as well as case studies.
This volume will prove to be a valuable resource for researchers and scientists from different engineering disciplines such as materials science, chemical engineering, biological sciences, textile engineering, mechanical engineering, environmental science, and nanotechnology.
Nanoengineering is a branch of engineering that exploits the unique properties of nanomaterials—their size and quantum effects—and the interaction between these materials, in order to design and manufacture novel structures and devices that possess entirely new functionality and capabilities, which are not obtainable by macroscale engineering.
While the term nanoengineering is often used synonymously with the general term nanotechnology, the former technically focuses more closely on the engineering aspects of the field, as opposed to the broader science and general technology aspects that are encompassed by the latter.
“Nanoengineering: The Skills and Tools Making Technology Invisible” puts a spotlight on some of the scientists who are pushing the boundaries of technology and it gives examples of their work and how they are advancing knowledge one little step at a time.
This book is a collection of essays about researchers involved in nanoengineering and many other facets of nanotechnologies. This research involves truly multidisciplinary and international efforts, covering a wide range of scientific disciplines such as medicine, materials sciences, chemistry, toxicology, biology and biotechnology, physics and electronics.
The book showcases 176 very specific research projects and you will meet the scientists who develop the theories, conduct the experiments, and build the new materials and devices that will make nanoengineering a core technology platform for many future products and applications.
On January 28, 2020, Azonano featured a book review of “Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology.” The review, by marketing lead Rebecca Megson-Smith, was originally published on the NuNano company blog.
Covering science’s ‘greatest hits’ since we have been able to look at the world on the nanoscale, as well as where it is taking our understanding of life, Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology is an inspiring and joyful read.
As author Sonia Contera writes, biology is an area of intense interest and study. With the advent of nanotechnology, a more diverse range of scientists from across the disciplines are now coming together to solve some of the biggest issues of our time.
The ability to visualise, interact with, manipulate and create matter at the nanometer scale – the level of molecules, proteins and DNA – combined with the physicist’s quantitative and mathematical approach is revolutionising our understanding of the complexity which underpins life.
I particularly enjoyed the section that discussed the history of scanning tools. Here Contera highlights how profoundly the development of the STM [scanning tunneling microscope] transformed human interaction with matter.
Not only did it image at the atomic level with ‘unprecedented accuracy using a relatively simple, cheap tool’, but the STM was able to pick up and move the atoms around one by one. And what it couldn’t do effectively – work within the biological environments – was and is achievable through the introduction of the AFM [atomic force microscope].
She [Contera] writes:
“Physics urges us to consider life as a whole emergent from the greater whole – emanating from the same rules that govern the entire cosmos.”
I leave you with another bold declaration from Sonia about the good that the merging of the sciences has offered and, on behalf of everyone at NuNano, would like to wish you all a very Merry Christmas and Happy New Year – see you in 2020!
“As physics, engineering, computer science and materials science merge with biology, they are actually helping to reconnect science and technology with the deep questions that humans have asked themselves from the beginning of civilization: What is life? What does it mean to be human when we can manipulate and even exploit our own biology?”
Sonia Contera is professor of biological physics in the Department of Physics at the University of Oxford. She is a leading pioneer in the field of nanotechnology.
Megson-Smith certainly seems enthused about the book and she reminded me of how interested I was in STMs and AFMs when I first started investigating and writing about nanotechnology. Given the review but not having seen the book myself, it seems this might be a good introduction.
My introductory book was the 2009 Soft Machines: Nanotechnology and Life by Richard Jones, a professor of physics and astronomy at the University of Sheffield. I have great affection for the book and, if memory serves, it hasn’t really aged. One more thing, Jones can be very funny. It’s not many people who can successfully combine humour and nanotechnology.
At this point, it’s possible I’m wrong, but I think this is the first ‘memristor’ type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured here on this blog. In other words, it’s not, technically speaking, a memristor, but it has the same properties, so it qualifies as a neuromorphic chip.
A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),
Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.
The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …
A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.
The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.
The story in detail – background and method used
Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.
In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.
In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.
One very specific example: with the aid of such hardware, cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.
For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the contribution from the EC, the list of partners and more, there is the Fun-COMP webpage on fabiodisconzi.com.
Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),
Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
The part (the juicy bits) that satisfied some of my long-held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),
IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.
But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.
The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …
It seems to me there might be a bit more to the doctors’ trust issues, and I was surprised this didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),
Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.
Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.
Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.
Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation’ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.
Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.
Research interests: Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption
Positions held at the OII
DPhil student, October 2013 –
MSc Student, October 2012 – August 2013
Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting doctors to admit that their approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, to the institution of medicine. Also, one of the biggest problems in any field is getting people to change, and it’s not always about trust. In this instance, you’re asking doctors to back someone else’s opinion after they have rendered their own. This is difficult even when the other party is another human doctor, let alone a form of artificial intelligence.
If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),
Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.
Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”
Guess what happened? (Note: Links have been removed),
But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …
Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.
Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.
Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.
There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),
Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
Having input into the AI decision-making process somewhat addresses one of the problems, but commitment to one’s own judgment, even when there is overwhelming evidence to the contrary, is a perennially thorny problem. The legal case mentioned earlier is clearly one where the contrarian was wrong, but it’s not always that obvious. As well, sometimes the people who hold out against the majority are right.
Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),
U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.
“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.
The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.
In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
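To make the three SAT levels a little more concrete, here is a sketch of how that layered information might be organized as a data structure. The class and field names are my own illustrative groupings of the release's wording, not an actual ARL interface.

```python
# Sketch of the three SAT (Situation awareness-based Agent Transparency)
# levels described above, as plain data classes. Names are illustrative.

from dataclasses import dataclass

@dataclass
class SATLevel1:
    """Level 1 - basic state: what the agent is doing and intends to do."""
    current_state: str
    goals: list
    plans: list

@dataclass
class SATLevel2:
    """Level 2 - reasoning: why the agent chose this plan."""
    reasoning: str
    constraints: list   # limits the agent considered when planning
    affordances: list   # opportunities the agent considered

@dataclass
class SATLevel3:
    """Level 3 - projection: expected outcomes, with uncertainty."""
    projected_states: list
    predicted_consequences: list
    likelihood_of_success: float  # 0.0 - 1.0
    uncertainty: float            # 0.0 - 1.0

@dataclass
class AgentTransparencyReport:
    level1: SATLevel1
    level2: SATLevel2
    level3: SATLevel3 = None  # disclosed only at the highest transparency level

report = AgentTransparencyReport(
    level1=SATLevel1("navigating", ["reach waypoint B"], ["follow main road"]),
    level2=SATLevel2("road is fastest known route", ["fuel limit"], ["paved surface"]),
    level3=SATLevel3(["arrive in 12 min"], ["none anticipated"], 0.85, 0.10),
)
print(report.level3.likelihood_of_success)  # 0.85
```

The layering mirrors the model's intent: an operator at level 1 transparency sees only the first block, while higher transparency settings expose the reasoning and projection blocks as well.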
In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated – accepting the agent’s plan when it is correct and rejecting it when it is incorrect – when the agent had a higher level of transparency.
The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they perceived the ASM as more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.
Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.
“Bidirectional transparency, although conceptually straightforward – human and agent being mutually transparent about their reasoning process – can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance – just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.
The challenge is to design user interfaces, which can include visual, auditory, and other modalities, that support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.
Interesting, yes? Here’s a link and a citation for the paper,