Category Archives: human enhancement

Robots and a new perspective on disability

I’ve long wondered how disabilities would be viewed in a future where technology could render them largely irrelevant (h/t May 4, 2017 news item on phys.org). A May 4, 2017 essay by Thusha (Gnanthusharan) Rajendran of Heriot-Watt University on TheConversation.com provides a perspective on the possibilities (Note: Links have been removed),

When dealing with the otherness of disability, the Victorians in their shame built huge out-of-sight asylums, and their legacy of “them” and “us” continues to this day. Two hundred years later, technologies offer us an alternative view. The digital age is shattering barriers, and what used to be the norm is now being challenged.

What if we could change the environment, rather than the person? What if a virtual assistant could help a visually impaired person with their online shopping? And what if a robot “buddy” could help a person with autism navigate the nuances of workplace politics? These are just some of the questions that are being asked and which need answers as the digital age challenges our perceptions of normality.

The treatment of people with developmental conditions has a chequered history. In towns and cities across Britain, you will still see large Victorian buildings that were once places to “look after” people with disabilities, that is, remove them from society. Things became worse still during the time of the Nazis with an idealisation of the perfect and rejection of Darwin’s idea of natural diversity.

Today we face similar challenges about differences versus abnormalities. Arguably, current diagnostic systems do not help, because they diagnose the person and not “the system”. So, a child has challenging behaviour, rather than being in distress; the person with autism has a communication disorder rather than simply not being understood.

Natural-born cyborgs

In contrast, the digital world is all about systems. The field of human-computer interaction is about how things work between humans and computers or robots. Philosopher Andy Clark argues that humans have always been natural-born cyborgs – that is, we have always used technology (in its broadest sense) to improve ourselves.

The most obvious example is language itself. In the digital age we can become truly digitally enhanced. How many of us Google something rather than remembering it? How do you feel when you have no access to wi-fi? How much do we favour texting, tweeting and Facebook over face-to-face conversations? How much do we love and need our smartphones?

In the new field of social robotics, my colleagues and I are developing a robot buddy to help adults with autism to understand, for example, if their boss is pleased or displeased with their work. For many adults with autism, it is not the work itself that stops them from having successful careers, it is the social environment surrounding work. From the stress-inducing interview to workplace politics, the modern world of work is a social minefield. It is not easy, at times, for us neurotypicals, but for a person with autism it is a world full of contradictions and implied meaning.

Rajendran goes on to highlight efforts with autistic individuals; he also includes this video of his December 14, 2016 TEDx Heriot-Watt University talk, which largely focuses on his work with robots and autism (Note: This runs approximately 15 mins.),

The talk reminded me of a Feb. 6, 2017 posting (scroll down about 33% of the way) where I discussed a recent book about science communication and its failure to recognize the importance of pop culture in that endeavour. As an example, I used a then recent announcement from MIT (Massachusetts Institute of Technology) about their emotion detection wireless application and the almost simultaneous appearance of that application in a Feb. 2, 2017 episode of The Big Bang Theory (a popular US television comedy) featuring a character who could be seen as autistic making use of the emotion detection device.

In any event, the work described in the MIT news release is very similar to Rajendran’s, although the communication is delivered to the public through entirely different channels: a TEDx Talk and TheConversation.com (channels aimed at academics and those with academic interests) versus a pop culture television comedy with broad appeal.

Solar-powered graphene skin for more feeling in your prosthetics

A March 23, 2017 news item on Nanowerk highlights research that could put feeling into a prosthetic limb,

A new way of harnessing the sun’s rays to power ‘synthetic skin’ could help to create advanced prosthetic limbs capable of returning the sense of touch to amputees.

Engineers from the University of Glasgow, who have previously developed an ‘electronic skin’ covering for prosthetic hands made from graphene, have found a way to use some of graphene’s remarkable physical properties to use energy from the sun to power the skin.

Graphene is a highly flexible form of graphite which, despite being just a single atom thick, is stronger than steel, electrically conductive, and transparent. It is graphene’s optical transparency, which allows around 98% of the light which strikes its surface to pass directly through it, which makes it ideal for gathering energy from the sun to generate power.

A March 23, 2017 University of Glasgow press release, which originated the news item, details more about the research,

Dr Ravinder Dahiya

A new research paper, published today in the journal Advanced Functional Materials, describes how Dr Dahiya and colleagues from his Bendable Electronics and Sensing Technologies (BEST) group have integrated power-generating photovoltaic cells into their electronic skin for the first time.

Dr Dahiya, from the University of Glasgow’s School of Engineering, said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors which carry signals from the skin to the brain.

“My colleagues and I have already made significant steps in creating prosthetic prototypes which integrate synthetic skin and are capable of making very sensitive pressure measurements. Those measurements mean the prosthetic hand is capable of performing challenging tasks like properly gripping soft materials, which other prosthetics can struggle with. We are also using innovative 3D printing strategies to build more affordable sensitive prosthetic limbs, including the formation of a very active student club called ‘Helping Hands’.

“Skin capable of touch sensitivity also opens the possibility of creating robots capable of making better decisions about human safety. A robot working on a construction line, for example, is much less likely to accidentally injure a human if it can feel that a person has unexpectedly entered their area of movement and stop before an injury can occur.”

The new skin requires just 20 nanowatts of power per square centimetre, which is easily met even by the poorest-quality photovoltaic cells currently available on the market. And although currently energy generated by the skin’s photovoltaic cells cannot be stored, the team are already looking into ways to divert unused energy into batteries, allowing the energy to be used as and when it is required.
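
As an aside, a quick back-of-envelope calculation shows why even a poor photovoltaic cell can meet that 20 nanowatt-per-square-centimetre demand. The irradiance and efficiency figures below are my own illustrative assumptions, not numbers from the Glasgow team,

    # Rough power budget for the solar-powered 'skin' (illustrative figures only)
    irradiance_w_per_cm2 = 0.100      # assume roughly 100 mW/cm2 of bright sunlight
    pv_efficiency = 0.01              # assume a very poor 1% efficient photovoltaic cell
    skin_demand_w_per_cm2 = 20e-9     # 20 nanowatts per square centimetre, as quoted above

    generated_w_per_cm2 = irradiance_w_per_cm2 * pv_efficiency
    surplus_factor = generated_w_per_cm2 / skin_demand_w_per_cm2

    print(f"Generated: {generated_w_per_cm2} W/cm2")       # 0.001 W/cm2, i.e. 1 mW/cm2
    print(f"Surplus over demand: {surplus_factor:,.0f}x")   # about 50,000 times more than needed

Even with those deliberately pessimistic assumptions, the cells generate orders of magnitude more power than the skin’s sensors require, which is presumably why the team is already thinking about diverting the excess into batteries.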

Dr Dahiya added: “The other next step for us is to further develop the power-generation technology which underpins this research and use it to power the motors which drive the prosthetic hand itself. This could allow the creation of an entirely energy-autonomous prosthetic limb.

“We’ve already made some encouraging progress in this direction and we’re looking forward to presenting those results soon. We are also exploring the possibility of building on these exciting results to develop wearable systems for affordable healthcare. In this direction, recently we also got small funds from Scottish Funding Council.”

For more information about this advance and others in the field of prosthetics you may want to check out Megan Scudellari’s March 30, 2017 article for the IEEE’s (Institute of Electrical and Electronics Engineers) Spectrum (Note: Links have been removed),

Cochlear implants can restore hearing to individuals with some types of hearing loss. Retinal implants are now on the market to restore sight to the blind. But there are no commercially available prosthetics that restore a sense of touch to those who have lost a limb.

Several products are in development, including this haptic system at Case Western Reserve University, which would enable upper-limb prosthetic users to, say, pluck a grape off a stem or pull a potato chip out of a bag. It sounds simple, but such tasks are virtually impossible without a sense of touch and pressure.

Now, a team at the University of Glasgow that previously developed a flexible ‘electronic skin’ capable of making sensitive pressure measurements, has figured out how to power their skin with sunlight. …

Here’s a link to and a citation for the paper,

Energy-Autonomous, Flexible, and Transparent Tactile Skin by Carlos García Núñez, William Taube Navaraj, Emre O. Polat and Ravinder Dahiya. Advanced Functional Materials DOI: 10.1002/adfm.201606287 Version of Record online: 22 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although they never really explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
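
To make the “predict, compare, correct” loop described above a little more concrete, here is a minimal sketch (in Python/NumPy) of a tiny two-layer network trained by repetition and error correction on a toy problem. It is a generic illustration of the principle, not the systems Deltorn analyzes,

    import numpy as np

    # Toy two-layer network: the lower layer extracts simple patterns, the higher
    # layer forms a more abstract decision; repeated error correction tunes the weights.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 2))                      # inputs (e.g. crude shape features)
    y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]    # expected outputs

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)        # first (lower) layer
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)        # second (higher, more abstract) layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(5000):                              # repetition and optimisation
        h = np.tanh(X @ W1 + b1)                          # layer 1 output
        p = sigmoid(h @ W2 + b2)                          # layer 2 output (the prediction)
        err = p - y                                       # compare actual vs expected output
        d2 = err * p * (1 - p)                            # propagate the predictive error back
        dh = d2 @ W2.T * (1 - h ** 2)
        for param, grad in ((W2, h.T @ d2), (b2, d2.sum(0)),
                            (W1, X.T @ dh), (b1, dh.sum(0))):
            param -= 0.5 * grad / len(X)                  # correct the weights

    print("training accuracy:", ((p > 0.5) == y).mean())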

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Fractal imagery (from nature or from art or from mathematics) soothes

Jackson Pollock’s work is often cited when fractal art is discussed. I think it’s largely because he likely produced the art without knowing about the concept.

No. 5, 1948 (Jackson Pollock, downloaded from Wikipedia essay about No. 5, 1948)

Richard Taylor, a professor of physics at the University of Oregon, provides more information about how fractals affect us and how this is relevant to his work with retinal implants in a March 30, 2017 essay for The Conversation (h/t Mar. 31, 2017 news item on phys.org) (Note: Links have been removed),

Humans are visual creatures. Objects we call “beautiful” or “aesthetic” are a crucial part of our humanity. Even the oldest known examples of rock and cave art served aesthetic rather than utilitarian roles. Although aesthetics is often regarded as an ill-defined vague quality, research groups like mine are using sophisticated techniques to quantify it – and its impact on the observer.

We’re finding that aesthetic images can induce staggering changes to the body, including radical reductions in the observer’s stress levels. Job stress alone is estimated to cost American businesses many billions of dollars annually, so studying aesthetics holds a huge potential benefit to society.

Researchers are untangling just what makes particular works of art or natural scenes visually appealing and stress-relieving – and one crucial factor is the presence of the repetitive patterns called fractals.

When it comes to aesthetics, who better to study than famous artists? They are, after all, the visual experts. My research group took this approach with Jackson Pollock, who rose to the peak of modern art in the late 1940s by pouring paint directly from a can onto horizontal canvases laid across his studio floor. Although battles raged among Pollock scholars regarding the meaning of his splattered patterns, many agreed they had an organic, natural feel to them.

My scientific curiosity was stirred when I learned that many of nature’s objects are fractal, featuring patterns that repeat at increasingly fine magnifications. For example, think of a tree. First you see the big branches growing out of the trunk. Then you see smaller versions growing out of each big branch. As you keep zooming in, finer and finer branches appear, all the way down to the smallest twigs. Other examples of nature’s fractals include clouds, rivers, coastlines and mountains.

In 1999, my group used computer pattern analysis techniques to show that Pollock’s paintings are as fractal as patterns found in natural scenery. Since then, more than 10 different groups have performed various forms of fractal analysis on his paintings. Pollock’s ability to express nature’s fractal aesthetics helps explain the enduring popularity of his work.
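
For readers curious about what “computer pattern analysis” of this kind involves, the standard tool is box counting: cover a black-and-white version of the image with grids of ever smaller boxes and track how the number of occupied boxes grows as the boxes shrink. The short Python sketch below illustrates the idea on random noise; it is a generic textbook version, not a reproduction of the Oregon group’s actual analysis,

    import numpy as np

    def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the fractal (box-counting) dimension of a binary image."""
        counts = []
        for s in sizes:
            # Trim so the image tiles exactly into s-by-s boxes, then count
            # how many boxes contain at least one 'painted' pixel.
            h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
            boxes = image[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(boxes.any(axis=(1, 3)).sum())
        # The dimension is the slope of log(box count) against log(1 / box size).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Demo on random noise; a scanned painting would be thresholded to black/white first.
    demo = np.random.default_rng(1).random((256, 256)) > 0.5
    print("estimated dimension:", box_counting_dimension(demo))

A fully filled-in image like this demo comes out close to dimension 2 and an empty one close to 0; natural scenes and Pollock’s poured patterns land somewhere in between.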

The impact of nature’s aesthetics is surprisingly powerful. In the 1980s, architects found that patients recovered more quickly from surgery when given hospital rooms with windows looking out on nature. Other studies since then have demonstrated that just looking at pictures of natural scenes can change the way a person’s autonomic nervous system responds to stress.

Are fractals the secret to some soothing natural scenes? Ronan, CC BY-NC-ND

For me, this raises the same question I’d asked of Pollock: Are fractals responsible? Collaborating with psychologists and neuroscientists, we measured people’s responses to fractals found in nature (using photos of natural scenes), art (Pollock’s paintings) and mathematics (computer generated images) and discovered a universal effect we labeled “fractal fluency.”

Through exposure to nature’s fractal scenery, people’s visual systems have adapted to efficiently process fractals with ease. We found that this adaptation occurs at many stages of the visual system, from the way our eyes move to which regions of the brain get activated. This fluency puts us in a comfort zone and so we enjoy looking at fractals. Crucially, we used EEG to record the brain’s electrical activity and skin conductance techniques to show that this aesthetic experience is accompanied by stress reduction of 60 percent – a surprisingly large effect for a nonmedicinal treatment. This physiological change even accelerates post-surgical recovery rates.

Pollock’s motivation for continually increasing the complexity of his fractal patterns became apparent recently when I studied the fractal properties of Rorschach inkblots. These abstract blots are famous because people see imaginary forms (figures and animals) in them. I explained this process in terms of the fractal fluency effect, which enhances people’s pattern recognition processes. The low complexity fractal inkblots made this process trigger-happy, fooling observers into seeing images that aren’t there.

Pollock disliked the idea that viewers of his paintings were distracted by such imaginary figures, which he called “extra cargo.” He intuitively increased the complexity of his works to prevent this phenomenon.

Pollock’s abstract expressionist colleague, Willem De Kooning, also painted fractals. When he was diagnosed with dementia, some art scholars called for his retirement amid concerns that it would reduce the nurture component of his work. Yet, although they predicted a deterioration in his paintings, his later works conveyed a peacefulness missing from his earlier pieces. Recently, the fractal complexity of his paintings was shown to drop steadily as he slipped into dementia. The study focused on seven artists with different neurological conditions and highlighted the potential of using art works as a new tool for studying these diseases. To me, the most inspiring message is that, when fighting these diseases, artists can still create beautiful artworks.

Recognizing how looking at fractals reduces stress means it’s possible to create retinal implants that mimic the mechanism. Nautilus image via www.shutterstock.com.

My main research focuses on developing retinal implants to restore vision to victims of retinal diseases. At first glance, this goal seems a long way from Pollock’s art. Yet, it was his work that gave me the first clue to fractal fluency and the role nature’s fractals can play in keeping people’s stress levels in check. To make sure my bio-inspired implants induce the same stress reduction when looking at nature’s fractals as normal eyes do, they closely mimic the retina’s design.

When I started my Pollock research, I never imagined it would inform artificial eye designs. This, though, is the power of interdisciplinary endeavors – thinking “out of the box” leads to unexpected but potentially revolutionary ideas.

Fabulous essay, eh?

I have previously featured Jackson Pollock in a June 30, 2011 posting titled: Jackson Pollock’s physics, and briefly mentioned him in a May 11, 2010 visual arts commentary titled: Rennie Collection’s latest: Richard Jackson, Georges Seurat & Jackson Pollock, guns, the act of painting, and women (scroll down about 45% of the way).

Quadriplegic man reanimates a limb with implanted brain-recording and muscle-stimulating systems

It took me a few minutes to figure out why this item about a quadriplegic (also known as tetraplegic) man is news. After all, I have a May 17, 2012 posting which features a video and information about a quadri(tetra)plegic woman who was drinking her first cup of coffee, independently, in many years. The difference is that she was using an external robotic arm and this man is using *his own arm*,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies.

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Holding a makeshift handle pierced through a dry sponge, Kochevar scratched the side of his nose with the sponge. He scooped forkfuls of mashed potatoes from a bowl—perhaps his top goal—and savored each mouthful.

“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar, 56, of Cleveland. “It’s better than I thought it would be.”

Kochevar is the focal point of research led by Case Western Reserve University, the Cleveland Functional Electrical Stimulation (FES) Center at the Louis Stokes Cleveland VA Medical Center and University Hospitals Cleveland Medical Center (UH). A study of the work was published in The Lancet March 28 [2017] at 6:30 p.m. U.S. Eastern time.

“He’s really breaking ground for the spinal cord injury community,” said Bob Kirsch, chair of Case Western Reserve’s Department of Biomedical Engineering, executive director of the FES Center and principal investigator (PI) and senior author of the research. “This is a major step toward restoring some independence.”

When asked, people with quadriplegia say their first priority is to scratch an itch, feed themselves or perform other simple functions with their arm and hand, instead of relying on caregivers.

“By taking the brain signals generated when Bill attempts to move, and using them to control the stimulation of his arm and hand, he was able to perform personal functions that were important to him,” said Bolu Ajiboye, assistant professor of biomedical engineering and lead study author.

Technology and training

The research with Kochevar is part of the ongoing BrainGate2* pilot clinical trial being conducted by a consortium of academic and VA institutions assessing the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Other investigational BrainGate research has shown that people with paralysis can control a cursor on a computer screen or a robotic arm (braingate.org).

“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions and those with traumatic spinal cord injury or any other form of paralysis cannot,” said Benjamin Walter, associate professor of neurology at Case Western Reserve School of Medicine, clinical PI of the Cleveland BrainGate2 trial and medical director of the Deep Brain Stimulation Program at UH Cleveland Medical Center.

“The ultimate hope of any of these individuals is to restore this function,” Walter said. “By restoring the communication of the will to move from the brain directly to the body this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”

Jonathan Miller, assistant professor of neurosurgery at Case Western Reserve School of Medicine and director of the Functional and Restorative Neurosurgery Center at UH, led a team of surgeons who implanted two 96-channel electrode arrays—each about the size of a baby aspirin—in Kochevar’s motor cortex, on the surface of the brain.

The arrays record brain signals created when Kochevar imagines movement of his own arm and hand. The brain-computer interface extracts information from the brain signals about what movements he intends to make, then passes the information to command the electrical stimulation system.

To prepare him to use his arm again, Kochevar first learned how to use his brain signals to move a virtual-reality arm on a computer screen.

“He was able to do it within a few minutes,” Kirsch said. “The code was still in his brain.”

As Kochevar’s ability to move the virtual arm improved through four months of training, the researchers believed he would be capable of controlling his own arm and hand.

Miller then led a team that implanted the FES systems’ 36 electrodes that animate muscles in the upper and lower arm.

The BCI decodes the recorded brain signals into the intended movement command, which is then converted by the FES system into patterns of electrical pulses.

The pulses sent through the FES electrodes trigger the muscles controlling Kochevar’s hand, wrist, arm, elbow and shoulder. To overcome gravity that would otherwise prevent him from raising his arm and reaching, Kochevar uses a mobile arm support, which is also under his brain’s control.
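
Pulling those pieces together, the signal chain is: record cortical activity, decode the intended movement, then translate that command into coordinated stimulation across the 36 muscle electrodes. The Python sketch below is a highly simplified, hypothetical rendering of that loop (the linear decoder, the “synergy” matrix and the function names are my own placeholders, not the BrainGate2 or FES software),

    import numpy as np

    N_RECORDING_CHANNELS = 192   # two 96-channel arrays in motor cortex, as described above
    N_FES_ELECTRODES = 36        # muscle-stimulating electrodes in the upper and lower arm

    rng = np.random.default_rng(3)
    # Hypothetical linear decoder, assumed to have been fit during the
    # virtual-arm training phase: neural features -> intended (x, y, z) velocity.
    decoder = rng.normal(0, 0.01, (N_RECORDING_CHANNELS, 3))
    # Hypothetical fixed 'synergy' map spreading the command across the electrodes
    # so that the muscles contract in a coordinated fashion.
    synergy = rng.uniform(0, 1, (3, N_FES_ELECTRODES))

    def decode_intent(firing_rates):
        """Turn per-channel firing rates into an intended movement command."""
        return firing_rates @ decoder

    def intent_to_pulses(intended_velocity):
        """Map the movement command onto normalised per-electrode pulse amplitudes."""
        return np.clip(intended_velocity @ synergy, 0, 1)

    # One cycle of the loop: record -> decode -> stimulate
    rates = rng.poisson(5, N_RECORDING_CHANNELS).astype(float)
    pulses = intent_to_pulses(decode_intent(rates))
    print(pulses.shape)   # one pulse amplitude per FES electrode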

New Capabilities

Eight years of muscle atrophy required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

Kochevar can make each joint in his right arm move individually. Or, just by thinking about a task such as feeding himself or getting a drink, the muscles are activated in a coordinated fashion.

When asked to describe how he commanded the arm movements, Kochevar told investigators, “I’m making it move without having to really concentrate hard at it…I just think ‘out’…and it goes.”

Kochevar is fitted with temporarily implanted FES technology that has a track record of reliable use in people. The BCI and FES system together represent early feasibility that gives the research team insights into the potential future benefit of the combined system.

Advances needed to make the combined technology usable outside of a lab are not far from reality, the researchers say. Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

Kochevar welcomes new technology—even if it requires more surgery—that will enable him to move better. “This won’t replace caregivers,” he said. “But, in the long term, people will be able, in a limited way, to do more for themselves.”

There is more about the research in a March 29, 2017 article by Sarah Boseley for The Guardian,

Bill Kochevar, 53, has had electrical implants in the motor cortex of his brain and sensors inserted in his forearm, which allow the muscles of his arm and hand to be stimulated in response to signals from his brain, decoded by computer. After eight years, he is able to drink and feed himself without assistance.

“I think about what I want to do and the system does it for me,” Kochevar told the Guardian. “It’s not a lot of thinking about it. When I want to do something, my brain does what it does.”

The experimental technology, pioneered by the Case Western Reserve University in Cleveland, Ohio, is the first in the world to restore brain-controlled reaching and grasping in a person with complete paralysis.

For now, the process is relatively slow, but the scientists behind the breakthrough say this is proof of concept and that they hope to streamline the technology until it becomes a routine treatment for people with paralysis. In the future, they say, it will also be wireless and the electrical arrays and sensors will all be implanted under the skin and invisible.

A March 28, 2017 Lancet news release on EurekAlert provides a little more technical insight into the research and Kochevar’s efforts,

Although only tested with one participant, the study is a major advance and the first to restore brain-controlled reaching and grasping in a person with complete paralysis. The technology, which is only for experimental use in the USA, circumvents rather than repairs spinal injuries, meaning the participant relies on the device being implanted and switched on to move.

“Our research is at an early stage, but we believe that this neuro-prosthesis could offer individuals with paralysis the possibility of regaining arm and hand functions to perform day-to-day activities, offering them greater independence,” said lead author Dr Bolu Ajiboye, Case Western Reserve University, USA. “So far it has helped a man with tetraplegia to reach and grasp, meaning he could feed himself and drink. With further development, we believe the technology could give more accurate control, allowing a wider range of actions, which could begin to transform the lives of people living with paralysis.” [1]

Previous research has used similar elements of the neuro-prosthesis. For example, a brain-computer interface linked to electrodes on the skin has helped a person with less severe paralysis open and close his hand, while other studies have allowed participants to control a robotic arm using their brain signals. However, this is the first to restore reaching and grasping via the system in a person with a chronic spinal cord injury.

In this study, a 53 year-old man who had been paralysed below the shoulders for eight years underwent surgery to have the neuro-prosthesis fitted.

This involved brain surgery to place sensors in the motor cortex area of his brain responsible for hand movement – creating a brain-computer interface that learnt which movements his brain signals were instructing for. This initial stage took four months and included training using a virtual reality arm.

He then underwent another procedure placing 36 muscle stimulating electrodes into his upper and lower arm, including four that helped restore finger and thumb, wrist, elbow and shoulder movements. These were switched on 17 days after the procedure, and began stimulating the muscles for eight hours a week over 18 weeks to improve strength, movement and reduce muscle fatigue.

The researchers then wired the brain-computer interface to the electrical stimulators in his arm, using a decoder (mathematical algorithm) to translate his brain signals into commands for the electrodes in his arm. The electrodes stimulated the muscles to produce contractions, helping the participant intuitively complete the movements he was thinking of. The system also involved an arm support to stop gravity simply pulling his arm down.

During his training, the participant described how he controlled the neuro-prosthesis: “It’s probably a good thing that I’m making it move without having to really concentrate hard at it. I just think ‘out’ and it just goes.”

After 12 months of having the neuro-prosthesis fitted, the participant was asked to complete day-to-day tasks, including drinking a cup of coffee and feeding himself. First of all, he observed while his arm completed the action under computer control. During this, he thought about making the same movement so that the system could recognise the corresponding brain signals. The two systems were then linked and he was able to use it to drink a coffee and feed himself.

He successfully drank in 11 out of 12 attempts, and it took him roughly 20-40 seconds to complete the task. When feeding himself, he did so multiple times – scooping forkfuls of food and navigating his hand to his mouth to take several bites.

“Although similar systems have been used before, none of them have been as easy to adopt for day-to-day use and they have not been able to restore both reaching and grasping actions,” said Dr Ajiboye. “Our system builds on muscle stimulating electrode technology that is already available and will continue to improve with the development of new fully implanted and wireless brain-computer interface systems. This could lead to enhanced performance of the neuro-prosthesis with better speed, precision and control.” [1]

At the time of the study, the participant had had the neuro-prosthesis implanted for almost two years (717 days) and in this time experienced four minor, non-serious adverse events which were treated and resolved.

Despite its achievements, the neuro-prosthesis still had some limitations, including that movements made using it were slower and less accurate than those made using the virtual reality arm the participant used for training. When using the technology, the participant also needed to watch his arm as he lost his sense of proprioception – the ability to intuitively sense the position and movement of limbs – as a result of the paralysis.

Writing in a linked Comment, Dr Steve Perlmutter, University of Washington, USA, said: “The goal is futuristic: a paralysed individual thinks about moving her arm as if her brain and muscles were not disconnected, and implanted technology seamlessly executes the desired movement… This study is groundbreaking as the first report of a person executing functional, multi-joint movements of a paralysed limb with a motor neuro-prosthesis. However, this treatment is not nearly ready for use outside the lab. The movements were rough and slow and required continuous visual feedback, as is the case for most available brain-machine interfaces, and had restricted range due to the use of a motorised device to assist shoulder movements… Thus, the study is a proof-of-principle demonstration of what is possible, rather than a fundamental advance in neuro-prosthetic concepts or technology. But it is an exciting demonstration nonetheless, and the future of motor neuro-prosthetics to overcome paralysis is brighter.”

[1] Quote direct from author and cannot be found in the text of the Article.

Here’s a link to and a citation for the paper,

Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration by A Bolu Ajiboye, Francis R Willett, Daniel R Young, William D Memberg, Brian A Murphy, Jonathan P Miller, Benjamin L Walter, Jennifer A Sweet, Harry A Hoyen, Michael W Keith, Prof P Hunter Peckham, John D Simeral, Prof John P Donoghue, Prof Leigh R Hochberg, Prof Robert F Kirsch. The Lancet DOI: http://dx.doi.org/10.1016/S0140-6736(17)30601-3 Published: 28 March 2017 [online?]

This paper is behind a paywall.

For anyone  who’s interested, you can find the BrainGate website here.

*I initially misidentified the nature of the achievement and stated that Kochevar used a “robotic arm, which is attached to his body” when it was his own reanimated arm. Corrected on April 25, 2017.

Bidirectional prosthetic-brain communication with light?

The possibility of not only being able to make a prosthetic that allows a tetraplegic to grab a coffee cup but also to feel that cup with their ‘hand’ is one step closer to reality according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces; the main application being the use of neural prosthesis in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems however have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained by resorting to modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until this moment, brain-machine interfaces are operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The direct flow of information between the brain and the machine remains thus unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.
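
Stripped to its logic, the closed loop described above reads out one chosen neuron, converts its activity into a proportional number of blue-light pulses to sensory cortex, and rewards the mouse whenever activity crosses a threshold. The sketch below lays that loop out in Python; the three hardware-facing functions are placeholders standing in for the two-photon read-out, the optical stimulator and the reward dispenser, not the UNIGE lab’s actual software,

    import random

    ACTIVITY_THRESHOLD = 3.0   # above-threshold activation earns a reward (arbitrary units)
    FEEDBACK_GAIN = 2.0        # light pulses delivered per unit of neural activity

    def read_target_neuron():
        # Placeholder for the two-photon read-out of the single neuron
        # chosen for neuroprosthetic control.
        return random.expovariate(1 / 2.0)

    def stimulate_sensory_cortex(n_light_pulses):
        # Placeholder for the blue-light feedback to the photosensitised sensory cortex.
        pass

    def deliver_reward():
        # Placeholder for the reward given to the mouse.
        pass

    rewarded = 0
    for trial in range(1000):
        activity = read_target_neuron()
        stimulate_sensory_cortex(int(FEEDBACK_GAIN * activity))   # feedback proportional to activity
        if activity > ACTIVITY_THRESHOLD:   # operant conditioning: reward above-threshold events
            deliver_reward()
            rewarded += 1

    print("rewarded trials:", rewarded)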

This means that the artificial sensation was not only perceived, but that it was successfully integrated as a feedback of the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that the reason why this fabricated sensation is so rapidly assimilated is because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought and probably reflects fundamental neural circuit mechanisms. This type of bidirectional interface might allow in the future more precisely displacing robotic arms, feeling touched objects or perceiving the necessary force to grasp them.

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

By resorting to modern imaging tools, hundreds of neurons in the surrounding area could also be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.

Brain and machine as one (machine/flesh)

The essay on brains and machines becoming intertwined is making the rounds. First stop on my tour was its Oct. 4, 2016 appearance on the Mail & Guardian, then there was its Oct. 3, 2016 appearance on The Conversation, and finally (moving forward in time) there was its Oct. 4, 2016 appearance on the World Economic Forum website as part of their Final Frontier series.

The essay was written by Richard Jones of Sheffield University (mentioned here many times before but most recently in a Sept. 4, 2014 posting). His book ‘Soft Machines’ provided me with an important and eminently readable introduction to nanotechnology. He is a professor of physics at the University of Sheffield and here’s more from his essay (Oct. 3, 2016 on The Conversation) about brains and machines (Note: Links have been removed),

Imagine a condition that leaves you fully conscious, but unable to move or communicate, as some victims of severe strokes or other neurological damage experience. This is locked-in syndrome, when the outward connections from the brain to the rest of the world are severed. Technology is beginning to promise ways of remaking these connections, but is it our ingenuity or the brain’s that is making it happen?

Ever since an 18th-century biologist called Luigi Galvani made a dead frog twitch we have known that there is a connection between electricity and the operation of the nervous system. We now know that the signals in neurons in the brain are propagated as pulses of electrical potential, whose effects can be detected by electrodes in close proximity. So in principle, we should be able to build an outward neural interface system – that is to say, a device that turns thought into action.

In fact, we already have the first outward neural interface system to be tested in humans. It is called BrainGate and consists of an array of micro-electrodes, implanted into the part of the brain concerned with controlling arm movements. Signals from the micro-electrodes are decoded and used to control the movement of a cursor on a screen, or the motion of a robotic arm.

A crucial feature of these systems is the need for some kind of feedback. A patient must be able to see the effect of their willed patterns of thought on the movement of the cursor. What’s remarkable is the ability of the brain to adapt to these artificial systems, learning to control them better.

You can find out more about BrainGate in my May 17, 2012 posting which also features a video of a woman controlling a mechanical arm so she can drink from a cup of coffee by herself for the first time in 15 years.

Jones goes on to describe the cochlear implants (although there’s no mention of the controversy; not everyone believes they’re a good idea) and retinal implants that are currently available. Jones notes this (Note: Links have been removed),

The key message of all this is that brain interfaces now are a reality and that the current versions will undoubtedly be improved. In the near future, for many deaf and blind people, for people with severe disabilities – including, perhaps, locked-in syndrome – there are very real prospects that some of their lost capabilities might be at least partially restored.

Until then, our current neural interface systems are very crude. One problem is size; the micro-electrodes in use now, with diameters of tens of microns, may seem tiny, but they are still coarse compared to the sub-micron dimensions of individual nerve fibres. And there is a problem of scale. The BrainGate system, for example, consists of 100 micro-electrodes in a square array; compare that to the many tens of billions of neurons in the brain. The fact these devices work at all is perhaps more a testament to the adaptability of the human brain than to our technological prowess.

Scale models

So the challenge is to build neural interfaces on scales that better match the structures of biology. Here, we move into the world of nanotechnology. There has been much work in the laboratory to make nano-electronic structures small enough to read out the activity of a single neuron. In the 1990s, Peter Fromherz, at the Max Planck Institute for Biochemistry, was a pioneer of using silicon field effect transistors, similar to those used in commercial microprocessors, to interact with cultured neurons. In 2006, Charles Lieber’s group at Harvard succeeded in using transistors made from single carbon nanotubes – whiskers of carbon just one nanometer in diameter – to measure the propagation of single nerve pulses along the nerve fibres.

But these successes have been achieved, not in whole organisms, but in cultured nerve cells which are typically on something like the surface of a silicon wafer. It’s going to be a challenge to extend these methods into three dimensions, to interface with a living brain. Perhaps the most promising direction will be to create a 3D “scaffold” incorporating nano-electronics, and then to persuade growing nerve cells to infiltrate it to create what would in effect be cyborg tissue – living cells and inorganic electronics intimately mixed.

I have featured Charles Lieber and his work here in two recent posts: ‘Bionic’ cardiac patch with nanoelectric scaffolds and living cells on July 11, 2016 and Long-term brain mapping with injectable electronics on Sept. 22, 2016.

For anyone interested in more about the controversy regarding cochlear implants, there’s this page on the Brown University (US) website. You might also want to check out Gregor Wolbring (professor at the University of Calgary), who has written extensively on the concept of ableism (links to his work can be found at the end of this post). I have excerpted the portion where Gregor defines ‘ableism’ from an Aug. 30, 2011 post,

From Gregor’s June 17, 2011 posting on the FedCan blog,

The term ableism evolved from the disabled people rights movements in the United States and Britain during the 1960s and 1970s.  It questions and highlights the prejudice and discrimination experienced by persons whose body structure and ability functioning were labelled as ‘impaired’ as sub species-typical. Ableism of this flavor is a set of beliefs, processes and practices, which favors species-typical normative body structure based abilities. It labels ‘sub-normative’ species-typical biological structures as ‘deficient’, as not able to perform as expected.

The disabled people rights discourse and disability studies scholars question the assumption of deficiency intrinsic to ‘below the norm’ labeled body abilities and the favoritism for normative species-typical body abilities. The discourse around deafness and Deaf Culture would be one example where many hearing people expect the ability to hear. This expectation leads them to see deafness as a deficiency to be treated through medical means. In contrast, many Deaf people see hearing as an irrelevant ability and do not perceive themselves as ill and in need of gaining the ability to hear. Within the disabled people rights framework ableism was set up as a term to be used like sexism and racism to highlight unjust and inequitable treatment.

Ableism is, however, much more pervasive.

You can find out more about Gregor and his work here: http://www.crds.org/research/faculty/Gregor_Wolbring2.shtml or here:
https://www.facebook.com/GregorWolbring.

Tiny sensors produced by nanoscale 3D printing could lead to new generation of atomic force microscopes

A Sept. 26, 2016 news item on Nanowerk features research into producing smaller sensors for atomic force microscopes (AFMs) to achieve greater sensitivity,

Tiny sensors made through nanoscale 3D printing may be the basis for the next generation of atomic force microscopes. These nanosensors can enhance the microscopes’ sensitivity and detection speed by miniaturizing their detection component up to 100 times. The sensors were used in a real-world application for the first time at EPFL, and the results are published in Nature Communications.

A Sept. 26, 2016 École Polytechnique Fédérale de Lausanne (EPFL; Switzerland) press release by Laure-Anne Pessina, which originated the news item, expands on the theme (Note: A link has been removed),

Atomic force microscopy is based on powerful technology that works a little like a miniature turntable. A tiny cantilever with a nanometric tip passes over a sample and traces its relief, atom by atom. The tip’s infinitesimal up-and-down movements are picked up by a sensor so that the sample’s topography can be determined. (…)

One way to improve atomic force microscopes is to miniaturize the cantilever, as this will reduce inertia, increase sensitivity, and speed up detection. Researchers at EPFL’s Laboratory for Bio- and Nano-Instrumentation achieved this by equipping the cantilever with a 5-nanometer thick sensor made with a nanoscale 3D-printing technique. “Using our method, the cantilever can be 100 times smaller,” says Georg Fantner, the lab’s director.

Electrons that jump over obstacles

The nanometric tip’s up-and-down movements can be measured through the deformation of the sensor placed at the fixed end of the cantilever. But because the researchers were dealing with minute movements – smaller than an atom – they had to pull a trick out of their hat.

Together with Michael Huth’s lab at Goethe Universität at Frankfurt am Main, they developed a sensor made up of highly conductive platinum nanoparticles surrounded by an insulating carbon matrix. Under normal conditions, the carbon isolates the electrons. But at the nano-scale, a quantum effect comes into play: some electrons jump through the insulating material and travel from one nanoparticle to the next. “It’s sort of like if people walking on a path came up against a wall and only the courageous few managed to climb over it,” said Fantner.

When the shape of the sensor changes, the nanoparticles move further away from each other and the electrons jump between them less frequently. Changes in the current thus reveal the deformation of the sensor and the composition of the sample.

Tailor-made sensors

The researchers’ real feat was in finding a way to produce these sensors in nanoscale dimensions while carefully controlling their structure and, by extension, their properties. “In a vacuum, we distribute a precursor gas containing platinum and carbon atoms over a substrate. Then we apply an electron beam. The platinum atoms gather and form nanoparticles, and the carbon atoms naturally form a matrix around them,” said Maja Dukic, the article’s lead author. “By repeating this process, we can build sensors with any thickness and shape we want. We have proven that we could build these sensors and that they work on existing infrastructures. Our technique can now be used for broader applications, ranging from biosensors, ABS sensors for cars, to touch sensors on flexible membranes in prosthetics and artificial skin.”
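If you’re curious about how an exponential tunnelling effect becomes a usable strain signal, here’s a toy model in Python. The gap and decay-length numbers are my own invented assumptions, not values from the EPFL paper; the point is the shape of the relationship, in which the current between neighbouring platinum nanoparticles falls off exponentially as strain stretches the carbon gaps between them.

```python
import math

# Toy model of the nanogranular tunnelling strain sensor described above.
# Tunnelling current between neighbouring platinum nanoparticles falls off
# exponentially with the width of the insulating carbon gap, so stretching the
# sensor (strain) shows up as a change in resistance. All numbers are
# illustrative assumptions, not values from the EPFL paper.

gap_nm = 1.0           # assumed carbon gap between nanoparticles at rest (nm)
decay_length_nm = 0.1  # assumed tunnelling decay length in the carbon matrix (nm)

def relative_resistance(strain):
    """Resistance relative to the unstrained sensor, R(strain) / R(0)."""
    stretched_gap_nm = gap_nm * (1.0 + strain)
    return math.exp((stretched_gap_nm - gap_nm) / decay_length_nm)

for strain in (0.0, 0.001, 0.005, 0.01):  # 0% to 1% strain
    print(f"strain {strain:.3f} -> R/R0 = {relative_resistance(strain):.3f}")
```

With these made-up numbers, a 1% strain changes the resistance by roughly 10%, which is the kind of sensitivity that makes the minute flexing of a cantilever readable as a current.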

Here’s a link to and a citation for the paper,

Direct-write nanoscale printing of nanogranular tunnelling strain sensors for sub-micrometre cantilevers by Maja Dukic, Marcel Winhold, Christian H. Schwalb, Jonathan D. Adams, Vladimir Stavrov, Michael Huth, & Georg E. Fantner. Nature Communications 7, Article number: 12487 doi:10.1038/ncomms12487 Published 26 September 2016

This is an open access paper.
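Coming back to the press release’s claim that a cantilever 100 times smaller means faster detection, here’s a back-of-the-envelope calculation in Python using the textbook formula for the fundamental resonance of a rectangular cantilever. The dimensions and the choice of silicon are my own assumptions for illustration, not details of the EPFL design.

```python
import math

# Back-of-the-envelope estimate of why a smaller cantilever means faster detection,
# using the textbook formula for the fundamental flexural resonance of a
# rectangular cantilever. Dimensions and the choice of silicon are illustrative
# assumptions, not details of the EPFL design.

E = 169e9     # Young's modulus of silicon (Pa)
rho = 2330.0  # density of silicon (kg/m^3)

def resonant_frequency_hz(length_m, thickness_m):
    """Fundamental flexural resonance of a rectangular cantilever (Hz)."""
    return 0.1615 * (thickness_m / length_m**2) * math.sqrt(E / rho)

# A conventional AFM cantilever versus one shrunk 100-fold in every dimension.
f_conventional = resonant_frequency_hz(100e-6, 1e-6)  # 100 um long, 1 um thick
f_miniature = resonant_frequency_hz(1e-6, 10e-9)      # 1 um long, 10 nm thick

print(f"conventional cantilever: {f_conventional / 1e3:.0f} kHz")
print(f"miniaturised cantilever: {f_miniature / 1e6:.1f} MHz")
print(f"frequency (and speed) gain: {f_miniature / f_conventional:.0f}x")
```

The scaling is the interesting part: shrink every dimension by a factor of s and the resonant frequency, which sets how quickly the tip can respond, rises by that same factor s, while the moving mass, and with it the inertia, drops by s cubed.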

Man with world’s first implanted bionic arm participates in first Cybathlon (olympics for cyborgs)

The world’s first Cybathlon is being held on Oct. 8, 2016 in Zurich, Switzerland. One of the participants is an individual who took part in some groundbreaking research into implants which was featured in my Oct. 10, 2014 posting. There’s more about the Cybathlon and the participant in an Oct. 4, 2016 news item on phys.org,

A few years ago, a patient was implanted with a bionic arm for the first time in the world using control technology developed at Chalmers University of Technology. He is now taking part in Cybathlon, a new international competition in which 74 participants with physical disabilities will compete against each other, using the latest robotic prostheses and other assistive technologies – a sort of ‘Cyborg Olympics’.

The Paralympics will now be followed by the Cybathlon, which takes place in Zürich on October 8th [2016]. This is the first major competition to show that the boundaries between human and machine are becoming more and more blurred. The participants will compete in six different disciplines using the machines they are connected to as well as possible.

Cybathlon is intended to drive forward the development of prostheses and other types of assistive aids. Today, such technologies are often highly advanced technically, but provide limited value in everyday life.

An Oct. 4, 2016 Chalmers University of Technology press release by Johanna Wilde, which originated the news item, provides details about the competitor, his prosthetic device, and more,

Magnus, one of the participants, has now had his biomechatronically integrated arm prosthesis for almost four years. He says that his life has totally changed since the implantation, which was performed by Dr Rickard Brånemark, associate professor at Sahlgrenska University Hospital.

“I don’t feel handicapped since I got this arm”, says Magnus. “I can now work full time and can perform all the tasks in both my job and my family life. The prosthesis doesn’t feel like a machine, but more like my own arm.”

Magnus lives in northern Sweden and works as a lorry driver. He regularly visits Gothenburg in southern Sweden and carries out tests with researcher Max Ortiz Catalan, assistant professor at Chalmers University of Technology, who has been in charge of developing the technology and leads the team competing in the Cybathlon.

“This is a completely new research field in which we have managed to directly connect the artificial limb to the skeleton, nerves and muscles,” says Dr Max Ortiz Catalan. “In addition, we are including direct neural sensory feedback in the prosthetic arm so the patient can intuitively feel with it.”

Today Magnus can feel varying levels of pressure in his artificial hand, something which is necessary to instinctively grip an object firmly enough. He is unique in the world in having a permanent sensory connection between the prosthesis and his nervous system, working outside laboratory conditions. Work is now under way to add more types of sensations.

At the Cybathlon he will be competing for the Swedish team, which is formed by Chalmers University of Technology, Sahlgrenska University Hospital and the company Integrum AB.

The competition has a separate discipline for arm prostheses. In this discipline Magnus has to complete a course made up of six different stations at which the prosthesis will be put to the test. For example, he has to open a can with a can opener, load a tray with crockery and open a door with the tray in his hand. The events at the Cybathlon are designed to be spectator-friendly while being based on various operations that the participants have to cope with in their daily lives.

“However, the competition will not really show the unique advantages of our technology, such as the sense of touch and the bone-anchored attachment which makes the prosthesis comfortable enough to wear all day,” says Max Ortiz Catalan.

Magnus is the only participant with an amputation above the elbow. This naturally makes the competition more difficult for him than for the others, who have a natural elbow joint.

“From a competitive perspective Cybathlon is far from ideal to demonstrate clinically viable technology,” says Max Ortiz Catalan. “But it is a major and important event in the human-machine interface field in which we would like to showcase our technology. Unlike several of the other participants, Magnus will compete in the event using the same technology he uses in his everyday life.”

Facts about Cybathlon
•    The very first Cybathlon is being organised by the Swiss university ETH Zürich.
•    The €5 million event will take place in Zürich’s 7,600-spectator ice hockey stadium, the Swiss Arena.
•    74 participants are competing for 59 different teams from 25 countries around the world. In total, the teams consist of about 300 scientists, engineers, support staff and competitors.
•    The teams range from small ad hoc teams to the world’s largest manufacturers of advanced prostheses.
•    The majority of the teams are groups from research labs and many of the prostheses have come straight out of the lab.
•    Unlike the Olympics and Paralympics, the Cybathlon participants are not athletes but ordinary people with various disabilities. The aims of the competition are to establish a dialogue between academia and industry, to facilitate discussion between technology developers and people with disabilities and to promote the use of robotic assistive aids to the general public.
•    Cybathlon will return in 2020, as a seven-day event in Tokyo, to coincide with the Olympics.

Facts about the Swedish team
The Opra Osseointegration team is a multidisciplinary team comprising technical and medical partners. The team is led by Dr Max Ortiz Catalan, assistant professor at Chalmers University of Technology, who has been in charge of developing the technology in close collaboration with Dr Rickard Brånemark, who is a surgeon at Sahlgrenska University Hospital and an associate professor at Gothenburg University. Dr Brånemark led the team performing the implantation of the device. Integrum AB, a Swedish company, complements the team as the pioneering provider of bone-anchored limb prostheses.
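For anyone wondering what the ‘direct neural sensory feedback’ Ortiz Catalan mentions might look like in software terms, here’s a purely illustrative sketch in Python. It is not the team’s control code, and every threshold and range in it is an invented placeholder; the idea is simply that a grip-force reading from the prosthetic hand is mapped onto a nerve-stimulation intensity so that a firmer grip ‘feels’ stronger.

```python
# Purely illustrative sketch; this is NOT the Swedish team's control code.
# The idea behind direct sensory feedback: a grip-force reading from the
# prosthetic hand is mapped onto a nerve-stimulation intensity so that a
# firmer grip "feels" stronger. All thresholds and ranges are invented.

SENSOR_MAX_N = 20.0  # assumed full-scale reading of the fingertip force sensor (newtons)
STIM_MIN_HZ = 10.0   # assumed stimulation frequency at the threshold of perception
STIM_MAX_HZ = 100.0  # assumed stimulation frequency at the maximum comfortable level

def stimulation_frequency_hz(grip_force_n: float) -> float:
    """Map a grip force to a nerve-stimulation pulse frequency (Hz)."""
    if grip_force_n <= 0:
        return 0.0                                 # no contact, no stimulation
    fraction = min(grip_force_n / SENSOR_MAX_N, 1.0)
    return STIM_MIN_HZ + fraction * (STIM_MAX_HZ - STIM_MIN_HZ)

for force_n in (0.0, 2.0, 10.0, 25.0):
    print(f"{force_n:4.1f} N -> {stimulation_frequency_hz(force_n):5.1f} Hz")
```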

This video gives you an idea of what’s in store on Oct. 8, 2016,