Category Archives: human enhancement

A perovskite memristor with three stable resistive states

Thanks to Dexter Johnson’s Oct. 22, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), I’ve found information about a second multi-state memristor, this one with three stable resistive states (the first, a three-terminal device, is mentioned in my April 10, 2015 posting). From Dexter’s posting (Note: Links have been removed),

Now researchers at ETH Zurich have designed a memristor device out of perovskite just 5 nanometres thick that has three stable resistive states, which means it can encode data as 0,1 and 2, or a “trit” as opposed to a “bit.”

The research, which was published in the journal ACS Nano, produced model devices that have two competing nonvolatile resistive switching processes. Either switching process can be triggered selectively, depending on the effective switching voltage and the time for which it is applied to the device.

“Our component could therefore also be useful for a new type of IT (Information Technology) that is not based on binary logic, but on a logic that provides for information located ‘between’ the 0 and 1,” said Jennifer Rupp, professor in the Department of Materials at ETH Zurich, in a press release. “This has interesting implications for what is referred to as fuzzy logic, which seeks to incorporate a form of uncertainty into the processing of digital information. You could describe it as less rigid computing.”
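For anyone wondering what a ‘trit’ actually buys you, here’s a rough Python sketch (mine, not anything from the paper) of base-3 encoding and the extra information density a three-state cell offers,

```python
import math

def to_trits(n, width):
    """Represent a non-negative integer in base 3, one 'trit' per cell."""
    trits = []
    for _ in range(width):
        n, t = divmod(n, 3)
        trits.append(t)
    return list(reversed(trits))

# A three-state cell stores log2(3) ~ 1.585 bits of information, so
# eight trits span 3**8 = 6561 values versus 2**8 = 256 for eight bits.
print(to_trits(42, 8))   # 42 written in base 3, padded to 8 cells
print(math.log2(3))      # bits of information per trit
```

The point is simply that the same number of physical cells holds more states; whether ternary logic is worth the extra engineering is exactly the open question Rupp raises.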

An Oct. 19, 2015 Swiss National Science Foundation press release provides context for the research,

Two IT giants, Intel and HP, have entered a race to produce a commercial version of memristors, a new electronics component that could one day replace the flash memory used in USB memory sticks, SD cards and SSD hard drives. “Basically, memristors require less energy since they work at lower voltages,” explains Jennifer Rupp, professor in the Department of Materials at ETH Zurich and holder of an SNSF professorship grant. “They can be made much smaller than today’s memory modules, and therefore offer much greater density. This means they can store more megabytes of information per square millimetre.” But currently memristors are only at the prototype stage. [emphasis mine]

There is a memristor-based product on the market, as I noted in a Sept. 10, 2015 posting, although that may not be the type of memristive device Rupp seems to be discussing. (Should you have problems accessing the Swiss National Science Foundation press release, you can find a lightly edited version [a brief, two-sentence history of the memristor has been left out] here on Azonano.)

Jacopo Prisco wrote for CNN online in a March 2, 2015 article about memristors and Rupp’s work (Note: A link has been removed),

Simply put, the memristor could mean the end of electronics as we know it and the beginning of a new era called “ionics”.

The transistor, developed in 1947, is the main component of computer chips. It functions using a flow of electrons, whereas the memristor couples the electrons with ions, or electrically charged atoms.

In a transistor, once the flow of electrons is interrupted by, say, cutting the power, all information is lost. But a memristor can remember the amount of charge that was flowing through it, and much like a memory stick it will retain the data even when the power is turned off.

This can pave the way for computers that will instantly turn on and off like a light bulb and never lose data: the RAM, or memory, will no longer be erased when the machine is turned off, without the need to save anything to hard drives as with current technology.

Jennifer Rupp is a Professor of electrochemical materials at ETH Zurich, and she’s working with IBM to build a memristor-based machine.

Memristors, she points out, function in a way that is similar to a human brain: “Unlike a transistor, which is based on binary codes, a memristor can have multi-levels. You could have several states, let’s say zero, one half, one quarter, one third, and so on, and that gives us a very powerful new perspective on how our computers may develop in the future,” she told CNN’s Nick Glass.

Prisco also provides an update about HP’s memristor-based product,

After manufacturing the first ever memristor, Hewlett Packard has been working for years on a new type of computer based on the technology. According to plans, it will launch by 2020.

Simply called “The Machine”, it uses “electrons for processing, photons for communication, and ions for storage.”

I first wrote about HP’s The Machine in a June 25, 2014 posting (scroll down about 40% of the way).

There are many academic teams researching memristors including a team at Northwestern University. I highlighted their announcement of a three-terminal version in an April 10, 2015 posting. While Rupp’s team achieved its effect with a perovskite substrate, the Northwestern team used a molybdenum disulfide (MoS2) substrate.

For anyone wanting to read the latest research from ETH, here’s a link to and a citation for the paper,

Uncovering Two Competing Switching Mechanisms for Epitaxial and Ultrathin Strontium Titanate-Based Resistive Switching Bits by Markus Kubicek, Rafael Schmitt, Felix Messerschmitt, and Jennifer L. M. Rupp. ACS Nano, Article ASAP. DOI: 10.1021/acsnano.5b02752. Publication Date (Web): October 8, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Finally, should you find the commercialization aspects of the memristor story interesting, there’s a June 6, 2015 posting in which Knowm CEO (chief executive officer) Alex Nugent waxes eloquent on HP Labs’ ‘memristor problem’ (Note: A link has been removed),

Today I read something that did not surprise me. HP has said that their memristor technology will be replaced by traditional DRAM memory for use in “The Machine”. This is not surprising for those of us who have been in the field since before HP’s memristor marketing engine first revved up in 2008. While I have to admit the miscommunication between HP’s research and business development departments is starting to get really old, I do understand the problem, or at least part of it.

There are two ways to develop memristors. The first way is to force them to behave as you want them to behave. Most memristors that I have seen do not behave like fast, binary, non-volatile, deterministic switches. This is a problem because this is how HP wants them to behave. Consequently a perception has been created that memristors are for non-volatile fast memory. HP wants a drop-in replacement for standard memory because this is a large and established market. Makes sense of course, but it’s not the whole story on memristors.

Memristors exhibit a huge range of amazing phenomena. Some are very fast to switch but operate probabilistically. Others can be changed a little bit at a time and are ideal for learning. Still others have capacitance (with memory), or act as batteries. I’ve even seen some devices that can be programmed to be a capacitor or a resistor or a memristor. (Seriously).

Nugent, whether you agree with him or not, provides some fascinating insight. In the excerpt I’ve included here, he seems to provide confirmation that it’s possible to state ‘there are no memristors on the market’ and ‘there are memristors on the market’ because different devices are being called memristors.

US White House’s grand computing challenge could mean a boost for research into artificial intelligence and brains

An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),

On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Here’s where the Oct. 20, 2015 posting, which originated the news item, by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog gets interesting,

 While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.

There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]

Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.

Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …

This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand into activities reliant on touch, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”
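The pressure-to-pulse scheme the Stanford release describes, where more pressure means more current and a faster train of Morse-code-like pulses, can be sketched in a few lines of Python. This is purely illustrative; the units, ranges, and linear mapping are my own inventions, not the researchers’,

```python
def pulse_frequency(pressure, p_max=100.0, f_max=200.0):
    """Map applied pressure to a pulse rate, clamped to the sensor's range.

    More pressure -> nanotubes squeezed closer -> more current -> faster
    pulses; zero pressure -> no pulses. All numbers here are invented
    for illustration, not taken from the paper.
    """
    pressure = max(0.0, min(pressure, p_max))
    return f_max * pressure / p_max

def pulse_train(pressure, duration=1.0):
    """Timestamps of evenly spaced pulses over `duration` seconds."""
    f = pulse_frequency(pressure)
    if f == 0:
        return []
    period = 1.0 / f
    return [i * period for i in range(int(f * duration))]

print(len(pulse_train(50.0)))  # half of maximum pressure -> 100 pulses/s
print(pulse_train(0.0))        # no pressure -> no pulses at all
```

The real mechanoreceptor circuit does this conversion in flexible organic electronics rather than software, but the input-output relationship is the same idea: pulse frequency encodes pressure.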

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

Fixed: The Science/Fiction of Human Enhancement

First the news, Fixed: The Science/Fiction of Human Enhancement is going to be broadcast on KCTS 9 (PBS [Public Broadcasting Service] station for Seattle/Yakima) on Wednesday, Aug. 26, 2015 at 7 pm PDT. From the KCTS 9 schedule,

From botox to bionic limbs, the human body is more “upgradeable” than ever. But how much of it can we alter and still be human? What do we gain or lose in the process? Award-winning documentary, Fixed: The Science/Fiction of Human Enhancement, explores the social impact of human biotechnologies. Haunting and humorous, poignant and political, Fixed rethinks “disability” and “normalcy” by exploring technologies that promise to change our bodies and minds forever.

This 2013 documentary has a predecessor titled ‘Fixed’, which I wrote about in an August 3, 2010 posting. The director for both ‘Fixeds’ is Regan Brashear.

It seems the latest version of Fixed builds on the themes present in the first, while integrating the latest scientific work (to 2013) in the field of human enhancement (from my August 3, 2010 posting),

As for the film, I found this at the University of California, Santa Cruz,

Fixed is a video documentary that explores the burgeoning field of “human enhancement” technologies from the perspective of individuals with disabilities. Fixed uses the current debates surrounding human enhancement technologies (i.e. bionic limbs, brain machine interfaces, prenatal screening technologies such as PGD or pre-implantation genetic diagnosis, etc.) to tackle larger questions about disability, inequality, and citizenship. This documentary asks the question, “Will these technologies ‘liberate’ humanity, or will they create even more inequality?”

You can find out more about the 2013 Fixed on its website or Facebook page (they list opportunities in the US, in Canada, and internationally to see the documentary). There is also a listing of PBS broadcasts available from the Fixed: The Science/Fiction of Human Enhancement Press page.

I recognized two names from the cast list on the Internet Movie Database (IMDB) page for Fixed: The Science/Fiction of Human Enhancement, Gregor Wolbring (he also appeared in the first ‘Fixed’) and Hugh Herr.

Gregor has been mentioned here a few times in connection with human enhancement. A Canadian professor at the University of Calgary, he’s active in the field of bioethics and you can find out more about Gregor and his work here.

Hugh Herr was first mentioned here in a January 30, 2013 posting titled: The ultimate DIY: ‘How to build a robotic man’ on BBC 4. He is a roboticist at the Massachusetts Institute of Technology (MIT).

The two men offer contrasting perspectives: Gregor Wolbring (‘we should re-examine the notion that some people are impaired and need to be fixed’) and Hugh Herr (‘we will eliminate all forms of impairment’). Hopefully, the 2013 documentary has managed to present more of the nuances than I have.

Brain-friendly interface to replace neural prosthetics one day?

This research will not find itself occupying anyone’s brain for some time to come but it is interesting to find out that neural prosthetics have some drawbacks and there is work being done to address them. From an Aug. 10, 2015 news item on Azonano,

Instead of using neural prosthetic devices–which suffer from immune-system rejection and are believed to fail due to a material and mechanical mismatch–a multi-institutional team, including Lohitash Karumbaiah of the University of Georgia’s Regenerative Bioscience Center, has developed a brain-friendly extracellular matrix environment of neuronal cells that contain very little foreign material. These by-design electrodes are shielded by a covering that the brain recognizes as part of its own composition.

An Aug. 5, 2015 University of Georgia news release, which originated the news item, describes the new approach and technique in more detail,

Although once believed to be devoid of immune cells and therefore of immune responses, the brain is now recognized to have its own immune system that protects it against foreign invaders.

“This is not by any means the device that you’re going to implant into a patient,” said Karumbaiah, an assistant professor of animal and dairy science in the UGA College of Agricultural and Environmental Sciences. “This is proof of concept that extracellular matrix can be used to ensheathe a functioning electrode without the use of any other foreign or synthetic materials.”

Implantable neural prosthetic devices in the brain have been around for almost two decades, helping people living with limb loss and spinal cord injury become more independent. However, not only do neural prosthetic devices suffer from immune-system rejection, but most are believed to eventually fail because of a mismatch between the soft brain tissue and the rigid devices.

The collaboration, led by Wen Shen and Mark Allen of the University of Pennsylvania, found that the extracellular matrix derived electrodes adapted to the mechanical properties of brain tissue and were capable of acquiring neural recordings from the brain cortex.

“Neural interface technology is literally mind boggling, considering that one might someday control a prosthetic limb with one’s own thoughts,” Karumbaiah said.

The study’s joint collaborators were Ravi Bellamkonda, who conceived the new approach and is chair of the Wallace H. Coulter Department of Biomedical Engineering at the Georgia Institute of Technology and Emory University, as well as Allen, who at the time was director of the Institute for Electronics and Nanotechnology.

“Hopefully, once we converge upon the nanofabrication techniques that would enable these to be clinically translational, this same methodology could then be applied in getting these extracellular matrix derived electrodes to be the next wave of brain implants,” Karumbaiah said.

Currently, one out of every 190 Americans is living with limb loss, according to the National Institutes of Health. There is a significant burden in cost of care and quality of life for people suffering from this disability.

The research team is one part of many in the prosthesis industry, which includes those who design the robotics for the artificial limbs, others who make the neural prosthetic devices and developers who design the software that decodes the neural signal.

“What neural prosthetic devices do is communicate seamlessly to an external prosthesis,” Karumbaiah said, “providing independence of function without having to have a person or a facility dedicated to their care.”

Karumbaiah hopes further collaboration will allow them to make positive changes in the industry, saying that, “it’s the researcher-to-industry kind of conversation that now needs to take place, where companies need to come in and ask: ‘What have you learned? How are the devices deficient, and how can we make them better?'”

Here’s a link to and a citation for the paper,

Extracellular matrix-based intracortical microelectrodes: Toward a microfabricated neural interface based on natural materials by Wen Shen, Lohitash Karumbaiah, Xi Liu, Tarun Saxena, Shuodan Chen, Radhika Patkar, Ravi V. Bellamkonda, & Mark G. Allen. Microsystems & Nanoengineering 1, Article number: 15010 (2015) doi:10.1038/micronano.2015.10

This appears to be an open access paper.

One final note, I have written frequently about prosthetics and neural prosthetics, which you can find by using either of those terms and/or human enhancement. Here’s my latest piece, a March 25, 2015 posting.

Clinical trial for bionic eye (artificial retinal implant) shows encouraging results (safety and efficacy)

The Argus II artificial retina was first mentioned here in a Feb. 15, 2013 posting (scroll down about 50% of the way) when it received US Food and Drug Administration (FDA) commercial approval. In retrospect that seems puzzling since the results of a three-year clinical trial have just been reported in a June 23, 2015 news item on ScienceDaily (Note: There was one piece of information about the approval which didn’t make its way into the information disseminated in 2013),

The three-year clinical trial results of the retinal implant popularly known as the “bionic eye,” have proven the long-term efficacy, safety and reliability of the device that restores vision in those blinded by a rare, degenerative eye disease. The findings show that the Argus II significantly improves visual function and quality of life for people blinded by retinitis pigmentosa. They are being published online in Ophthalmology, the journal of the American Academy of Ophthalmology.

A June 23, 2015 American Academy of Ophthalmology news release (also on EurekAlert), which originated the news item, describes the condition the Argus II is designed for and that crucial bit of FDA information,

Retinitis pigmentosa is an incurable disease that affects about 1 in 4,000 Americans and causes slow vision loss that eventually leads to blindness.[1] The Argus II system was designed to help provide patients who have lost their sight due to the disease with some useful vision. Through the device, patients with retinitis pigmentosa are able to see patterns of light that the brain learns to interpret as an image. The system uses a miniature video camera stored in the patient’s glasses to send visual information to a small computerized video processing unit which can be stored in a pocket. This computer turns the image to electronic signals that are sent wirelessly to an electronic device implanted on the retina, the layer of light-sensing cells lining the back of the eye.

The Argus II received Food and Drug Administration (FDA) approval as a Humanitarian Use Device (HUD) in 2013, which is an approval specifically for devices intended to benefit small populations and/or rare conditions. [emphasis mine]

I don’t recall seeing “Humanitarian Use Device (HUD)” in the 2013 materials which focused on the FDA’s commercial use approval. I gather from this experience that commercial use doesn’t necessarily mean they’ve finished with clinical trials and are ready to start selling the product. In any event, I will try to take a closer look at the actual approvals the next time, assuming I can make sense of the language.

After all the talk about it, here’s what the device looks like,

Caption: Figure A, The implanted portions of the Argus II System. Figure B, The external components of the Argus II System. Images in real time are captured by a camera mounted on the glasses. The video processing unit down-samples and processes the image, converting it to stimulation patterns. Data and power are sent via radiofrequency link from the transmitter antenna on the glasses to the receiver antenna around the eye. A removable, rechargeable battery powers the system. Credit: Photo courtesy of Second Sight Medical Products, Inc.

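The down-sampling step in the caption, turning a camera frame into a per-electrode stimulation pattern, can be roughly sketched in Python. The 6 x 10 grid reflects the Argus II’s 60-electrode array; everything else (simple block averaging, grayscale input) is my own simplification, not Second Sight’s actual processing,

```python
def downsample_to_electrodes(frame, rows=6, cols=10):
    """Average a grayscale frame (list of lists, values 0-255) into a
    coarse rows x cols grid of stimulation intensities, one per electrode.

    The 6x10 grid mirrors the Argus II's 60-electrode array; the block
    averaging itself is a simplification for illustration.
    """
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # The block of pixels that maps onto this electrode.
            ys = range(r * h // rows, (r + 1) * h // rows)
            xs = range(c * w // cols, (c + 1) * w // cols)
            total = sum(frame[y][x] for y in ys for x in xs)
            row.append(total // (len(ys) * len(xs)))
        grid.append(row)
    return grid

# A 60x100 frame, bright on the left half and dark on the right.
frame = [[255] * 50 + [0] * 50 for _ in range(60)]
pattern = downsample_to_electrodes(frame)
print(pattern[0])  # left electrodes fully driven, right ones off
```

Sixty electrodes give a very coarse picture, which is why patients see “patterns of light” the brain has to learn to interpret rather than anything resembling a photograph.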

The news release offers more details about the recently completed clinical trial,

To further evaluate the safety, reliability and benefit of the device, a clinical trial of 30 people, aged 28 to 77, was conducted in the United States and Europe. All of the study participants had little or no light perception in both eyes. The researchers conducted visual function tests using both a computer screen and real-world conditions, including finding and touching a door and identifying and following a line on the ground. A Functional Low-vision Observer Rated Assessment (FLORA) was also performed by independent visual rehabilitation experts at the request of the FDA to assess the impact of the Argus II system on the subjects’ everyday lives, including extensive interviews and tasks performed around the home.

The visual function results indicated that up to 89 percent of the subjects performed significantly better with the device. The FLORA found that among the subjects, 80 percent received benefit from the system when considering both functional vision and patient-reported quality of life, and no subjects were affected negatively.

After one year, two-thirds of the subjects had not experienced device- or surgery-related serious adverse events. After three years, there were no device failures. Throughout the three years, 11 subjects experienced serious adverse events, most of which occurred soon after implantation and were successfully treated. One of these treatments, however, was to remove the device due to recurring erosion after the suture tab on the device became damaged.

“This study shows that the Argus II system is a viable treatment option for people profoundly blind due to retinitis pigmentosa – one that can make a meaningful difference in their lives and provides a benefit that can last over time,” said Allen C. Ho, M.D., lead author of the study and director of the clinical retina research unit at Wills Eye Hospital. “I look forward to future studies with this technology which may make possible expansion of the intended use of the device, including treatment for other diseases and eye injuries.”

Here’s a link to a PDF of and a citation for the paper,

Long-Term Results from an Epiretinal Prosthesis to Restore Sight to the Blind by Allen C. Ho, Mark S. Humayun, Jessy D. Dorn, Lyndon da Cruz, Gislin Dagnelie, James Handa, Pierre-Olivier Barale, José-Alain Sahel, Paulo E. Stanga, Farhad Hafezi, Avinoam B. Safran, Joel Salzmann, Arturo Santos, David Birch, Rand Spencer, Artur V. Cideciyan, Eugene de Juan, Jacque L. Duncan, Dean Eliott, Amani Fawzi, Lisa C. Olmos de Koo, Gary C. Brown, Julia A. Haller, Carl D. Regillo, Lucian V. Del Priore, Aries Arditi, Duane R. Geruschat, Robert J. Greenberg. Ophthalmology, June 2015

This paper is open access.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would, theoretically, allow computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’, which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whetted Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and applies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market involving almost anything that involves a microprocessor.

You can find out more about the company, BrainChip here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense or repeated through feedback, and this directly parallels the way the brain learns.
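BrainChip has not published implementation details, but the STDP rule the quoted copy describes can be sketched in a few lines: a synapse is strengthened when an input spike arrives shortly before the neuron fires, and weakened when it arrives shortly after. The function name and all constants below are illustrative, not BrainChip's actual hardware rule.

```python
import math

# Illustrative STDP (spike-timing-dependent plasticity) weight update.
# The constants are invented for demonstration purposes only.
A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, in milliseconds

def stdp_delta(pre_spike_ms, post_spike_ms):
    """Weight change for one pre/post spike pair."""
    dt = post_spike_ms - pre_spike_ms
    if dt > 0:   # input preceded output: strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU)
    else:        # input followed output: weaken the synapse
        return -A_MINUS * math.exp(dt / TAU)

# A pre-spike 5 ms before the post-spike potentiates; 5 ms after depresses.
print(stdp_delta(0.0, 5.0) > 0)   # True
print(stdp_delta(5.0, 0.0) < 0)   # True
```

The exponential decay captures the "timing" in spike-timing-dependent plasticity: the closer the two spikes, the larger the weight change, which is how repeated, correlated input forms associations.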

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation and compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
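The "static summation and compare function" mentioned above is the classic artificial neuron: weighted inputs are summed and the total is compared against a threshold. A minimal sketch (values illustrative) shows just how little of a biological neuron survives this simplification:

```python
def summation_neuron(inputs, weights, threshold):
    """Classic ANN neuron: a weighted sum compared against a threshold.
    This is the static simplification the passage contrasts with spiking
    neurons: no spike timing, no internal state, one number in, one out."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs with weight 0.6 each; fires only when the sum reaches 1.0.
print(summation_neuron([1, 1], [0.6, 0.6], 1.0))  # 1 (fires)
print(summation_neuron([1, 0], [0.6, 0.6], 1.0))  # 0 (does not fire)
```

Everything a spiking model adds, such as spike trains and the timing-dependent learning sketched earlier, is absent here, which is the computational limitation the quoted copy is pointing at.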

The Problem with Artificial Neural Networks

Standard ANNs running on computer hardware are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time, and because the neurons are processed one after another, the delays add up, resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel, so the performance of the network does not depend on its size, providing a clear speed advantage. Because there is no decline in performance with network size, and learning takes place in parallel within each synapse, STDP learning is very fast.
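The scaling claim in the two paragraphs above can be illustrated with a toy cost model: a sequential software network pays a per-neuron cost on every network step, while fully parallel hardware pays one fixed cost regardless of size. The function names and the nanosecond figures are invented for illustration.

```python
def sequential_step_cost(n_neurons, cost_per_neuron_ns=100):
    """Software ANN: neurons are updated one after another, so the time
    for one network step grows linearly with network size."""
    return n_neurons * cost_per_neuron_ns

def parallel_step_cost(n_neurons, hardware_cycle_ns=100):
    """Fully parallel hardware: every neuron updates in the same cycle,
    so step time is constant no matter how large the network is."""
    return hardware_cycle_ns

print(sequential_step_cost(10_000))  # 1000000 ns: grows with size
print(parallel_step_cost(10_000))    # 100 ns: flat
```

Doubling the sequential network doubles its step time, while the parallel cost stays flat, which is the "no decline in performance with network size" being claimed.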

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training take the place of programming and coding, like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, Software Neural Networks perform slower with increasing size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and have agreed to jointly develop projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.


BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.

Magnetic sensitivity under the microscope

Humans do not have the sense of magnetoreception (the ability to detect magnetic fields) unless they’ve been enhanced. On the other hand, species of fish, insects, birds, and some mammals (other than human) possess the sense naturally. Scientists at the University of Tokyo (Japan) have developed a microscope capable of observing magnetoreception according to a June 4, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers at the University of Tokyo have succeeded in developing a new microscope capable of observing the magnetic sensitivity of photochemical reactions believed to be responsible for the ability of some animals to navigate in the Earth’s magnetic field, on a scale small enough to follow these reactions taking place inside sub-cellular structures (Angewandte Chemie International Edition, “Optical Absorption and Magnetic Field Effect Based Imaging of Transient Radicals”).

A June 4, 2015 University of Tokyo news release on EurekAlert, which originated the news item, describes the research in more detail,

Several species of insects, fish, birds and mammals are believed to be able to detect magnetic fields – an ability known as magnetoreception. For example, birds are able to sense the Earth’s magnetic field and use it to help navigate when migrating. Recent research suggests that a group of proteins called cryptochromes and particularly the molecule flavin adenine dinucleotide (FAD) that forms part of the cryptochrome, are implicated in magnetoreception. When cryptochromes absorb blue light, they can form what are known as radical pairs. The magnetic field around the cryptochromes determines the spins of these radical pairs, altering their reactivity. However, to date there has been no way to measure the effect of magnetic fields on radical pairs in living cells.

The research group of Associate Professor Jonathan Woodward at the Graduate School of Arts and Sciences specializes in radical pair chemistry and in investigating the magnetic sensitivity of biological systems. In this latest research, PhD student Lewis Antill made measurements using a special microscope to detect radical pairs formed from FAD, and the influence of very weak magnetic fields on their reactivity, in volumes less than 4 millionths of a billionth of a liter (4 femtoliters). This was possible using a technique the group developed called TOAD (transient optical absorption detection) imaging, employing a microscope built by postdoctoral research associate Dr. Joshua Beardmore based on a design by Beardmore and Woodward.

“In the future, using another mode of the new microscope called MIM (magnetic intensity modulation), also introduced in this work, it may be possible to directly image only the magnetically sensitive regions of living cells,” says Woodward. “The new imaging microscope developed in this research will enable the study of the magnetic sensitivity of photochemical reactions in a variety of important biological and other contexts, and hopefully help to unlock the secrets of animals’ miraculous magnetic sense.”

Here’s a link to and a citation for the paper,

Optical Absorption and Magnetic Field Effect Based Imaging of Transient Radicals by Dr. Joshua P. Beardmore, Lewis M. Antill, and Prof. Jonathan R. Woodward. Angewandte Chemie International Edition DOI: 10.1002/anie.201502591 Article first published online: 3 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

I mentioned human enhancement earlier with regard to magnetoreception. There are people (body hackers) who’ve had implants that give them this extra sense. Dann Berg, in a March 21, 2012 post on his website blog, describes why he implanted a magnet into his finger and his experience with it (at that time, three years and counting),

I quickly learned that magnetic surfaces provided almost no sensation at all. Rather, it was movement that caused my finger to perk up. Things like power cord transformers, microwaves, and laptop fans became interactive in a whole new way. Each object has its own unique field, with different strength and “texture.” I started holding my finger over almost everything that I could, getting a feeling for each object’s invisible reach.

Portable electronics proved to be an experience as well. There were two fairly large electronic items that hit the shelves around the same time as I got my implant: the first iPad and the Kindle 2.

Something to consider,

Courtesy: (Dann Berg)

Gray Matters volume 2: Topics at the Intersection of Neuroscience, Ethics, and Society issued March 2015 by US Presidential Bioethics Commission

The second and final volume in the Gray Matters set (from the US Presidential Commission for the Study of Bioethical Issues, produced in response to a request from President Barack Obama regarding the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative) has just been released.

The formal title of the latest volume is Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society, volume two. The first was titled Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society, volume one.

According to volume 2 of the report’s executive summary,

… In its first volume on neuroscience and ethics, Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society, the Bioethics Commission emphasized the importance of integrating ethics and neuroscience throughout the research endeavor.1 This second volume, Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society, takes an in-depth look at three topics at the intersection of neuroscience and society that have captured the public’s attention.

The Bioethics Commission found widespread agreement that contemporary neuroscience holds great promise for relieving human suffering from a number of devastating neurological disorders. Less agreement exists on multiple other topics, and the Bioethics Commission focused on three cauldrons of controversy—cognitive enhancement, consent capacity, and neuroscience and the legal system. These topics illustrate the ethical tensions and societal implications of advancing neuroscience and technology, and bring into heightened relief many important ethical considerations.

A March 26, 2015 post by David Bruggeman on his Pasco Phronesis blog further describes the 168 pp. second volume of the report,

There are fourteen main recommendations in the report:

Prioritize Existing Strategies to Maintain and Improve Neural Health

Continue to examine and develop existing tools and techniques for brain health

Prioritize Treatment of Neurological Disorders

As with the previous recommendation, it would be valuable to focus on existing means of addressing neurological disorders and working to improve them.

Study Novel Neural Modifiers to Augment or Enhance Neural Function

Existing research in this area is limited and inconclusive.

Ensure Equitable Access to Novel Neural Modifiers to Augment or Enhance Neural Function

Access to cognitive enhancements will need to be handled carefully to avoid exacerbating societal inequities (think the stratified societies of the film Elysium or the Star Trek episode “The Cloud Minders“).

Create Guidance About the Use of Neural Modifiers

Professional societies and expert groups need to develop guidance for health care providers that receive requests for prescriptions for cognitive enhancements (something like an off-label use of attention deficit drugs, beta blockers or other medicines to boost cognition rather than address perceived deficits).

If you don’t have time to look at the 2nd volume, David’s post covers many of the important points.

Think of your skin as a smartphone

A March 5, 2015 news item on Azonano highlights work on flexible, transparent electronics designed to adhere to your skin,

Someone wearing a smartwatch can look at a calendar or receive e-mails without having to reach further than their wrist. However, the interaction area offered by the watch face is both fixed and small, making it difficult to actually hit individual buttons with adequate precision. A method currently being developed by a team of computer scientists from Saarbrücken in collaboration with researchers from Carnegie Mellon University in the USA may provide a solution to this problem. They have developed touch-sensitive stickers made from flexible silicone and electrically conducting sensors that can be worn on the skin.

Here’s what the sticker looks like,

Caption: The stickers are skin-friendly and are attached to the skin with a biocompatible, medical-grade adhesive. Credit: Oliver Dietze

Caption: The stickers are skin-friendly and are attached to the skin with a biocompatible, medical-grade adhesive. Credit: Oliver Dietze Courtesy: Saarland University

A March 4, 2015 University of Saarland press release on EurekAlert, which originated the news item, expands on the theme on connecting technology to the body,

… The stickers can act as an input space that receives and executes commands and thus controls mobile devices. Depending on the type of skin sticker used, applying pressure to the sticker could, for example, answer an incoming call or adjust the volume of a music player. ‘The stickers allow us to enlarge the input space accessible to the user as they can be attached practically anywhere on the body,’ explains Martin Weigel, a PhD student in the team led by Jürgen Steimle at the Cluster of Excellence at Saarland University. The ‘iSkin’ approach enables the human body to become more closely connected to technology. [emphasis mine]

Users can also design their iSkin patches on a computer beforehand to suit their individual tastes. ‘A simple graphics program is all you need,’ says Weigel. One sticker, for instance, is based on musical notation, another is circular in shape like an LP. The silicone used to fabricate the sensor patches makes them flexible and stretchable. ‘This makes them easier to use in an everyday environment. The music player can simply be rolled up and put in a pocket,’ explains Jürgen Steimle, who heads the ‘Embodied Interaction Group’ in which Weigel is doing his research. ‘They are also skin-friendly, as they are attached to the skin with a biocompatible, medical-grade adhesive. Users can therefore decide where they want to position the sensor patch and how long they want to wear it.’

In addition to controlling music or phone calls, the iSkin technology could be used for many other applications. For example, a keyboard sticker could be used to type and send messages. Currently the sensor stickers are connected via cable to a computer system. According to Steimle, in-built microchips may in future allow the skin-worn sensor patches to communicate wirelessly with other mobile devices.

The publication about ‘iSkin’ won the ‘Best Paper Award’ at the SIGCHI conference, which ranks among the most important conferences within the research area of human computer interaction. The researchers will present their project at the SIGCHI conference in April [2015] in Seoul, Korea, and beforehand at the computer expo Cebit, which takes place from the 16th until the 20th of March [2015] in Hannover (hall 9, booth E13).

Hopefully, you’ll have a chance to catch the researchers’ presentation at the SIGCHI or Cebit events.

That quote about enabling “the human body to become more closely connected to technology” reminds me of a tag (machine/flesh) I created to categorize research of this nature. I explained the idea being explored in a May 9, 2012 posting titled: Everything becomes part machine,

Machine/flesh. That’s what I’ve taken to calling this process of integrating machinery into our and, as I newly realized, other animals’ flesh.

I think my most recent previous post on this topic was a Jan. 10, 2014 post titled: Chemistry of Cyborgs: review of the state of the art by German researchers.