Tag Archives: neurons

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics; this time they’ve focused on wearable technology, according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearer’s bodies, offering more comfort and a more consistent contact with skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with flexible solar powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting, the team is working on a synthetic ‘brainy’ skin, for which they have just received £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together: wearable technology for health monitoring, more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, Ravinder Dahiya. Nano Energy, Volume 51, September 2018, Pages 604-612. DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
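
To make that scheme a little more concrete, here’s a minimal sketch (my own illustration in Python, not the team’s code) of the idea as described above: several imperfect devices together represent one synaptic weight, and each update touches only one of them, chosen here by a simple round-robin counter. The device noise model and the selection rule are assumptions for illustration only.

```python
import random

class MultiMemristiveSynapse:
    """One synapse represented by several imperfect 'memristive' devices.

    Each device holds a conductance in [0, 1]; the synaptic weight is the
    average of the device conductances. Updates are deliberately noisy and
    saturating to mimic non-deterministic, non-linear nanoscale devices.
    """

    def __init__(self, n_devices=3):
        self.g = [0.5] * n_devices          # device conductances
        self.counter = 0                    # round-robin device selector

    def weight(self):
        return sum(self.g) / len(self.g)

    def update(self, delta):
        """Apply a weight update to ONE device only, as in the scheme above."""
        i = self.counter
        self.counter = (self.counter + 1) % len(self.g)
        noisy_delta = delta * random.uniform(0.5, 1.5)       # cycle-to-cycle noise
        headroom = 1.0 - self.g[i] if delta > 0 else self.g[i]
        self.g[i] = min(1.0, max(0.0, self.g[i] + noisy_delta * headroom))

# Averaging over several devices smooths out single-device variability:
syn = MultiMemristiveSynapse(n_devices=3)
for _ in range(20):
    syn.update(+0.1)            # repeated potentiation steps
print(round(syn.weight(), 3))   # weight has moved up despite the device noise
```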

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

The paper also has a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …
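
The ‘compute in place’ idea in that excerpt comes down to Ohm’s and Kirchhoff’s laws: if the weights of a network layer are stored as conductances in a crossbar array, applying the input voltages to the rows produces column currents that are already the weighted sums the layer needs. A toy illustration (my own sketch, not anything from the paper):

```python
import numpy as np

# Synaptic weights stored as device conductances G (toy values).
# In a crossbar, input voltages V on the rows produce column currents
# I = G^T V (Kirchhoff's current law sums the per-device currents), so the
# matrix-vector product is computed "in place" inside the memory array.
G = np.array([[1.0, 0.2],
              [0.5, 0.9],
              [0.1, 0.7]])      # 3 inputs x 2 outputs
V = np.array([0.3, 0.8, 0.1])   # input voltages encoding the activations

I = G.T @ V                     # column currents = weighted sums
print(I)                        # [0.71 0.85]
```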

It gets more complicated from there.

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time are currently out of reach.” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”
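
For readers who have never seen the kind of workload NEST and SpiNNaker are being compared on, here is a deliberately tiny sketch of a leaky integrate-and-fire network written in plain Python. It is not NEST code and it is nowhere near the full-scale cortical microcircuit model; it just shows the integrate-membrane-voltage / emit-spike / deliver-spike loop that both the software and the hardware have to repeat billions of times. All the numbers are made up for illustration.

```python
import random

# Toy leaky integrate-and-fire (LIF) network: the kind of spiking dynamics
# NEST simulates in software and SpiNNaker runs on its ARM cores.
N, DT, STEPS = 100, 0.1, 1000        # neurons, time step (ms), steps
TAU, V_TH, V_RESET = 10.0, 1.0, 0.0  # membrane time constant, threshold, reset
W, DRIVE = 0.12, 0.02                # synaptic weight and constant background drive

v = [0.0] * N                        # membrane potentials
spikes_last_step = []

for t in range(STEPS):
    incoming = W * len(spikes_last_step) / N     # input from last step's spikes
    spikes = []
    for i in range(N):
        noise = random.gauss(0.0, 0.05)
        v[i] += DT * (-v[i] / TAU) + DRIVE + incoming + noise   # leaky integration
        if v[i] >= V_TH:                         # threshold crossing -> spike
            spikes.append(i)
            v[i] = V_RESET
    spikes_last_step = spikes

print("spikes in final step:", len(spikes_last_step))
```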

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry (Note: Links have been removed),

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

Neurons and graphene carpets

I don’t entirely grasp the carpet analogy. Actually, I have no idea why they used a carpet analogy, but here’s the June 12, 2018 ScienceDaily news item about the research,

A work led by SISSA [Scuola Internazionale Superiore di Studi Avanzati] and published on Nature Nanotechnology reports for the first time experimentally the phenomenon of ion ‘trapping’ by graphene carpets and its effect on the communication between neurons. The researchers have observed an increase in the activity of nerve cells grown on a single layer of graphene. Combining theoretical and experimental approaches they have shown that the phenomenon is due to the ability of the material to ‘trap’ several ions present in the surrounding environment on its surface, modulating its composition. Graphene is the thinnest bi-dimensional material available today, characterised by incredible properties of conductivity, flexibility and transparency. Although there are great expectations for its applications in the biomedical field, only very few works have analysed its interactions with neuronal tissue.

A June 12, 2018 SISSA press release (also on EurekAlert), which originated the news item, provides more detail,

A study conducted by SISSA – Scuola Internazionale Superiore di Studi Avanzati, in association with the University of Antwerp (Belgium), the University of Trieste and the Institute of Science and Technology of Barcelona (Spain), has analysed the behaviour of neurons grown on a single layer of graphene, observing a strengthening in their activity. Through theoretical and experimental approaches the researchers have shown that such behaviour is due to reduced ion mobility, in particular of potassium, to the neuron-graphene interface. This phenomenon is commonly called ‘ion trapping’, already known at theoretical level, but observed experimentally for the first time only now. “It is as if graphene behaves as an ultra-thin magnet on whose surface some of the potassium ions present in the extra cellular solution between the cells and the graphene remain trapped. It is this small variation that determines the increase in neuronal excitability” comments Denis Scaini, researcher at SISSA who has led the research alongside Laura Ballerini.

The study has also shown that this strengthening occurs when the graphene itself is supported by an insulator, like glass, or suspended in solution, while it disappears when lying on a conductor. “Graphene is a highly conductive material which could potentially be used to coat any surface. Understanding how its behaviour varies according to the substratum on which it is laid is essential for its future applications, above all in the neurological field” continues Scaini, “considering the unique properties of graphene it is natural to think for example about the development of innovative electrodes of cerebral stimulation or visual devices”.

It is a study with a double outcome. Laura Ballerini comments as follows: “This ‘ion trap’ effect was described only in theory. Studying the impact of the ‘technology of materials’ on biological systems, we have documented a mechanism to regulate membrane excitability, but at the same time we have also experimentally described a property of the material through the biology of neurons.”

Dexter Johnson, in a June 13, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), provides more context for the work (Note: Links have been removed),

While graphene has been tapped to deliver on everything from electronics to optoelectronics, it’s a bit harder to picture how it may offer a key tool for addressing neurological damage and disorders. But that’s exactly what researchers have been looking at lately because of the wonder material’s conductivity and transparency.

In the most recent development, a team from Europe has offered a deeper understanding of how graphene can be combined with neurological tissue and, in so doing, may have not only given us an additional tool for neurological medicine, but also provided a tool for gaining insights into other biological processes.

“The results demonstrate that, depending on how the interface with [single-layer graphene] is engineered, the material may tune neuronal activities by altering the ion mobility, in particular potassium, at the cell/substrate interface,” said Laura Ballerini, a researcher in neurons and nanomaterials at SISSA.

Ballerini provided some context for this most recent development by explaining that graphene-based nanomaterials have come to represent potential tools in neurology and neurosurgery.

“These materials are increasingly engineered as components of a variety of applications such as biosensors, interfaces, or drug-delivery platforms,” said Ballerini. “In particular, in neural electrode or interfaces, a precise requirement is the stable device/neuronal electrical coupling, which requires governing the interactions between the electrode surface and the cell membrane.”

This neuro-electrode hybrid is at the core of numerous studies, she explained, and graphene, thanks to its electrical properties, transparency, and flexibility represents an ideal material candidate.

In all of this work, the real challenge has been to investigate the ability of a single atomic layer to tune neuronal excitability and to demonstrate unequivocally that graphene selectively modifies membrane-associated neuronal functions.

I encourage you to read Dexter’s posting as it clarifies the work described in the SISSA press release for those of us (me) who may fail to grasp the implications.

Here’s a link to and a citation for the paper,

Single-layer graphene modulates neuronal communication and augments membrane ion currents by Niccolò Paolo Pampaloni, Martin Lottner, Michele Giugliano, Alessia Matruglio, Francesco D’Amico, Maurizio Prato, Josè Antonio Garrido, Laura Ballerini, & Denis Scaini. Nature Nanotechnology (2018) DOI: https://doi.org/10.1038/s41565-018-0163-6 Published online June 13, 2018

This paper is behind a paywall.

All this brings to mind a prediction made about the Graphene Flagship and the Human Brain Project shortly after the European Commission announced in January 2013 that each project had won funding of 1B Euros to be paid out over a period of 10 years. The prediction was that scientists would work on graphene/human brain research.

Transparent graphene electrode technology and complex brain imaging

Michael Berger has written a May 24, 2018 Nanowerk Spotlight article about some of the latest research on transparent graphene electrode technology and the brain (Note: A link has been removed),

In new work, scientists from the labs of Kuzum [Duygu Kuzum, an Assistant Professor of Electrical and Computer Engineering at the University of California, San Diego {UCSD}] and Anna Devor report a transparent graphene microelectrode neural implant that eliminates light-induced artifacts to enable crosstalk-free integration of 2-photon microscopy, optogenetic stimulation, and cortical recordings in the same in vivo experiment. The new class of transparent brain implant is based on monolayer graphene. It offers a practical pathway to investigate neuronal activity over multiple spatial scales extending from single neurons to large neuronal populations.

Conventional metal-based microelectrodes cannot be used for simultaneous measurements of multiple optical and electrical parameters, which are essential for comprehensive investigation of brain function across spatio-temporal scales. Since they are opaque, they block the field of view of the microscopes and generate optical shadows impeding imaging.

More importantly, they cause light induced artifacts in electrical recordings, which can significantly interfere with neural signals. Transparent graphene electrode technology presented in this paper addresses these problems and allow seamless and crosstalk-free integration of optical and electrical sensing and manipulation technologies.

In their work, the scientists demonstrate that by careful design of key steps in the fabrication process for transparent graphene electrodes, the light-induced artifact problem can be mitigated and virtually artifact-free local field potential (LFP) recordings can be achieved within operating light intensities.

“Optical transparency of graphene enables seamless integration of imaging, optogenetic stimulation and electrical recording of brain activity in the same experiment with animal models,” Kuzum explains. “Different from conventional implants based on metal electrodes, graphene-based electrodes do not generate any electrical artifacts upon interacting with light used for imaging or optogenetics. That enables crosstalk free integration of three modalities: imaging, stimulation and recording to investigate brain activity over multiple spatial scales extending from single neurons to large populations of neurons in the same experiment.”

The team’s new fabrication process avoids any crack formation in the transfer process, resulting in a 95-100% yield for the electrode arrays. This fabrication quality is important for expanding this technology to high-density large area transparent arrays to monitor brain-scale cortical activity in large animal models or humans.

“Our technology is also well-suited for neurovascular and neurometabolic studies, providing a ‘gold standard’ neuronal correlate for optical measurements of vascular, hemodynamic, and metabolic activity,” Kuzum points out. “It will find application in multiple areas, advancing our understanding of how microscopic neural activity at the cellular scale translates into macroscopic activity of large neuron populations.”

“Combining optical techniques with electrical recordings using graphene electrodes will allow to connect the large body of neuroscience knowledge obtained from animal models to human studies mainly relying on electrophysiological recordings of brain-scale activity,” she adds.

Next steps for the team involve employing this technology to investigate coupling and information transfer between different brain regions.

This work is part of the US BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative and there’s more than one team working with transparent graphene electrodes. John Hewitt in an Oct. 21, 2014 posting on ExtremeTech describes two other teams’ work (Note: Links have been removed),

The solution [to the problems with metal electrodes], now emerging from multiple labs throughout the universe is to build flexible, transparent electrode arrays from graphene. Two studies in the latest issue of Nature Communications, one from the University of Wisconsin-Madison and the other from Penn [University of Pennsylvania], describe how to build these devices.

The University of Wisconsin researchers are either a little bit smarter or just a little bit richer, because they published their work open access. It’s a no-brainer then that we will focus on their methods first, and also in more detail. To make the arrays, these guys first deposited the parylene (polymer) substrate on a silicon wafer, metalized it with gold, and then patterned it with an electron beam to create small contact pads. The magic was to then apply four stacked single-atom-thick graphene layers using a wet transfer technique. These layers were then protected with a silicon dioxide layer, another parylene layer, and finally molded into brain signal recording goodness with reactive ion etching.

The researchers went with four graphene layers because that provided optimal mechanical integrity and conductivity while maintaining sufficient transparency. They tested the device in opto-enhanced mice whose neurons expressed proteins that react to blue light. When they hit the neurons with a laser fired in through the implant, the protein channels opened and fired the cell beneath. The masterstroke that remained was then to successfully record the electrical signals from this firing, sit back, and wait for the Nobel prize office to call.

The Penn State group [Note: Every researcher mentioned in the paper Hewitt linked to is from the University of Pennsylvania] used a similar 16-spot electrode array (pictured above right), and proceeded — we presume — in much the same fashion. Their angle was to perform high-resolution optical imaging, in particular calcium imaging, right out through the transparent electrode arrays which simultaneously recorded in high-temporal-resolution signals. They did this in slices of the hippocampus where they could bring to bear the complex and multifarious hardware needed to perform confocal and two-photon microscopy. These latter techniques provide a boost in spatial resolution by zeroing in over narrow planes inside the specimen, and limiting the background by the requirement of two photons to generate an optical signal. We should mention that there are voltage sensitive dyes available, in addition to standard calcium dyes, which can almost record the fastest single spikes, but electrical recording still reigns supreme for speed.

What a mouse looks like with an optogenetics system plugged in

One concern of both groups in making these kinds of simultaneous electro-optic measurements was the generation of light-induced artifacts in the electrical recordings. This potential complication, called the Becquerel photovoltaic effect, has been known to exist since it was first demonstrated back in 1839. When light hits a conventional metal electrode, a photoelectrochemical (or more simply, a photovoltaic) effect occurs. If present in these recordings, the different signals could be difficult to disambiguate. The Penn researchers reported that they saw no significant artifact, while the Wisconsin researchers saw some small effects with their device. In particular, when compared with platinum electrodes put into the opposite side cortical hemisphere, the Wisconsin researchers found that the artifact from graphene was similar to that obtained from platinum electrodes.

Here’s a link to and a citation for the latest research from UCSD,

Deep 2-photon imaging and artifact-free optogenetics through transparent graphene microelectrode arrays by Martin Thunemann, Yichen Lu, Xin Liu, Kıvılcım Kılıç, Michèle Desjardins, Matthieu Vandenberghe, Sanaz Sadegh, Payam A. Saisan, Qun Cheng, Kimberly L. Weldy, Hongming Lyu, Srdjan Djurovic, Ole A. Andreassen, Anders M. Dale, Anna Devor, & Duygu Kuzum. Nature Communications volume 9, Article number: 2035 (2018) doi:10.1038/s41467-018-04457-5 Published: 23 May 2018

This paper is open access.

You can find out more about the US BRAIN initiative here and, if you’re curious, you can find out more about the project at UCSD here. Duygu Kuzum (now at UCSD) was at the University of Pennsylvania in 2014 and participated in the work mentioned in Hewitt’s 2014 posting.

Injectable bandages for internal bleeding and hydrogel for the brain

This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),

While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).

A May 22, 2018 US National Institute of Biomedical Imaging and Bioengineering (NIBIB) news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”

Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.

In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”

The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.

When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.

“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”

The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.

The paper was published back in April 2018 and there was an April 2, 2018 Texas A&M University news release on EurekAlert making the announcement (and providing a few unique details),

A penetrating injury from shrapnel is a serious obstacle in overcoming battlefield wounds that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need to quickly self-administer materials that prevent fatality due to excessive blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, injectable gelatin is obtained. The charged characteristics of clay-based nanoparticles provide hemostatic ability to the hydrogels. Specifically, plasma protein and platelets form blood adsorption on the gel surface and trigger a blood clotting cascade.

“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound,” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of nanoparticles enabled electrostatic interactions with therapeutics thus resulting in the slow release of therapeutics.”

Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black), form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound-healing in emergency situations. Credit: Lokhande, et al.

Here’s a link to and a citation for the paper,

Nanoengineered injectable hydrogels for wound healing application by Giriraj Lokhande, James K. Carrow, Teena Thakur, Janet R. Xavier, Madasamy Parani, Kayla J. Bayless, Akhilesh K. Gaharwar. Acta Biomaterialia, Volume 70, 1 April 2018, Pages 35-47. DOI: https://doi.org/10.1016/j.actbio.2018.01.045

This paper is behind a paywall.

Hydrogel and the brain

It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),

In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.

After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.

Remarkable stuff.

The roles mathematics and light play in cellular communication

These are two entirely different types of research but taken together they help build a picture about how the cells in our bodies function.

Cells and light

An April 30, 2018 news item on phys.org describes work on controlling biology with light,

Over the past five years, University of Chicago chemist Bozhi Tian has been figuring out how to control biology with light.

A longterm science goal is devices to serve as the interface between researcher and body—both as a way to understand how cells talk among each other and within themselves, and eventually, as a treatment for brain or nervous system disorders [emphasis mine] by stimulating nerves to fire or limbs to move. Silicon—a versatile, biocompatible material used in both solar panels and surgical implants—is a natural choice.

In a paper published April 30 in Nature Biomedical Engineering, Tian’s team laid out a system of design principles for working with silicon to control biology at three levels—from individual organelles inside cells to tissues to entire limbs. The group has demonstrated each in cells or mice models, including the first time anyone has used light to control behavior without genetic modification.

“We want this to serve as a map, where you can decide which problem you would like to study and immediately find the right material and method to address it,” said Tian, an assistant professor in the Department of Chemistry.

Researchers built this thin layer of silicon lace to modulate neural signals when activated by light. Courtesy of Yuanwen Jiang and Bozhi Tian

An April 30, 2018 University of Chicago news release by Louise Lerner, which originated the news item, describes the work in greater detail,

The scientists’ map lays out best methods to craft silicon devices depending on both the intended task and the scale—ranging from inside a cell to a whole animal.

For example, to affect individual brain cells, silicon can be crafted to respond to light by emitting a tiny ionic current, which encourages neurons to fire. But in order to stimulate limbs, scientists need a system whose signals can travel farther and are stronger—such as a gold-coated silicon material in which light triggers a chemical reaction.

The mechanical properties of the implant are important, too. Say researchers would like to work with a larger piece of the brain, like the cortex, to control motor movement. The brain is a soft, squishy substance, so they’ll need a material that’s similarly soft and flexible, but can bind tightly against the surface. They’d want thin and lacy silicon, say the design principles.

The team favors this method because it doesn’t require genetic modification or a power supply wired in, since the silicon can be fashioned into what are essentially tiny solar panels. (Many other forms of monitoring or interacting with the brain need to have a power supply, and keeping a wire running into a patient is an infection risk.)

They tested the concept in mice and found they could stimulate limb movements by shining light on brain implants. Previous research tested the concept in neurons.

“We don’t have answers to a number of intrinsic questions about biology, such as whether individual mitochondria communicate remotely through bioelectric signals,” said Yuanwen Jiang, the first author on the paper, then a graduate student at UChicago and now a postdoctoral researcher at Stanford. “This set of tools could address such questions as well as pointing the way to potential solutions for nervous system disorders.”

Other UChicago authors were Assoc. Profs. Chin-Tu Chen and Chien-Min Kao, Asst. Prof. Xiaoyang Wu, postdoctoral researchers Jaeseok Yi, Yin Fang, Xiang Gao, Jiping Yue, Hsiu-Ming Tsai, Bing Liu and Yin Fang, graduate students Kelliann Koehler, Vishnu Nair, and Edward Sudzilovsky, and undergraduate student George Freyermuth.

Other researchers on the paper hailed from Northwestern University, the University of Illinois at Chicago and Hong Kong Polytechnic University.

The researchers have also made this video illustrating their work,

Tiny silicon nanowires (in blue), activated by light, trigger activity in neurons. (Courtesy Yuanwen Jiang and Bozhi Tian)

Here’s a link to and a citation for the paper,

Rational design of silicon structures for optically controlled multiscale biointerfaces by Yuanwen Jiang, Xiaojian Li, Bing Liu, Jaeseok Yi, Yin Fang, Fengyuan Shi, Xiang Gao, Edward Sudzilovsky, Ramya Parameswaran, Kelliann Koehler, Vishnu Nair, Jiping Yue, KuangHua Guo, Yin Fang, Hsiu-Ming Tsai, George Freyermuth, Raymond C. S. Wong, Chien-Min Kao, Chin-Tu Chen, Alan W. Nicholls, Xiaoyang Wu, Gordon M. G. Shepherd, & Bozhi Tian. Nature Biomedical Engineering (2018) doi:10.1038/s41551-018-0230-1 Published: 30 April 2018

This paper is behind a paywall.

Mathematics and how living cells ‘think’

This May 2, 2018 Queensland University of Technology (QUT; Australia) press release is also on EurekAlert,

How does the ‘brain’ of a living cell work, allowing an organism to function and thrive in changing and unfavourable environments?

Queensland University of Technology (QUT) researcher Dr Robyn Araujo has developed new mathematics to solve a longstanding mystery of how the incredibly complex biological networks within cells can adapt and reset themselves after exposure to a new stimulus.

Her findings, published in Nature Communications, provide a new level of understanding of cellular communication and cellular ‘cognition’, and have potential application in a variety of areas, including new targeted cancer therapies and drug resistance.

Dr Araujo, a lecturer in applied and computational mathematics in QUT’s Science and Engineering Faculty, said that while we know a great deal about gene sequences, we have had extremely limited insight into how the proteins encoded by these genes work together as an integrated network – until now.

“Proteins form unfathomably complex networks of chemical reactions that allow cells to communicate and to ‘think’ – essentially giving the cell a ‘cognitive’ ability, or a ‘brain’,” she said. “It has been a longstanding mystery in science how this cellular ‘brain’ works.

“We could never hope to measure the full complexity of cellular networks – the networks are simply too large and interconnected and their component proteins are too variable.

“But mathematics provides a tool that allows us to explore how these networks might be constructed in order to perform as they do.

“My research is giving us a new way to look at unravelling network complexity in nature.”

Dr Araujo’s work has focused on the widely observed function called perfect adaptation – the ability of a network to reset itself after it has been exposed to a new stimulus.

“An example of perfect adaptation is our sense of smell,” she said. “When exposed to an odour we will smell it initially but after a while it seems to us that the odour has disappeared, even though the chemical, the stimulus, is still present.

“Our sense of smell has exhibited perfect adaptation. This process allows it to remain sensitive to further changes in our environment so that we can detect both very faint and very strong odours.

“This kind of adaptation is essentially what takes place inside living cells all the time. Cells are exposed to signals – hormones, growth factors, and other chemicals – and their proteins will tend to react and respond initially, but then settle down to pre-stimulus levels of activity even though the stimulus is still there.

“I studied all the possible ways a network can be constructed and found that to be capable of this perfect adaptation in a robust way, a network has to satisfy an extremely rigid set of mathematical principles. There are a surprisingly limited number of ways a network could be constructed to perform perfect adaptation.
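
One well-known construction that satisfies this kind of rigid requirement is integral feedback, where a controller species accumulates the deviation of the output from its set point. The toy simulation below is my own illustration of the behaviour being described (not Dr Araujo’s model): the output responds to a step stimulus and then settles back to exactly its pre-stimulus level even though the stimulus stays on.

```python
# Toy integral-feedback motif showing perfect adaptation: the output y
# responds to a step in the stimulus u, then relaxes back to its set point
# even though u stays elevated (illustration only, not Dr Araujo's model).
DT, STEPS = 0.01, 6000
K_I, SETPOINT = 0.5, 1.0

y, z = SETPOINT, 0.0                      # output and the integrating species
history = []
for t in range(STEPS):
    u = 3.0 if t >= 1000 else 1.0         # stimulus steps up and stays up
    dy = u - z - y                        # stimulus drives y; z opposes it
    dz = K_I * (y - SETPOINT)             # z integrates the deviation from set point
    y += DT * dy
    z += DT * dz
    history.append(y)

print(f"pre-stimulus y: {history[999]:.3f}")   # 1.000
print(f"peak response:  {max(history):.3f}")   # transient rise after the step
print(f"final y:        {history[-1]:.3f}")    # back near 1.000 despite u = 3
```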

“Essentially we are now discovering the needles in the haystack in terms of the network constructions that can actually exist in nature.

“It is early days, but this opens the door to being able to modify cell networks with drugs and do it in a more robust and rigorous way. Cancer therapy is a potential area of application, and insights into how proteins work at a cellular level is key.”

Dr Araujo said the published study was the result of more than “five years of relentless effort to solve this incredibly deep mathematical problem”. She began research in this field while at George Mason University in Virginia in the US.

Her mentor at the university’s College of Science and co-author of the Nature Communications paper, Professor Lance Liotta, said the “amazing and surprising” outcome of Dr Araujo’s study is applicable to any living organism or biochemical network of any size.

“The study is a wonderful example of how mathematics can have a profound impact on society and Dr Araujo’s results will provide a set of completely fresh approaches for scientists in a variety of fields,” he said.

“For example, in strategies to overcome cancer drug resistance – why do tumours frequently adapt and grow back after treatment?

“It could also help understanding of how our hormone system, our immune defences, perfectly adapt to frequent challenges and keep us well, and it has future implications for creating new hypotheses about drug addiction and brain neuron signalling adaptation.”

Here’s a link to and a citation for the paper,

The topological requirements for robust perfect adaptation in networks of any size by Robyn P. Araujo & Lance A. Liotta. Nature Communications volume 9, Article number: 1757 (2018) doi:10.1038/s41467-018-04151-6 Published: 01 May 2018

This paper is open access.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chu (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.
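To put a rough number on “slightly larger,” the quick Python calculation below estimates the lattice constant of a silicon-germanium alloy with Vegard’s law, using textbook values for pure silicon (about 5.431 Å) and pure germanium (about 5.658 Å); the 50/50 composition chosen here is an arbitrary example, not the composition reported by the MIT team.

A_SI = 5.431  # silicon lattice constant in angstroms (textbook value)
A_GE = 5.658  # germanium lattice constant in angstroms (textbook value)

def sige_lattice_constant(x_ge):
    """Vegard's law: linearly interpolate between the Si and Ge lattice constants."""
    return (1 - x_ge) * A_SI + x_ge * A_GE

x = 0.5  # example germanium fraction, chosen arbitrarily for illustration
a_alloy = sige_lattice_constant(x)
mismatch_percent = 100 * (a_alloy - A_SI) / A_SI
print(f"Si(1-x)Ge(x), x = {x}: a = {a_alloy:.3f} angstroms, "
      f"mismatch vs. silicon = {mismatch_percent:.2f}%")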

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
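For anyone wondering how a figure like “4 percent variation” or “1 percent variation” is typically calculated, the short sketch below computes a coefficient of variation (standard deviation as a percentage of the mean) over a set of current readings; the numbers are invented for illustration and are not the team’s measurements.

import statistics

def percent_variation(currents):
    """Coefficient of variation: standard deviation as a percentage of the mean."""
    return 100 * statistics.stdev(currents) / statistics.mean(currents)

# Hypothetical current readings (arbitrary units), invented for illustration.
across_synapses = [10.2, 9.8, 10.4, 9.9, 10.1, 10.5, 9.7]  # different devices
across_cycles = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 9.9]   # one device, repeated

print(f"device-to-device variation: {percent_variation(across_synapses):.1f}%")
print(f"cycle-to-cycle variation: {percent_variation(across_cycles):.1f}%")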

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
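As a rough illustration of that kind of simulation, the sketch below trains a tiny three-layer network (two layers of “synaptic” weights) in plain NumPy on toy data, then perturbs the trained weights by a few percent to mimic device-to-device variation in an analog synapse. The architecture, noise level, and data are all assumptions of mine; this is not the team’s simulation code, and the real work used tens of thousands of handwriting samples rather than random toy inputs.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 500 random 64-dimensional "images" in two classes.
X = rng.normal(size=(500, 64))
y = (X[:, :32].sum(axis=1) > 0).astype(int)
Y = np.eye(2)[y]                        # one-hot labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three layers of "neurons" joined by two layers of synaptic weights.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

lr = 0.5
for _ in range(300):                    # plain batch gradient descent
    H = sigmoid(X @ W1)                 # hidden layer activations
    P = sigmoid(H @ W2)                 # output layer activations
    dP = (P - Y) * P * (1 - P)          # gradient at the output
    dH = (dP @ W2.T) * H * (1 - H)      # gradient at the hidden layer
    W2 -= lr * H.T @ dP / len(X)
    W1 -= lr * X.T @ dH / len(X)

def accuracy(w1, w2):
    predictions = sigmoid(sigmoid(X @ w1) @ w2).argmax(axis=1)
    return (predictions == y).mean()

print(f"accuracy with ideal weights: {accuracy(W1, W2):.1%}")

# Emulate imperfect analog hardware: give every stored weight a few percent
# of random spread, roughly in the spirit of device-to-device variation.
spread = 0.04                           # assumed 4% relative variation
W1_dev = W1 * (1 + rng.normal(scale=spread, size=W1.shape))
W2_dev = W2 * (1 + rng.normal(scale=spread, size=W2.shape))
print(f"accuracy with noisy weights: {accuracy(W1_dev, W2_dev):.1%}")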

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

A bioengineered robot hand with its own nervous system: machine/flesh and a job opening

A November 14, 2017 news item on phys.org announces a grant for a research project which will see engineered robot hands combined with regenerative medicine to imbue neuroprosthetic hands with the sense of touch,

The sense of touch is often taken for granted. For someone without a limb or hand, losing that sense of touch can be devastating. While highly sophisticated prostheses with complex moving fingers and joints are available to mimic almost every hand motion, they remain frustratingly difficult and unnatural for the user. This is largely because they lack the tactile experience that guides every movement. This void in sensation results in limited use or abandonment of these very expensive artificial devices. So why not make a prosthesis that can actually “feel” its environment?

That is exactly what an interdisciplinary team of scientists from Florida Atlantic University and the University of Utah School of Medicine aims to do. They are developing a first-of-its-kind bioengineered robotic hand that will grow and adapt to its environment. This “living” robot will have its own peripheral nervous system directly linking robotic sensors and actuators. FAU’s College of Engineering and Computer Science is leading the multidisciplinary team that has received a four-year, $1.3 million grant from the National Institute of Biomedical Imaging and Bioengineering of the [US] National Institutes of Health for a project titled “Virtual Neuroprosthesis: Restoring Autonomy to People Suffering from Neurotrauma.”

A November 14, 2017 Florida Atlantic University (FAU) news release by Gisele Galoustian, which originated the news item, goes into more detail,

With expertise in robotics, bioengineering, behavioral science, nerve regeneration, electrophysiology, microfluidic devices, and orthopedic surgery, the research team is creating a living pathway from the robot’s touch sensation to the user’s brain to help amputees control the robotic hand. A neuroprosthesis platform will enable them to explore how neurons and behavior can work together to regenerate the sensation of touch in an artificial limb.

At the core of this project is a cutting-edge robotic hand and arm developed in the BioRobotics Laboratory in FAU’s College of Engineering and Computer Science. Just like human fingertips, the robotic hand is equipped with numerous sensory receptors that respond to changes in the environment. Controlled by a human, it can sense pressure changes, interpret the information it is receiving and interact with various objects. It adjusts its grip based on an object’s weight or fragility. But the real challenge is figuring out how to send that information back to the brain using living residual neural pathways to replace those that have been damaged or destroyed by trauma.
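As a purely illustrative sketch of what that kind of closed-loop grip adjustment can look like in software, here is a simple proportional controller that nudges the commanded grip force until the measured fingertip pressure settles at a target chosen for the object (lower for fragile items, higher for heavy ones). The gains, limits, and sensor values are hypothetical; this is not the FAU hand’s actual control code.

def adjust_grip(measured_pressure, target_pressure, current_force,
                gain=0.5, max_force=20.0):
    """One step of a proportional grip controller (all values hypothetical)."""
    error = target_pressure - measured_pressure
    new_force = current_force + gain * error
    return max(0.0, min(max_force, new_force))  # clamp to actuator limits

# Example: holding a fragile object (low target pressure) while the sensor
# initially reports that we are squeezing too hard.
force = 5.0
for reading in [3.0, 2.4, 1.9, 1.6, 1.4]:       # made-up sensor readings
    force = adjust_grip(measured_pressure=reading, target_pressure=1.2,
                        current_force=force)
    print(round(force, 2))                      # commanded force eases off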

“When the peripheral nerve is cut or damaged, it uses the rich electrical activity that tactile receptors create to restore itself. We want to examine how the fingertip sensors can help damaged or severed nerves regenerate,” said Erik Engeberg, Ph.D., principal investigator, an associate professor in FAU’s Department of Ocean and Mechanical Engineering, and director of FAU’s BioRobotics Laboratory. “To accomplish this, we are going to directly connect these living nerves in vitro and then electrically stimulate them on a daily basis with sensors from the robotic hand to see how the nerves grow and regenerate while the hand is operated by limb-absent people.”

For the study, the neurons will not be kept in conventional petri dishes. Instead, they will be placed in biocompatible microfluidic chambers that provide a nurturing environment mimicking the basic function of living cells. Sarah E. Du, Ph.D., co-principal investigator, an assistant professor in FAU’s Department of Ocean and Mechanical Engineering, and an expert in the emerging field of microfluidics, has developed these tiny customized artificial chambers with embedded micro-electrodes. The research team will be able to stimulate the neurons with electrical impulses from the robot’s hand to help regrowth after injury. They will morphologically and electrically measure in real time how much neural tissue has been restored.

Jianning Wei, Ph.D., co-principal investigator, an associate professor of biomedical science in FAU’s Charles E. Schmidt College of Medicine, and an expert in neural damage and regeneration, will prepare the neurons in vitro, observe them grow and see how they fare and regenerate in the aftermath of injury. This “virtual” method will give the research team multiple opportunities to test and retest the nerves without any harm to subjects.

Using an electroencephalogram (EEG) to detect electrical activity in the brain, Emmanuelle Tognoli, Ph.D., co-principal investigator, associate research professor in FAU’s Center for Complex Systems and Brain Sciences in the Charles E. Schmidt College of Science, and an expert in electrophysiology and neural, behavioral, and cognitive sciences, will examine how the tactile information from the robotic sensors is passed onto the brain to distinguish scenarios with successful or unsuccessful functional restoration of the sense of touch. Her objective: to understand how behavior helps nerve regeneration and how this nerve regeneration helps the behavior.

Once the nerve impulses from the robot’s tactile sensors have gone through the microfluidic chamber, they are sent back to the human user manipulating the robotic hand. This is done with a special device that converts the signals coming from the microfluidic chambers into a controllable pressure at a cuff placed on the remaining portion of the amputated person’s arm. Users will know if they are squeezing the object too hard or if they are losing their grip.
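Here is a minimal sketch of what that last signal-to-pressure step could look like in software: a normalized signal coming out of the microfluidic chamber is mapped linearly onto a cuff pressure. The linear mapping, the signal range, and the pressure limits are all assumptions for illustration, not details of the actual device.

def signal_to_cuff_pressure(signal, signal_min=0.0, signal_max=1.0,
                            pressure_min=0.0, pressure_max=30.0):
    """Map a normalized nerve-signal amplitude onto a cuff pressure in kPa.
    The linear mapping and the numeric limits are assumed for illustration."""
    clamped = max(signal_min, min(signal_max, signal))
    fraction = (clamped - signal_min) / (signal_max - signal_min)
    return pressure_min + fraction * (pressure_max - pressure_min)

# A stronger grip signal produces a firmer squeeze at the arm cuff, warning
# the user that they may be gripping too hard or losing their grip entirely.
for s in (0.1, 0.5, 0.9):
    print(f"signal {s:.1f} -> cuff pressure {signal_to_cuff_pressure(s):.1f} kPa")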

Engeberg also is working with Douglas T. Hutchinson, M.D., co-principal investigator and a professor in the Department of Orthopedics at the University of Utah School of Medicine, who specializes in hand and orthopedic surgery. They are developing a set of tasks and behavioral neural indicators of performance that will ultimately reveal how to promote a healthy sensation of touch in amputees and limb-absent people using robotic devices. The research team also is seeking a post-doctoral researcher with multi-disciplinary experience to work on this breakthrough project.

Here’s more about the job opportunity from the FAU BioRobotics Laboratory job posting (I checked on January 30, 2018 and it seems applications are still being accepted),

Post-doctoral Opportunity

Dated Posted: Oct. 13, 2017

The BioRobotics Lab at Florida Atlantic University (FAU) invites applications for an NIH NIBIB-funded Postdoctoral position to develop a Virtual Neuroprosthesis aimed at providing a sense of touch in amputees and limb-absent people.

Candidates should have a Ph.D. in one of the following disciplines: mechanical engineering, electrical engineering, biomedical engineering, bioengineering, or a related field, with interest and/or experience in transdisciplinary work at the intersection of robotic hands, biology, and biomedical systems. Prior experience in the neural field will be considered an advantage, though not a necessity. Underrepresented minorities and women are warmly encouraged to apply.

The postdoctoral researcher will be co-advised across the department of Mechanical Engineering and the Center for Complex Systems & Brain Sciences through an interdisciplinary team whose expertise spans Robotics, Microfluidics, Behavioral and Clinical Neuroscience and Orthopedic Surgery.

The position will be for one year with a possibility of extension based on performance. Salary will be commensurate with experience and qualifications. Review of applications will begin immediately and continue until the position is filled.

The application should include:

  1. a cover letter with research interests and experiences,
  2. a CV, and
  3. names and contact information for three professional references.

Qualified candidates can contact Erik Engeberg, Ph.D., Associate Professor, in the FAU Department of Ocean and Mechanical Engineering at eengeberg@fau.edu. Please reference AcademicKeys.com in your cover letter when applying for or inquiring about this job announcement.

You can find the apply button on this page. Good luck!

Art in the details: A look at the role of art in science—a Sept. 19, 2017 Café Scientifique event in Vancouver, Canada

The Sept. 19, 2017 Café Scientifique event, “Art in the Details: A look at the role of art in science,” in Vancouver seems to be part of a larger neuroscience and the arts program at the University of British Columbia. First, the details about the Sept. 19, 2017 event from the eventful Vancouver webpage,

Café Scientifique – Art in the Details: A look at the role of art in science

Art in the Details: A look at the role of art in science

With so much beauty in the natural world, why does the misconception that art and science are vastly different persist? Join us for discussion and dessert as we hear from artists, researchers and academic professionals about the role art has played in scientific research – from the formative work of Santiago Ramón y Cajal to modern imaging, and beyond – and how it might help shape scientific understanding in the future.

September 19th, 2017, 7:00 – 9:00 pm (doors open at 6:45pm)
TELUS World of Science [also known as Science World], 1455 Quebec St., Vancouver, BC V6A 3Z7
Free Admission [emphasis mine]

Experts

Dr Carol-Ann Courneya
Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia

Dr Jason Snyder
Assistant Professor, Department of Psychology, University of British Columbia
http://snyderlab.com/

Dr Steven Barnes
Instructor and Assistant Head—Undergraduate Affairs, Department of Psychology, University of British Columbia
http://stevenjbarnes.com/

Moderated By

Bruce Claggett
Senior Managing Editor, NEWS 1130

This evening event is presented in collaboration with the Djavad Mowafaghian Centre for Brain Health. Please note: this is a private, adult-oriented event and TELUS World of Science will be closed during this discussion.

The Art in the Details event page on the Science World website provides a bit more information about the speakers (mostly in the form of links to their webpages),

Experts

Dr Carol-Ann Courneya
Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia

Dr Jason Snyder 

Assistant Professor, Department of Psychology, University of British Columbia

Dr Steven Barnes

Instructor, Department of Psychology, University of British Columbia

Moderated By  

Bruce Claggett

Senior Managing Editor, NEWS 1130

Should you click through to obtain tickets from either the eventful Vancouver or Science World websites, you’ll find the event is sold out, but perhaps the organizers will offer a waitlist.

Even if you can’t get a ticket, there’s an exhibition of Santiago Ramón y Cajal’s work (from the Djavad Mowafaghian Centre for Brain Health’s Beautiful Brain webpage),

Drawings of Santiago Ramón y Cajal to be shown at UBC

Pictured: Santiago Ramón y Cajal, injured Purkinje neurons, 1914, ink and pencil on paper. Courtesy of Instituto Cajal (CSIC).

The Beautiful Brain is the first North American museum exhibition to present the extraordinary drawings of Santiago Ramón y Cajal (1852–1934), a Spanish pathologist, histologist and neuroscientist renowned for his discovery of neuron cells and their structure, for which he was awarded the Nobel Prize in Physiology or Medicine in 1906. Known as the father of modern neuroscience, Cajal was also an exceptional artist. He combined scientific and artistic skills to produce arresting drawings with extraordinary scientific and aesthetic qualities.

A century after their completion, Cajal’s drawings are still used in contemporary medical publications to illustrate important neuroscience principles, and continue to fascinate artists and visual art audiences. Eighty of Cajal’s drawings will be accompanied by a selection of contemporary neuroscience visualizations by international scientists. The Morris and Helen Belkin Art Gallery exhibition will also include early 20th century works that imaged consciousness, including drawings from Annie Besant’s Thought Forms (1901) and Charles Leadbeater’s The Chakras (1927), as well as abstract works by Lawren Harris that explored his interest in spirituality and mysticism.

After countless hours at the microscope, Cajal was able to perceive that the brain was made up of individual nerve cells or neurons rather than a tangled single web, which was only decisively proven by electron microscopy in the 1950s and is the basis of neuroscience today. His speculative drawings stemmed from an understanding of aesthetics in their compressed detail and lucid composition, as he laboured to clearly represent matter and processes that could not be seen.

In a special collaboration with the Morris and Helen Belkin Art Gallery and the VGH & UBC Hospital Foundation this project will encourage meaningful dialogue amongst artists, curators, scientists and scholars on concepts of neuroplasticity and perception. Public and Academic programs will address the emerging field of art and neuroscience and engage interdisciplinary research of scholars from the sciences and humanities alike.

“This is an incredible opportunity for the neuroscience and visual arts communities at the University and Vancouver,” says Dr. Brian MacVicar, who has been working diligently with Director Scott Watson at the Morris and Helen Belkin Art Gallery and with his colleagues at the University of Minnesota for the past few years to bring this exhibition to campus. “Without Cajal’s impressive body of work, our understanding of the anatomy of the brain would not be so well-formed; Cajal’s legacy has been of critical importance to neuroscience teaching and research over the past century.”

A book published by Abrams accompanies the exhibition, containing full colour reproductions of all 80 of the exhibition drawings, commentary on each of the works and essays on Cajal’s life and scientific contributions, artistic roots and achievements and contemporary neuroscience imaging techniques.

Cajal’s work will be on display at the Morris and Helen Belkin Art Gallery from September 5 to December 3, 2017.

Join the UBC arts and neuroscience communities for a free symposium and dance performance celebrating The Beautiful Brain at UBC on September 7. [link removed]

The Beautiful Brain: The Drawings of Santiago Ramón y Cajal was developed by the Frederick R. Weisman Art Museum, University of Minnesota with the Instituto Cajal. The exhibition at the Morris and Helen Belkin Art Gallery, University of British Columbia is presented in partnership with the Djavad Mowafaghian Centre for Brain Health with support from the VGH & UBC Hospital Foundation. We gratefully acknowledge the generous support of the Canada Council for the Arts, the British Columbia Arts Council and Belkin Curator’s Forum members.

The Morris and Helen Belkin Art Gallery’s Beautiful Brain webpage has a listing of upcoming events associated with the exhibition as well as instructions on how to get there (if you click on About),

SEMINAR & READING GROUP: Plasticity at SFU Vancouver and 221A: Wednesdays, October 4, 18, November 1, 15 and 21 at 7 pm

CONVERSATION with Anthony Phillips and Timothy Taylor: Wednesday, October 11, 2017 at 7 pm

LECTURE with Catherine Malabou at the Liu Institute: Thursday, November 23 at 6 pm

CONCERT with UBC Contemporary Players: Friday, December 1 at 2 pm

Cajal was also an exceptional artist and studied as a teenager at the Academy of Arts in Huesca, Spain. He combined scientific and artistic skills to produce arresting drawings with extraordinary scientific and aesthetic qualities. A century after their completion, his drawings are still used in contemporary medical publications to illustrate important neuroscience principles, and continue to fascinate artists and visual art audiences. Eighty of Cajal’s drawings are accompanied by a selection of contemporary neuroscience visualizations by international scientists.

Organizationally, this seems a little higgledy-piggledy, with the Café Scientifique event found on some sites, the Belkin Gallery events found on one site, and no single listing of everything for the Beautiful Brain on any one site. Please let me know if you find something I’ve missed.

Carbon nanotubes to repair nerve fibres (cyborg brains?)

Can cyborg brains be far behind now that researchers are looking at ways to repair nerve fibers with carbon nanotubes (CNTs)? A June 26, 2017 news item on ScienceDaily describes the scheme using carbon nanotubes as a material for repairing nerve fibers,

Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices — consisting of biological tissue and synthetic material — planned to re-establish connections between nerve cells, for instance at spinal level, lost on account of lesions or trauma. This is the result of a piece of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.

More specifically, researchers have investigated the possible effects on neurons of the interaction with carbon nanotubes. Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process.

This result, which shows the extent to which the integration between nerve cells and these synthetic structures is stable and efficient, highlights the great potential of carbon nanotubes as innovative materials capable of facilitating neuronal regeneration or of creating a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has actually already begun.

The researchers have included a gorgeous image to illustrate their work,

Caption: Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. Credit: Pixabay

A June 26, 2017 SISSA press release (also on EurekAlert), which originated the news item, describes the work in more detail while explaining future research needs,

“Interface systems, or, more generally, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation,” explain Laura Ballerini (SISSA) and Maurizio Prato (UniTS-CIC BiomaGUNE), coordinating the research project. “The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries”. These nanomaterials are used both as scaffolds, a supportive framework for nerve cells, and as interfaces that release the signals enabling nerve cells to communicate with each other.

Many aspects, however, still need to be addressed. Among them, the impact on neuronal physiology of the integration of these nanometric structures with the cell membrane. “Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude”. Laura Ballerini explains: “If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable”. “This”, Maurizio Prato adds, “is precisely what we have investigated in this study where we used pure carbon nanotubes”.

The results of the research are extremely encouraging: “First of all we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important”.

There is more, however. The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes, thanks to this interaction, develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. “Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance”. Maurizio Prato and Laura Ballerini conclude as follows: “We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions”.

Here’s a link to and a citation for the paper,

Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces by Niccolò Paolo Pampaloni, Denis Scaini, Fabio Perissinotto, Susanna Bosi, Maurizio Prato, Laura Ballerini. Nanomedicine: Nanotechnology, Biology and Medicine, DOI: http://dx.doi.org/10.1016/j.nano.2017.01.020 Published online: May 25, 2017

This paper is open access.