Tag Archives: European Commission

Human Brain Project: update

The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each, to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)

At a little more than half-way through the project period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),

Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.

On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …

It’s been exactly 10 years. He did not succeed.

One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.

Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, criticism of this approach was widespread, and to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …

Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”

No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …

I’m happy to see the update. As I recall, there was murmuring almost immediately about the Human Brain Project (HBP). I never got details but it seemed that people were quite actively unhappy about the disbursements. Of course, this kind of uproar is not unusual when great sums of money are involved and the Graphene Flagship also had its rocky moments.

As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and other efforts (e.g. genetics) that are often claimed to be the ‘future of medicine’.

To be fair, Yong is focused on the brain simulation aspect of the HBP (and Markram’s efforts in the Blue Brain Project) but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.

After reading the article, I looked up Henry Markram’s Wikipedia entry and found this,

In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.[7][8]

On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.[9]

I also looked up the Human Brain Project and, speaking of their other efforts, was reminded that they have a neuromorphic computing platform, SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate the human brain’s ability to synthesize and process information in computing processors.
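Since the concept can feel abstract, here’s a minimal sketch (my own illustration, not HBP or SpiNNaker code) of the kind of simplified spiking-neuron model, a leaky integrate-and-fire neuron, that neuromorphic platforms implement in silicon; all parameter values are illustrative assumptions.

```python
# A leaky integrate-and-fire (LIF) neuron: the kind of simplified spiking
# model that neuromorphic hardware implements. All parameter values here
# are illustrative assumptions, not SpiNNaker or HBP specifics.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
    """Integrate the membrane voltage over time; record spikes at threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks toward rest while the input current charges it up.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:      # threshold crossed: spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# One second of simulated biological time with a constant 2 nA input.
spikes = simulate_lif([2e-9] * 1000)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```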

In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.

The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,

Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Neuromorphic Computing in the HBP

In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.

The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 Million neurons and 1 Billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.

The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs at real-time, BrainScaleS is implemented as an accelerated system and operates at 10,000 times real-time. Simulations at conventional supercomputers typically run factors of 1000 slower than biology and cannot access the vastly different timescales involved in learning and development ranging from milliseconds to years.

Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.
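To make those speed factors concrete, here’s a quick back-of-the-envelope calculation (mine, not an HBP benchmark) of the wall-clock time needed to simulate one day of biological time on each class of system:

```python
# Wall-clock time to simulate one day of biological time at the speed
# factors quoted above (a factor > 1 means faster than biology).
biological_seconds = 24 * 3600
systems = [("SpiNNaker (real-time)", 1.0),
           ("BrainScaleS (10,000x real-time)", 10_000.0),
           ("Conventional supercomputer (1,000x slower)", 0.001)]

for name, factor in systems:
    hours = biological_seconds / factor / 3600
    print(f"{name}: {hours:,.4f} hours")
# SpiNNaker: 24 hours; BrainScaleS: under 9 seconds; a conventional
# supercomputer: 24,000 hours, i.e. roughly 2.7 years.
```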

I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their effort, they were awarded the Nobel Prize for physics in 2010.)

Getting back to the HBP (and the Graphene Flagship for that matter), the funding should be drying up sometime around 2023 and I wonder if it will be possible to assess the impact.

Human lung enzyme can degrade graphene

Caption: A human lung enzyme can biodegrade graphene. Credit: Fotolia Courtesy: Graphene Flagship

The big European Commission research programme, Graphene Flagship, has announced some new work with widespread implications if graphene is to be used in biomedical implants. From an August 23, 2018 news item on ScienceDaily,

Myeloperoxidase — an enzyme naturally found in our lungs — can biodegrade pristine graphene, according to the latest discovery of Graphene Flagship partners in CNRS, University of Strasbourg (France), Karolinska Institute (Sweden) and University of Castilla-La Mancha (Spain). Among other projects, the Graphene Flagship designs graphene-based flexible biomedical electronic devices that will interface with the human body. Such applications require graphene to be biodegradable, so it can be expelled from the body.

An August 23, 2018 Graphene Flagship press release (mildly edited version on EurekAlert), which originated the news item, provides more detail,

To test how graphene behaves within the body, researchers analysed how it was broken down with the addition of a common human enzyme – myeloperoxidase or MPO. If a foreign body or bacteria is detected, neutrophils surround it and secrete MPO, thereby destroying the threat. Previous work by Graphene Flagship partners found that MPO could successfully biodegrade graphene oxide.

However, the structure of non-functionalized graphene was thought to be more resistant to degradation. To test this, the team looked at the effects of MPO ex vivo on two graphene forms: single- and few-layer.

Alberto Bianco, researcher at Graphene Flagship Partner CNRS, explains: “We used two forms of graphene, single- and few-layer, prepared by two different methods in water. They were then taken and put in contact with myeloperoxidase in the presence of hydrogen peroxide. This peroxidase was able to degrade and oxidise them. This was really unexpected, because we thought that non-functionalized graphene was more resistant than graphene oxide.”

Rajendra Kurapati, first author on the study and researcher at Graphene Flagship Partner CNRS, remarks how “the results emphasize that highly dispersible graphene could be degraded in the body by the action of neutrophils. This would open the new avenue for developing graphene-based materials.”

With successful ex-vivo testing, in-vivo testing is the next stage. Bengt Fadeel, professor at Graphene Flagship Partner Karolinska Institute believes that “understanding whether graphene is biodegradable or not is important for biomedical and other applications of this material. The fact that cells of the immune system are capable of handling graphene is very promising.”

Prof. Maurizio Prato, the Graphene Flagship leader for its Health and Environment Work Package said that “the enzymatic degradation of graphene is a very important topic, because in principle, graphene dispersed in the atmosphere could produce some harm. Instead, if there are microorganisms able to degrade graphene and related materials, the persistence of these materials in our environment will be strongly decreased. These types of studies are needed.” “What is also needed is to investigate the nature of degradation products,” adds Prato. “Once graphene is digested by enzymes, it could produce harmful derivatives. We need to know the structure of these derivatives and study their impact on health and environment,” he concludes.

Prof. Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and chair of its management panel added: “The report of a successful avenue for graphene biodegradation is a very important step forward to ensure the safe use of this material in applications. The Graphene Flagship has put the investigation of the health and environment effects of graphene at the centre of its programme since the start. These results strengthen our innovation and technology roadmap.”

Here’s a link to and a citation for the paper,

Degradation of Single‐Layer and Few‐Layer Graphene by Neutrophil Myeloperoxidase by Dr. Rajendra Kurapati, Dr. Sourav P. Mukherjee, Dr. Cristina Martín, Dr. George Bepete, Prof. Ester Vázquez, Dr. Alain Pénicaud, Prof. Dr. Bengt Fadeel, Dr. Alberto Bianco. Angewandte Chemie https://doi.org/10.1002/anie.201806906 First published: 13 July 2018

This paper is behind a paywall.

Artificial intelligence (AI) brings together International Telecommunication Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunication Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay (written for The Conversation and republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and even more likely so if many toxic substances are close by and harmless substances are far away. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
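The paragraphs above are essentially describing similarity-based (‘read-across’) prediction. Here’s a toy sketch of the idea — my own simplification, not the published RASAR implementation; the structural features and training set are invented for the example.

```python
# Toy read-across: predict toxicity from the most similar known chemicals,
# with similarity measured as the Jaccard index over binary structural
# features. A deliberately tiny illustration -- the real RASAR uses far
# larger fingerprints, databases and statistical machinery.

def jaccard(a: set, b: set) -> float:
    """Similarity of two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def predict_toxic(query: set, training: list, k: int = 3) -> bool:
    """Majority vote over the k nearest (most similar) training chemicals."""
    ranked = sorted(training, key=lambda pair: jaccard(query, pair[0]),
                    reverse=True)
    return sum(toxic for _, toxic in ranked[:k]) > k / 2

# Invented fingerprints: each chemical is a set of structural features.
training_data = [
    ({"aromatic_ring", "nitro_group"}, True),
    ({"aromatic_ring", "halogen"}, True),
    ({"nitro_group", "halogen"}, True),
    ({"hydroxyl", "alkyl_chain"}, False),
    ({"carboxyl", "alkyl_chain"}, False),
]
print(predict_toxic({"aromatic_ring", "nitro_group", "halogen"}, training_data))
# -> True: the query sits in a 'toxic' neighbourhood of the map
```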

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorisation and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers had not registered chemicals after 2013, and had instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Neurons and graphene carpets

I don’t entirely grasp the carpet analogy. Actually, I have no idea why they used a carpet analogy, but here’s the June 12, 2018 ScienceDaily news item about the research,

A work led by SISSA [Scuola Internazionale Superiore di Studi Avanzati] and published in Nature Nanotechnology reports, for the first time experimentally, the phenomenon of ion ‘trapping’ by graphene carpets and its effect on the communication between neurons. The researchers have observed an increase in the activity of nerve cells grown on a single layer of graphene. Combining theoretical and experimental approaches they have shown that the phenomenon is due to the ability of the material to ‘trap’ several ions present in the surrounding environment on its surface, modulating its composition. Graphene is the thinnest bi-dimensional material available today, characterised by incredible properties of conductivity, flexibility and transparency. Although there are great expectations for its applications in the biomedical field, only very few works have analysed its interactions with neuronal tissue.

A June 12, 2018 SISSA press release (also on EurekAlert), which originated the news item, provides more detail,

A study conducted by SISSA – Scuola Internazionale Superiore di Studi Avanzati, in association with the University of Antwerp (Belgium), the University of Trieste and the Institute of Science and Technology of Barcelona (Spain), has analysed the behaviour of neurons grown on a single layer of graphene, observing a strengthening in their activity. Through theoretical and experimental approaches the researchers have shown that such behaviour is due to reduced ion mobility, in particular of potassium, at the neuron–graphene interface. This phenomenon is commonly called ‘ion trapping’, already known at the theoretical level, but observed experimentally for the first time only now. “It is as if graphene behaves as an ultra-thin magnet on whose surface some of the potassium ions present in the extracellular solution between the cells and the graphene remain trapped. It is this small variation that determines the increase in neuronal excitability,” comments Denis Scaini, researcher at SISSA who has led the research alongside Laura Ballerini.

The study has also shown that this strengthening occurs when the graphene itself is supported by an insulator, like glass, or suspended in solution, while it disappears when lying on a conductor. “Graphene is a highly conductive material which could potentially be used to coat any surface. Understanding how its behaviour varies according to the substratum on which it is laid is essential for its future applications, above all in the neurological field” continues Scaini, “considering the unique properties of graphene it is natural to think for example about the development of innovative electrodes of cerebral stimulation or visual devices”.

It is a study with a double outcome. Laura Ballerini comments as follows: “This ‘ion trap’ effect was described only in theory. Studying the impact of the ‘technology of materials’ on biological systems, we have documented a mechanism to regulate membrane excitability, but at the same time we have also experimentally described a property of the material through the biology of neurons.”
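For context, here is the standard electrophysiology behind that claim (textbook background, not an equation from the paper): a neuron’s potassium reversal potential depends on the ratio of extracellular to intracellular K⁺ via the Nernst equation,

$$E_{K} = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^+]_{out}}{[\mathrm{K}^+]_{in}}$$

so even a small graphene-induced shift in extracellular potassium at the cell/substrate interface moves $E_K$ and, with it, the membrane’s excitability.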

Dexter Johnson in a June 13, 2018 posting, on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), provides more context for the work (Note: Links have been removed),

While graphene has been tapped to deliver on everything from electronics to optoelectronics, it’s a bit harder to picture how it may offer a key tool for addressing neurological damage and disorders. But that’s exactly what researchers have been looking at lately because of the wonder material’s conductivity and transparency.

In the most recent development, a team from Europe has offered a deeper understanding of how graphene can be combined with neurological tissue and, in so doing, may have not only given us an additional tool for neurological medicine, but also provided a tool for gaining insights into other biological processes.

“The results demonstrate that, depending on how the interface with [single-layer graphene] is engineered, the material may tune neuronal activities by altering the ion mobility, in particular potassium, at the cell/substrate interface,” said Laura Ballerini, a researcher in neurons and nanomaterials at SISSA.

Ballerini provided some context for this most recent development by explaining that graphene-based nanomaterials have come to represent potential tools in neurology and neurosurgery.

“These materials are increasingly engineered as components of a variety of applications such as biosensors, interfaces, or drug-delivery platforms,” said Ballerini. “In particular, in neural electrode or interfaces, a precise requirement is the stable device/neuronal electrical coupling, which requires governing the interactions between the electrode surface and the cell membrane.”

This neuro-electrode hybrid is at the core of numerous studies, she explained, and graphene, thanks to its electrical properties, transparency, and flexibility represents an ideal material candidate.

In all of this work, the real challenge has been to investigate the ability of a single atomic layer to tune neuronal excitability and to demonstrate unequivocally that graphene selectively modifies membrane-associated neuronal functions.

I encourage you to read Dexter’s posting as it clarifies the work described in the SISSA press release for those of us (me) who may fail to grasp the implications.

Here’s a link to and a citation for the paper,

Single-layer graphene modulates neuronal communication and augments membrane ion currents by Niccolò Paolo Pampaloni, Martin Lottner, Michele Giugliano, Alessia Matruglio, Francesco D’Amico, Maurizio Prato, Josè Antonio Garrido, Laura Ballerini, & Denis Scaini. Nature Nanotechnology (2018) DOI: https://doi.org/10.1038/s41565-018-0163-6 Published online June 13, 2018

This paper is behind a paywall.

All this brings to mind a prediction made about the Graphene Flagship and the Human Brain Project shortly after the European Commission announced in January 2013 that each project had won funding of 1B Euros to be paid out over a period of 10 years. The prediction was that scientists would work on graphene/human brain research.

Nanomaterials: the SUN (Sustainable Nanotechnologies) project sunsets, finally, and the Belgians amend their registry

Health, safety, and risks have been an important discussion where nanotechnology is concerned. The sense of urgency and concern has died down somewhat but scientists and regulators continue with their risk analysis.

SUN (Sustainable Nanotechnologies) project

Back in a December 7, 2016 posting I mentioned the Sustainable Nanotechnologies (SUN) project and its imminent demise in 2017. A February 26, 2018 news item on Nanowerk announces a tool developed by SUN scientists and intended for current use,

Over 100 scientists from 25 research institutions and industries in 12 different European Countries, coordinated by the group of professor Antonio Marcomini from Ca’ Foscari University of Venice, have completed one of the first attempts to understand the risks nanomaterials carry throughout their life-cycle, starting from their fabrication and ending in being discarded or recycled.

From nanoscale silver to titanium dioxide for air purification, the use of nanomaterials of high commercial relevance proves to have clear benefits as it attracts investments, while also raising concerns. ‘Nano’ sized materials (a nanometre is one millionth of a millimetre) could pose environmental and health risks under certain conditions. The uncertainties and insufficient scientific knowledge could slow down innovation and economic growth.

How do we evaluate these risks and take the appropriate preventative measures? The answer comes from the results of the Sustainable Nanotechnologies Project (SUN), which has been given 13 million euros of funding from the European Commission.

Courtesy: SUN Project

A February 26, 2018 Ca’ Foscari University of Venice press release describes some of the SUN project’s last initiatives, including SUNDS (https://sunds.gd/), the ‘Decision support system for risk management of engineered nanomaterials and nano-enabled products’,

After 3 years of research in laboratories and in contact with industrial partners, the scientists have processed, tested and made available an online platform (https://sunds.gd/) that supports industries and control and regulating institutions in evaluating potential risks that may arise for the production teams, for the consumers and for the environment.

The goal is to understand the extent to which these risks are sustainable, especially in relation to the traditional materials available, and to take the appropriate preventative measures. Additionally, this tool allows us to compare risk reduction costs with the benefits generated by this innovative product, while measuring its possible environmental impact.

Danail Hristozov, the project’s principal investigator from the Department of Environmental Sciences, Informatics and Statistics at Ca’ Foscari, commented: “The great amount of work done for developing and testing the methods and tools for evaluating and managing the risks posed by nanomaterials has not only generated an enormous amount of new scientific data and knowledge on the potential dangers of different types of nanomaterials, but has also resulted in key discoveries on the interactions between nanomaterials and biological or ecological systems and on their diffusion, on how they work and on their possible adverse consequences. These results, disseminated in over 140 research papers, have been immediately taken up by industries and regulators and will inevitably have great impact on developing safer and more sustainable nanotechnologies and on regulating their risks.”

The SUN project has also composed a guide for the safest products and processes, published on its website: www.sun.fp7.eu.

Studied Materials

Scientists have focused their research on specific materials and their use, in order to analyse the entire life cycle of the products. Two of the best-known were chosen: nanoscale silver, which is used in textiles, and multi-walled carbon nanotubes, which are used in marine coatings and automotive parts. Less-known materials that are of great relevance for their use were also included: car pigments and silica anticaking agents used by the food industry.

Lastly, SUN included nanomaterials of high commercial value which are extremely innovative: Nitrogen-doped Titanium Dioxide for air purification is a new product enabled by SUN and exploited by the large colour ceramics company Colorobbia. The copper-based coating and impregnation for wood protection has been re-oriented based on SUN safety assessment, and the Tungsten Carbide-based coatings for paper mills are marketed based on SUN results.

You can find out more about the SUN project here and about ‘SUNDS; Decision support system for risk management of engineered nanomaterials and nano-enabled products’ here.

Belgium’s nanomaterials register

A February 26, 2018 Nanowerk Spotlight article by Anthony Bochon has a rather acerbic take on Belgium’s efforts to regulate nanomaterials with a national register,

In Alice’s Adventures in Wonderland, the White Rabbit keeps saying “Oh dear! Oh dear! I shall be too late.” The same could have been said by the Belgian federal government when it adopted the Royal Decree of 22nd December 2017, published in the annexes of the Belgian Official Gazette of 15th January 2018 (“Amending Royal Decree”), whose main provisions retroactively enter into force on 31st December 2016. …

The Belgian federal government unnecessarily delayed the adoption of the Amending Royal Decree until December 2017 and published it only mid-January 2018. It creates legal uncertainty where it should have been avoided. The Belgian nanomaterials register (…) symbolizes a Belgian exceptionalism in the small world of national nanomaterials registers. Unlike France, Denmark and Sweden, Belgium decided from the very beginning to have three different deadlines for substances, mixtures and articles.

In an already fragmented regulatory landscape (with 4 EU Member States having their own national nanomaterials register and 24 EU Member States which do not have such registration requirements), the confusion around the deadline for the registration of mixtures in Belgium does not allow the addressees of the legal obligations to comply with them.

Even though failure to properly register substances – and now mixtures – within the Belgian nanomaterials register exposes the addressees of the obligation to criminal penalties, the function of the register remains purely informational.

The data collected through the registration was meant to be used to identify the presence of manufactured nanomaterials on the Belgian market, with the implicit objective of regulating the exposure of workers and consumers to these nanomaterials. The absence of entry into force of the provisions relating to the registration of articles is therefore incoherent and should question the relevance of the whole Belgian registration system.

Taking into account the author’s snarkiness, Belgium seems to have adopted (knowingly or unknowingly) a chaotic approach to registering nanomaterials. For anyone interested in the Belgian ‘nanoregister’, there’s this September 3, 2014 posting featuring another Anthony Bochon article on the topic and, for anyone interested in Bochon’s book, there’s this August 15, 2014 posting (Note: his book, ‘Nanotechnology Law & Guidelines: A Practical Guide for the Nanotechnology Industries in Europe’, seems to have been updated [there is a copyright date of 2019 in the bibliographic information on the publisher’s website]).

Wearable technology: two types of sensors, one from the University of Glasgow (Scotland) and the other from the University of British Columbia (Canada)

Sometimes it’s good to try and pull things together.

University of Glasgow and monitoring chronic conditions

A February 23, 2018 news item on phys.org describes the latest wearable tech from the University of Glasgow,

A new type of flexible, wearable sensor could help people with chronic conditions like diabetes avoid the discomfort of regular pin-prick blood tests by monitoring the chemical composition of their sweat instead.

In a new paper published in the journal Biosensors and Bioelectronics, a team of scientists from the University of Glasgow’s School of Engineering outline how they have built a stretchable, wireless system which is capable of measuring the pH level of users’ sweat.

A February 22, 2018 University of Glasgow press release, which originated the news item, expands on the theme,

Ravinder Dahiya. Courtesy: University of Glasgow

Sweat, like blood, contains chemicals generated in the human body, including glucose and urea. Monitoring the levels of those chemicals in sweat could help clinicians diagnose and monitor chronic conditions such as diabetes, kidney disease and some types of cancers without invasive tests which require blood to be drawn from patients.

However, non-invasive, wearable systems require consistent contact with skin to offer the highest-quality monitoring. Current systems are made from rigid materials, making it more difficult to ensure consistent contact, and other potential solutions such as adhesives can irritate skin. Wireless systems which use Bluetooth to transmit their information are also often bulky and power-hungry, requiring frequent recharging.

The University of Glasgow team’s new system is built around an inexpensively-produced sensor capable of measuring pH levels which can stretch and flex to better fit the contours of users’ bodies. Made from a graphite-polyurethane composite and measuring around a single square centimetre, it can stretch up to 53% in length without compromising performance. It will also continue to work after being subjected to flexes of 30% up to 500 times, which the researchers say will allow it to be used comfortably on human skin with minimal impact on the performance of the sensor.

The sensor can transmit its data wirelessly, and without external power, to an accompanying smartphone app called ‘SenseAble’, also developed by the team. The transmissions use near-field communication, a data transmission system found in many current smartphones which is used most often for smartphone payments like ApplePay, via a stretchable RFID antenna integrated into the system – another breakthrough innovation from the research team.

The smartphone app allows users to track pH levels in real time and was demonstrated in the lab using a chemical solution created by the researchers which mimics the composition of human sweat.

The research was led by Professor Ravinder Dahiya, head of the University of Glasgow’s School of Engineering’s Bendable Electronics and Sensing Technologies (BEST) group.

Professor Dahiya said: “Human sweat contains much of the same physiological information that blood does, and its use in diagnostic systems has the significant advantage of not needing to break the skin in order to administer tests.

“Now that we’ve demonstrated that our stretchable system can be used to monitor pH levels, we’ve already begun additional research to expand the capabilities of the sensor and make it a more complete diagnostic system. We’re planning to add sensors capable of measuring glucose, ammonia and urea, for example, and ultimately we’d like to see a system ready for market in the next few years.”

The team’s paper, titled ‘Stretchable Wireless System for Sweat pH Monitoring’, is published in Biosensors and Bioelectronics. The research was supported by funding from the European Commission and the Engineering and Physical Sciences Research Council (EPSRC).

Here’s a link to and a citation for the paper,

Stretchable wireless system for sweat pH monitoring by Wenting Dang, Libu Manjakkal, William Taube Navaraj, Leandro Lorenzelli, Vincenzo Vinciguerra. Biosensors and Bioelectronics Volume 107, 1 June 2018, Pages 192–202 [Available online February 2018] https://doi.org/10.1016/j.bios.2018.02.025

This paper is behind a paywall.

University of British Columbia (UBC Okanagan) and monitoring bio-signals

This is a completely different type of wearable tech monitor, from a February 22, 2018 UBC news release (also on EurekAlert) by Patty Wellborn (A link has been removed),

Creating the perfect wearable device to monitor muscle movement, heart rate and other tiny bio-signals without breaking the bank has inspired scientists to look for a simpler and more affordable tool.

Now, a team of researchers at UBC’s Okanagan campus have developed a practical way to monitor and interpret human motion, in what may be the missing piece of the puzzle when it comes to wearable technology.

What started as research to create an ultra-stretchable sensor transformed into a sophisticated inter-disciplinary project resulting in a smart wearable device that is capable of sensing and understanding complex human motion, explains School of Engineering Professor Homayoun Najjaran.

The sensor is made by infusing graphene nano-flakes (GNF) into a rubber-like adhesive pad. Najjaran says they then tested the durability of the tiny sensor by stretching it to see if it can maintain accuracy under strains of up to 350 per cent of its original state. The device went through more than 10,000 cycles of stretching and relaxing while maintaining its electrical stability.

“We tested this sensor vigorously,” says Najjaran. “Not only did it maintain its form but more importantly it retained its sensory functionality. We have further demonstrated the efficacy of GNF-Pad as a haptic technology in real-time applications by precisely replicating the human finger gestures using a three-joint robotic finger.”

The goal was to make something that could stretch, be flexible and a reasonable size, and have the required sensitivity, performance, production cost, and robustness. Unlike an inertial measurement unit—an electronic unit that measures force and movement and is used in most step-based wearable technologies—Najjaran says the sensors need to be sensitive enough to respond to different and complex body motions. That ranges from infinitesimal movements like a heartbeat or a twitch of a finger to large muscle movements from walking and running.
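As general background (my addition, not a figure from the UBC paper): the sensitivity of resistive strain sensors like this one is conventionally expressed as a gauge factor, the relative resistance change per unit strain,

$$GF = \frac{\Delta R / R_{0}}{\varepsilon}$$

where $R_0$ is the unstrained resistance and $\varepsilon$ the applied strain; maintaining ‘electrical stability’ over 10,000 stretch cycles means this relationship stays repeatable.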

School of Engineering Professor and study co-author Mina Hoorfar says their results may help manufacturers create the next level of health monitoring and biomedical devices.

“We have introduced an easy and highly repeatable fabrication method to create a highly sensitive sensor with outstanding mechanical and electrical properties at a very low cost,” says Hoorfar.

To demonstrate its practicality, researchers built three wearable devices including a knee band, a wristband and a glove. The wristband monitored heartbeats by sensing the pulse of the artery. In an entirely different range of motion, the finger and knee bands monitored finger gestures and larger scale muscle movements during walking, running, sitting down and standing up. The results, says Hoorfar, indicate an inexpensive device that has a high-level of sensitivity, selectivity and durability.

Hoorfar and Najjaran are both members of the Okanagan node of UBC’s STITCH (SmarT Innovations for Technology Connected Health) Institute that creates and investigates advanced wearable devices.

The research, partially funded by the Natural Sciences and Engineering Research Council, was recently published in the journal Sensors and Actuators A: Physical.

Here’s a link to and a citation for the paper,

Low-cost ultra-stretchable strain sensors for monitoring human motion and bio-signals by Seyed Reza Larimi, Hojatollah Rezaei Nejad, Michael Oyatsi, Allen O’Brien, Mina Hoorfar, Homayoun Najjaran. Sensors and Actuators A: Physical Volume 271, 1 March 2018, Pages 182-191 [Published online February 2018] https://doi.org/10.1016/j.sna.2018.01.028

This paper is behind a paywall.

Final comments

The term ‘wearable tech’ covers a lot of ground. In addition to sensors, there are materials that harvest energy, detect poisons, etc., making for a diverse field.

Europe’s cathedrals get a ‘lift’ with nanoparticles

That headline is a teensy bit laboured but I couldn’t resist the levels of wordplay available to me. They’re working on a cathedral close to the Leaning Tower of Pisa in this video about the latest in stone preservation in Europe.

*ETA August 7, 2019: Video reinserted today.*

I have covered the topic of preserving stone monuments before (most recently in my Oct. 21, 2014 posting). The action in this field seems to be taking place mostly in Europe, specifically Italy, although other countries are also quite involved.

Finally, getting to the European Commission’s latest stone monument preservation project, Nano-Cathedral, a Sept. 26, 2017 news item on Nanowerk announces the latest developments,

Just a few meters from Pisa’s famous Leaning Tower, restorers are defying scorching temperatures to bring back shine to the city’s Cathedral.

Ordinary restoration techniques like laser are being used on much of the stonework that dates back to the 11th century. But a brand new technique is also being used: a new material made of innovative nanoparticles. The aim is to consolidate the inner structure of the stones. It’s being applied mainly on marble.

A March 7, 2017 item on the Euro News website, which originated the Nanowerk news item, provides more detail,

“Marble has very low porosity, which means we have to use nanometric particles in order to go deep inside the stone, to ensure that the treatment is both efficient while still allowing the stone to breathe,” explains Roberto Cela, civil engineer at Opera Della Primaziale Pisana.

The material developed by the European research team includes calcium carbonate, which is formed from calcium oxide, water and carbon dioxide.

The nano-particles penetrate the stone, cementing its decaying structure.

“It is important that these particles have the same chemical nature as the stones that are being treated, so that the physical and mechanical processes that occur over time don’t lead to the break-up of the stones,” says Dario Paolucci, chemist at the University of Pisa.
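The chemistry behind those two quotes is presumably the classic lime cycle used in nanolime consolidants (standard textbook chemistry, not a formulation detail confirmed by the Nano-Cathedral consortium): calcium oxide slakes in water to calcium hydroxide, and the hydroxide then carbonates to calcium carbonate, the same mineral that makes up the marble,

$$\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} \qquad\qquad \mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}$$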

Vienna’s St Stephen’s is another of the five cathedrals where the new restoration materials are being tested.

The first challenge for researchers is to determine the mechanical characteristics of the cathedral’s stones. Since there are few original samples to work on, they had to figure out a way of “ageing” samples of stones of similar nature to those originally used.

“We tried different things: we tried freeze storage, we tried salts and acids, and we decided to go for thermal ageing,” explains Matea Ban, material scientist at the University of Technology in Vienna. “So what happens is that we heat the stone at certain temperatures. Minerals inside then expand in certain directions, and when they expand they build up stresses to neighbouring minerals and then they crack, and we need those cracks in order to consolidate them.”

Consolidating materials were then applied on a variety of limestones, sandstones and marble – a selection of the different types of stones that were used to build cathedrals around Europe.

What researchers are looking for are very specific properties.

“First of all, the consolidating material has to be well absorbed by the stone,” says petrologist Johannes Weber of the University of Applied Arts in Vienna. “Then, as it evaporates, it has to settle properly within the stone structure. It should not shrink too much. All materials shrink when drying, including consolidating materials. They should adhere to the particles of the stone but shouldn’t completely obstruct its pores.”

Further tests are underway in cathedrals across Europe in the hope of better protecting our invaluable cultural heritage.

There’s a bit more detail about Nano-Cathedral on the Opera della Primaziale Pisana (OPA) website (from their Nano-Cathedral project page),

With the meeting of June 3 this year, the Nano-Cathedral project kicked off, supported by the European Union under Horizon 2020 in the field of nanotechnology applied to cultural heritage, with funding of about 6.5 million euros.

A total of six monumental buildings will spend three years under the eyes and hands of petrographers, geologists, chemists and restorers from the institutes belonging to the Consortium: five cathedrals have been selected to represent the cultural diversity within Europe, from the perspective of developing shared values and a transnational identity, along with a contemporary monumental building entirely clad in Carrara marble, the Opera House of Oslo.

Purpose: the testing of nanomaterials for the conservation of marble and the outer surfaces of our ‘cathedrals’.
The field of investigation to check degradation, testing new consolidating and protective products is the Cathedral of Pisa together with the Cathedrals of Cologne, Vienna, Ghent and Vitoria.
For the selection of case studies we have cross-checked requirements for their historical and architectural value but also for the different types of construction materials – marble, limestone and sandstone – as well as the location of the six monumental buildings across Europe’s climates.

The Cathedral of Pisa is the most southern, fully positioned in a Mediterranean climate, and therefore subject to degradation very different from that recorded under the weather conditions of the Scandinavian peninsula; all the intermediate climates are covered by Ghent, Vitoria, Cologne and Vienna.

At the conclusion of the three-year project, once the analyses in situ and in the laboratory are completed and all the experiments have been tested on each identified portion of each monumental building, an intervention protocol will be defined in detail: it will identify the mineralogical and petrographic characteristics of the stone materials and of their degradation, and assess the causes and mechanisms of the associated alteration, including interactions with environmental pollution. Then we will be able to identify the most appropriate method of restoration and testing of nanotechnology products for the consolidation and protection of the different stone materials.

In 2018 we hope to have new materials to protect and safeguard the ‘skin’ of our historic buildings and monuments for a long time.

Back to my headline and the second piece of wordplay, ‘lift’ as in ‘skin lift’ in that last sentence.

I realize this is a bit off topic but it’s worth taking a look at OPA’s home page,

Gabriele D’Annunzio effectively condenses the wonder and admiration that seize whoever visits the Duomo Square of Pisa.

The Opera della Primaziale Pisana (OPA) is a non-profit organisation which was established in order to oversee the first works for the construction of the monuments in the Piazza del Duomo, subject to its own charter which includes the protection, promotion and enhancement of its heritage, in order to pass the religious and artistic meaning onto future generations.

«L’Ardea roteò nel cielo di Cristo, sul prato dei Miracoli.» (“The heron wheeled in the sky of Christ, over the meadow of Miracles.”)
Gabriele d’Annunzio in Forse che sì forse che no (1910)

If you go to the home page, you can buy tickets to visit the monuments surrounding the square and there are other notices including one for a competition (it’s too late to apply but the details are interesting) to construct four stained glass windows for the Pisa cathedral.

European Commission has issued evaluation of nanomaterial risk frameworks and tools

Despite complaints that there should have been more, there has been some research into risks where nanomaterials are concerned. While additional research would be welcome, it’s perhaps more imperative that standardized testing and risk frameworks be developed so that, for example, carbon nanotube safety research in Japan can be compared with similar research in the Netherlands, the US, and elsewhere. This March 15, 2017 news item on Nanowerk features some research analyzing risk assessment frameworks and tools in Europe,

A recent study has evaluated frameworks and tools used in Europe to assess the potential health and environmental risks of manufactured nanomaterials. The study identifies a trend towards tools that provide protocols for conducting experiments, which enable more flexible and efficient hazard testing. Among its conclusions, however, it notes that no existing frameworks meet all the study’s evaluation criteria and calls for a new, more comprehensive framework.

A March 9, 2017 news alert in the European Commission’s Science for Environment Policy series, which originated the news item, provides more detail (Note: Links have been removed),

Nanotechnology is identified as a key emerging technology in the EU’s growth strategy, Europe 2020. It has great potential to contribute to innovation and economic growth and many of its applications have already received large investments. However, there are some uncertainties surrounding the environmental, health and safety risks of manufactured nanomaterials. For effective regulation, careful scientific analysis of their potential impacts is needed, as conducted through risk assessment exercises.

This study, conducted under the EU-funded MARINA project, reviewed existing frameworks and tools for the risk assessment of manufactured nanomaterials. The researchers define a framework as a ‘conceptual paradigm’ for how a risk assessment should be conducted and understood, and give the REACH chemical safety assessment as an example. Tools are defined as implements used to carry out a specific task or function, such as experimental protocols, computer models or databases.

In all, 12 frameworks and 48 tools were evaluated. These were identified from other studies and projects. The frameworks were assessed against eight criteria which represent different strengths, such as whether they consider properties specific to nanomaterials, whether they consider the entire life cycle of a nanomaterial and whether they include careful planning and prioritise objectives before the risk assessment is conducted.

The tools were assessed against seven criteria, such as ease of use, whether they provide quantitative information and if they clearly communicate uncertainty in their results. The researchers defined the criteria for both frameworks and tools by reviewing other studies and by interviewing staff at organisations who develop tools.
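
To make the study’s method concrete, the evaluation amounts to scoring each framework or tool against a checklist of criteria and tallying the coverage. Here’s a minimal Python sketch of that kind of scoring; the criteria names and the sample entry are illustrative placeholders of mine, not the study’s actual data,

```python
# Sketch of a MARINA-style evaluation: score each framework or tool
# against a set of criteria and report its coverage. Criteria names and
# the sample entry are illustrative, not the study's actual data.

FRAMEWORK_CRITERIA = [
    "considers nano-specific properties",
    "covers the entire life cycle",
    "includes planning and prioritisation of objectives",
    "integrates human health and environmental factors",
    # ...the study used eight criteria for frameworks, seven for tools
]

def coverage(name, met, criteria):
    """Print and return the fraction of criteria met, listing the gaps."""
    gaps = [c for c in criteria if c not in met]
    score = (len(criteria) - len(gaps)) / len(criteria)
    print(f"{name}: meets {len(criteria) - len(gaps)}/{len(criteria)} criteria")
    for gap in gaps:
        print(f"  gap: {gap}")
    return score

# Hypothetical scoring of one framework:
coverage("REACH chemical safety assessment",
         {"covers the entire life cycle",
          "includes planning and prioritisation of objectives"},
         FRAMEWORK_CRITERIA)
```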

The evaluation was thus able to produce a list of strengths and areas for improvement for the frameworks and tools, based on whether they meet each of the criteria. Among its many findings, the evaluation showed that most of the frameworks stress that ‘problem formulation’, which sets the goals and scope of an assessment during the planning process, is essential to avoid unnecessary testing. In addition, most frameworks consider routes of exposure in the initial stages of assessment, which is beneficial as it can exclude irrelevant exposure routes and avoid unnecessary tests.

However, none of the frameworks met all eight of the criteria. The study therefore recommends that a new, comprehensive framework be developed that meets all criteria. Such a framework is needed to inform regulation, the researchers say, and should integrate human health and environmental factors, and cover all stages of the life cycle of a product containing nanomaterials.

The evaluation of the tools suggested that many of them are designed to screen risks, and not necessarily to support regulatory risk assessment. However, their strengths include a growing trend in quantitative models, which can assess uncertainty; for example, one tool analysed can identify uncertainties in its results that are due to gaps in knowledge about a material’s origin, characteristics and use.
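
As a rough illustration of what a ‘quantitative model that can assess uncertainty’ looks like in practice: when an input is only known as a range, a screening tool can propagate that range through the calculation with a Monte Carlo loop, producing a distribution rather than a single number. The sketch below uses a generic well-mixed-room exposure model with invented values; it is not one of the tools the study evaluated,

```python
# Monte Carlo sketch of uncertainty in a screening-level exposure model:
# an input known only as a range turns the output into a distribution.
# The well-mixed-room model and all values are invented for illustration.
import random
import statistics

def air_concentration(emission_mg_per_h, room_m3, air_changes_per_h):
    """Steady-state concentration in a well-mixed room, mg/m3."""
    return emission_mg_per_h / (room_m3 * air_changes_per_h)

# Emission rate is uncertain: we only know it lies between 0.5 and 5 mg/h.
samples = sorted(
    air_concentration(random.uniform(0.5, 5.0),
                      room_m3=50.0, air_changes_per_h=2.0)
    for _ in range(10_000)
)
print(f"median: {statistics.median(samples):.4f} mg/m3")
print(f"90th percentile: {samples[int(0.9 * len(samples))]:.4f} mg/m3")
```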

The researchers also identified a growing trend in tools that provide protocols for experiments, such as protocols for identifying materials and testing hazards, which are reproducible across laboratories. These tools could lead to a shift away from expensive case-by-case testing for risk assessment of manufactured nanomaterials towards a more efficient process based on groupings of nanomaterials and on ‘read-across’ methods, where the properties of one material can be inferred without testing, based on the known properties of a similar material. The researchers do note, however, that although read-across methods are well established for chemical substances, they are still being developed for nanomaterials. To improve nanomaterial read-across methods, they suggest that more data are needed on the links between nanomaterials’ specific properties and their biological effects.
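
Read-across is, at heart, similarity-based inference: estimate an untested material’s property from the measured properties of its most similar tested neighbours. Here’s a toy sketch of the idea; the material names, descriptors and values are invented, and real nanomaterial read-across also demands scientific justification of the grouping, not just numerical similarity,

```python
# Toy read-across sketch: infer an untested nanomaterial's property from
# the most similar tested materials in descriptor space. All names,
# descriptors and values are invented for illustration.
import math

# (diameter_nm, surface_area_m2_per_g, zeta_potential_mV) -> hazard score
tested = {
    "MWCNT-A": ((15.0, 250.0, -30.0), 0.8),
    "MWCNT-B": ((40.0, 120.0, -25.0), 0.5),
    "ZnO-1":   ((30.0,  45.0,  20.0), 0.6),
}

def read_across(descriptors, k=2):
    """Average the hazard score of the k most similar tested materials."""
    ranked = sorted(tested.values(),
                    key=lambda entry: math.dist(entry[0], descriptors))
    return sum(score for _, score in ranked[:k]) / k

# Untested material resembling the carbon nanotubes:
print(f"inferred hazard score: {read_across((20.0, 200.0, -28.0)):.2f}")
```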

That’s all, folks.

Graphene-based neural probes

I have two news bits (dated almost one month apart) about the use of graphene in neural probes, one from the European Union and the other from Korea.

European Union (EU)

This work is being announced by the Graphene Flagship, one of the two mega-funding projects the European Commission (the EU’s executive arm) announced in 2013; the Graphene Flagship and the Human Brain Project were each awarded 1B euros over ten years.

According to a March 27, 2017 news item on ScienceDaily, researchers have developed a graphene-based neural probe that has been tested on rats,

Measuring brain activity with precision is essential to developing further understanding of diseases such as epilepsy and disorders that affect brain function and motor control. Neural probes with high spatial resolution are needed for both recording and stimulating specific functional areas of the brain. Now, researchers from the Graphene Flagship have developed a new device for recording brain activity in high resolution while maintaining excellent signal to noise ratio (SNR). Based on graphene field-effect transistors, the flexible devices open up new possibilities for the development of functional implants and interfaces.

The research, published in 2D Materials, was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich; Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS; Spain), Spanish National Research Council (CSIC; Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN; Spain) and the Catalan Institute of Nanoscience and Nanotechnology (ICN2; Spain).

Caption: Graphene transistors integrated in a flexible neural probe enable electrical signals from neurons to be measured with high accuracy and density. Inset: The tip of the probe contains 16 flexible graphene transistors. Credit: ICN2

A March 27, 2017 Graphene Flagship press release on EurekAlert, which originated the news item, describes the work in more detail,

The devices were used to record the large signals generated by pre-epileptic activity in rats, as well as the smaller levels of brain activity during sleep and in response to visual light stimulation. These types of activities lead to much smaller electrical signals, and are at the level of typical brain activity. Neural activity is detected through the highly localised electric fields generated when neurons fire, so densely packed, ultra-small measuring devices are important for accurate brain readings.
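
Since signal-to-noise ratio is the figure of merit here, it may help to see how SNR is typically computed for a recorded trace: the ratio of signal power to noise power, expressed in decibels. This sketch uses synthetic data with loosely neural-scale amplitudes; it is not the Flagship team’s analysis,

```python
# Minimal SNR sketch: a small oscillatory "signal" buried in noise.
# Amplitudes are loosely neural-scale (tens of microvolts) but otherwise
# arbitrary; this is not the Flagship team's analysis.
import math
import random

fs = 1000  # sampling rate, Hz; one second of data
signal = [50e-6 * math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 50 uV, 10 Hz
noise = [random.gauss(0.0, 10e-6) for _ in range(fs)]                      # ~10 uV rms
recorded = [s + n for s, n in zip(signal, noise)]

def power(x):
    """Mean squared amplitude."""
    return sum(v * v for v in x) / len(x)

snr_db = 10 * math.log10(power(signal) / power(noise))
print(f"SNR of the simulated recording: {snr_db:.1f} dB")
```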

The neural probes are placed directly on the surface of the brain, so safety is of paramount importance for the development of graphene-based neural implant devices. Importantly, the researchers determined that the graphene-based probes are non-toxic, and did not induce any significant inflammation.

Devices implanted in the brain as neural prostheses for therapeutic brain stimulation technologies, and as interfaces for sensory and motor devices such as artificial limbs, are an important goal for improving quality of life for patients. This work represents a first step towards the use of graphene in research as well as clinical neural devices, showing that graphene-based technologies can deliver the high resolution and high SNR needed for these applications.

First author Benno Blaschke (TU Munich) said, “Graphene is one of the few materials that allows recording in a transistor configuration and simultaneously complies with all other requirements for neural probes such as flexibility, biocompatibility and chemical stability. Although graphene is ideally suited for flexible electronics, it was a great challenge to transfer our fabrication process from rigid substrates to flexible ones. The next step is to optimize the wafer-scale fabrication process and improve device flexibility and stability.”

Jose Antonio Garrido (ICN2) led the research. He said, “Mechanical compliance is an important requirement for safe neural probes and interfaces. Currently, the focus is on ultra-soft materials that can adapt conformally to the brain surface. Graphene neural interfaces have already shown great potential, but we have to improve on the yield and homogeneity of the device production in order to advance towards a real technology. Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated with medical devices such as safety, biocompatibility, etc.”

Caption: The graphene-based neural probes were used to detect rats’ responses to visual stimulation, as well as neural signals during sleep. Both types of signals are small, and typically difficult to measure. Credit: ICN2

Here’s a link to and a citation for the paper,

Mapping brain activity with flexible graphene micro-transistors by Benno M Blaschke, Núria Tort-Colet, Anton Guimerà-Brunet, Julia Weinert, Lionel Rousseau, Axel Heimann, Simon Drieschner, Oliver Kempski, Rosa Villa, Maria V Sanchez-Vives. 2D Materials, Volume 4, Number 2 DOI https://doi.org/10.1088/2053-1583/aa5eff Published 24 February 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

Korea

While this research from Korea was published more recently, the probe itself has not yet been subjected to in vivo (animal) testing. From an April 19, 2017 news item on ScienceDaily,

Electrodes placed in the brain record neural activity, and can help treat neural diseases like Parkinson’s and epilepsy. Interest is also growing in developing better brain-machine interfaces, in which electrodes can help control prosthetic limbs. Progress in these fields is hindered by limitations in electrodes, which are relatively stiff and can damage soft brain tissue.

Designing smaller, gentler electrodes that still pick up brain signals is a challenge because brain signals are so weak. Typically, the smaller the electrode, the harder it is to detect a signal. However, a team from the Daegu Gyeongbuk Institute of Science & Technology [DGIST] in Korea developed new probes that are small, flexible and read brain signals clearly.

This is a pretty interesting way to illustrate the research,

Caption: Graphene and gold make a better brain probe. Credit: DGIST

An April 19, 2017 DGIST press release (also on EurekAlert), which originated the news item, expands on the theme (Note: A link has been removed),

The probe consists of an electrode, which records the brain signal. The signal travels down an interconnection line to a connector, which transfers the signal to machines measuring and analysing the signals.

The electrode starts with a thin gold base. Attached to the base are tiny zinc oxide nanowires, which are coated in a thin layer of gold and then in a layer of a conducting polymer called PEDOT. Together, these materials increase the electrode’s effective surface area, conductivity, and strength, while maintaining flexibility and compatibility with soft tissue.

Packing several long, thin nanowires together onto one probe enables the scientists to make a smaller electrode that retains the same effective surface area as a larger, flat electrode. This means the electrode can shrink without reducing signal detection. The interconnection line is made of a mix of graphene and gold. Graphene is flexible and gold is an excellent conductor. The researchers tested the probe and found it read rat brain signals very clearly, much better than a standard flat, gold electrode.
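
The surface-area argument is simple geometry: the sidewalls of a forest of cylindrical nanowires add far more area than the flat footprint they stand on. Here’s a back-of-envelope check with dimensions I’ve assumed for illustration rather than taken from the DGIST paper,

```python
# Back-of-envelope check of the nanowire surface-area argument: compare
# a flat electrode's area with the sidewall area of a nanowire forest on
# the same footprint. Dimensions are assumed for illustration, not taken
# from the DGIST paper.
import math

footprint_um2 = 100.0    # a 10 um x 10 um electrode site
wire_radius_um = 0.05    # 50 nm radius
wire_length_um = 2.0
wires_per_um2 = 20       # packing density

n_wires = footprint_um2 * wires_per_um2
sidewall_um2 = n_wires * 2 * math.pi * wire_radius_um * wire_length_um
print(f"flat electrode: {footprint_um2:.0f} um^2")
print(f"nanowire forest: {sidewall_um2:.0f} um^2 "
      f"(~{sidewall_um2 / footprint_um2:.0f}x the effective area)")
```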

“Our graphene and nanowires-based flexible electrode array can be useful for monitoring and recording the functions of the nervous system, or to deliver electrical signals to the brain,” the researchers conclude in their paper recently published in the journal ACS Applied Materials and Interfaces.

The probe requires further clinical tests before widespread commercialization. The researchers are also interested in developing a wireless version to make it more convenient for a variety of applications.

Here’s a link to and a citation for the paper,

Enhancement of Interface Characteristics of Neural Probe Based on Graphene, ZnO Nanowires, and Conducting Polymer PEDOT by Mingyu Ryu, Jae Hoon Yang, Yumi Ahn, Minkyung Sim, Kyung Hwa Lee, Kyungsoo Kim, Taeju Lee, Seung-Jun Yoo, So Yeun Kim, Cheil Moon, Minkyu Je, Ji-Woong Choi, Youngu Lee, and Jae Eun Jang. ACS Appl. Mater. Interfaces, 2017, 9 (12), pp 10577–10586 DOI: 10.1021/acsami.7b02975 Publication Date (Web): March 7, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Developing cortical implants for future speech neural prostheses

I’m guessing that graphene will feature in these proposed cortical implants since the project leader is a member of the Graphene Flagship’s Biomedical Technologies Work Package. (For those who don’t know, the Graphene Flagship is one of two major funding initiatives, each receiving 1B euros over 10 years from the European Commission as part of its FET [Future and Emerging Technologies] Initiative.) A Jan. 12, 2017 news item on Nanowerk announces the new project (Note: A link has been removed),

BrainCom is a FET Proactive project, funded by the European Commission with 8.35M€ [8.35 million euros] for the next 5 years, holding its kick-off meeting on January 12-13 at ICN2 (Catalan Institute of Nanoscience and Nanotechnology) and the UAB [Universitat Autònoma de Barcelona]. This project, coordinated by ICREA [Catalan Institution for Research and Advanced Studies] Research Prof. Jose A. Garrido from ICN2, will permit significant advances in the understanding of cortical speech networks and the development of speech rehabilitation solutions using innovative brain-computer interfaces.

A Jan. 12, 2017 ICN2 press release, which originated the news item, expands on the theme (it is a bit repetitive),

More than 5 million people worldwide suffer annually from aphasia, an extremely disabling condition in which patients lose the ability to comprehend and formulate language after brain damage or in the course of neurodegenerative disorders. Brain-computer interfaces (BCIs), enabled by forefront technologies and materials, are a promising approach to treating patients with aphasia. The principle of BCIs is to collect neural activity at its source and decode it by means of electrodes implanted directly in the brain. However, neurorehabilitation of higher cognitive functions such as language raises serious issues. The current challenge is to design neural implants that cover sufficiently large areas of the brain to allow for reliable decoding of the detailed neuronal activity distributed in the various brain regions that are key for language processing.

BrainCom is a FET Proactive project funded by the European Commission with 8.35M€ for the next 5 years. This interdisciplinary initiative involves 10 partners including technologists, engineers, biologists, clinicians, and ethics experts. They aim to develop a new generation of neuroprosthetic cortical devices enabling large-scale recordings and stimulation of cortical activity to study high-level cognitive functions. Ultimately, the BrainCom project will seed a novel line of knowledge and technologies aimed at developing the future generation of speech neural prostheses. It will cover different levels of the value chain: from technology and engineering to basic and language neuroscience, and from preclinical research in animals to clinical studies in humans.

This recently funded project is coordinated by ICREA Prof. Jose A. Garrido, Group Leader of the Advanced Electronic Materials and Devices Group at the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology – ICN2) and deputy leader of the Biomedical Technologies Work Package presented last year in Barcelona by the Graphene Flagship. The BrainCom Kick-Off meeting is held on January 12-13 at ICN2 and the Universitat Autònoma de Barcelona (UAB).

Recent developments show that it is possible to record cortical signals from a small region of the motor cortex and decode them to allow tetraplegic [also known as quadriplegic] people to activate a robotic arm to perform everyday actions. Brain-computer interfaces have also been successfully used to help tetraplegic patients unable to speak to communicate their thoughts by selecting letters on a computer screen using non-invasive electroencephalographic (EEG) recordings. The performance of such technologies can be dramatically increased using more detailed cortical neural information.
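
In its simplest form, the decoding step described above is a classifier that maps a window of multichannel neural activity to an intended action. The deliberately tiny sketch below simulates channel activity and decodes it with a nearest-centroid rule; real BCI decoders use far richer features and models, and nothing here comes from the BrainCom project itself,

```python
# Deliberately tiny BCI-decoding sketch: classify a window of simulated
# multichannel activity as one of two intended actions with a nearest-
# centroid rule. Real decoders use far richer features and models.
import random

N_CH = 8  # number of recording channels

def window(action):
    """Simulate mean activity per channel for one intended action."""
    base = [1.0 + 0.5 * action * (ch % 2) for ch in range(N_CH)]  # action 1 boosts odd channels
    return [b + random.gauss(0.0, 0.2) for b in base]

# "Training": average 20 labelled windows per action into a centroid
centroids = {
    a: [sum(col) / 20 for col in zip(*[window(a) for _ in range(20)])]
    for a in (0, 1)
}

def decode(x):
    """Return the action whose centroid is closest to the window."""
    sq_dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(centroids, key=lambda a: sq_dist(centroids[a]))

trials = [random.randint(0, 1) for _ in range(100)]
hits = sum(decode(window(a)) == a for a in trials)
print(f"decoded {hits}/100 simulated windows correctly")
```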

The BrainCom project proposes a radically new electrocorticography technology that takes advantage of the unique mechanical and electrical properties of novel nanomaterials such as graphene, other 2D materials and organic semiconductors. The consortium members will fabricate ultra-flexible cortical and intracortical implants, which will be placed right on the surface of the brain, enabling high-density recording and stimulation sites over a large area. This approach will allow the parallel stimulation and decoding of cortical activity with unprecedented spatial and temporal resolution.

These technologies will help to advance the basic understanding of cortical speech networks and to develop rehabilitation solutions to restore speech using innovative brain-computer paradigms. The technology innovations developed in the project will also find applications in the study of other high cognitive functions of the brain such as learning and memory, as well as other clinical applications such as epilepsy monitoring.

The BrainCom project Consortium members are:

  • Catalan Institute of Nanoscience and Nanotechnology (ICN2) – Spain (Coordinator)
  • Institute of Microelectronics of Barcelona (CNM-IMB-CSIC) – Spain
  • University Grenoble Alpes – France
  • ARMINES/ Ecole des Mines de St. Etienne – France
  • Centre Hospitalier Universitaire de Grenoble – France
  • Multichannel Systems – Germany
  • University of Geneva – Switzerland
  • University of Oxford – United Kingdom
  • Ludwig-Maximilians-Universität München – Germany
  • Wavestone – Luxembourg

There doesn’t seem to be a website for the project but there is a BrainCom webpage on the European Commission’s CORDIS (Community Research and Development Information Service) website.