Tag Archives: University of Geneva

Developing cortical implants for future speech neural prostheses

I’m guessing that graphene will feature in these proposed cortical implants since the project leader is a member of the Graphene Flagship’s Biomedical Technologies Work Package. (For those who don’t know, the Graphene Flagship is one of two major funding initiatives each receiving 1 billion Euros over 10 years from the European Commission as part of their FET [Future and Emerging Technologies] Initiative.) A Jan. 12, 2017 news item on Nanowerk announces the new project (Note: A link has been removed),

BrainCom is a FET Proactive project, funded by the European Commission with 8.35M€ [8.35 million Euros] for the next 5 years, holding its Kick-off meeting on January 12-13 at ICN2 (Catalan Institute of Nanoscience and Nanotechnology) and the UAB [Universitat Autònoma de Barcelona]. This project, coordinated by ICREA [Catalan Institution for Research and Advanced Studies] Research Prof. Jose A. Garrido from ICN2, will permit significant advances in understanding of cortical speech networks and the development of speech rehabilitation solutions using innovative brain-computer interfaces.

A Jan. 12, 2017 ICN2 press release, which originated the news item, expands on the theme (it is a bit repetitive),

More than 5 million people worldwide suffer annually from aphasia, an extremely debilitating condition in which patients lose the ability to comprehend and formulate language after brain damage or in the course of neurodegenerative disorders. Brain-computer interfaces (BCIs), enabled by forefront technologies and materials, are a promising approach to treat patients with aphasia. The principle of BCIs is to collect neural activity at its source and decode it by means of electrodes implanted directly in the brain. However, neurorehabilitation of higher cognitive functions such as language raises serious issues. The current challenge is to design neural implants that cover sufficiently large areas of the brain to allow for reliable decoding of detailed neuronal activity distributed in various brain regions that are key for language processing.

BrainCom is a FET Proactive project funded by the European Commission with 8.35M€ for the next 5 years. This interdisciplinary initiative involves 10 partners including technologists, engineers, biologists, clinicians, and ethics experts. They aim to develop a new generation of neuroprosthetic cortical devices enabling large-scale recordings and stimulation of cortical activity to study high-level cognitive functions. Ultimately, the BrainCom project will seed a novel line of knowledge and technologies aimed at developing the future generation of speech neural prostheses. It will cover different levels of the value chain: from technology and engineering to basic and language neuroscience, and from preclinical research in animals to clinical studies in humans.

This recently funded project is coordinated by ICREA Prof. Jose A. Garrido, Group Leader of the Advanced Electronic Materials and Devices Group at the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology – ICN2) and deputy leader of the Biomedical Technologies Work Package presented last year in Barcelona by the Graphene Flagship. The BrainCom Kick-Off meeting is being held on January 12-13 at ICN2 and the Universitat Autònoma de Barcelona (UAB).

Recent developments show that it is possible to record cortical signals from a small region of the motor cortex and decode them to allow tetraplegic [also known as quadriplegic] people to activate a robotic arm to perform everyday actions. Brain-computer interfaces have also been successfully used to help tetraplegic patients unable to speak to communicate their thoughts by selecting letters on a computer screen using non-invasive electroencephalographic (EEG) recordings. The performance of such technologies can be dramatically increased using more detailed cortical neural information.

The BrainCom project proposes a radically new electrocorticography technology taking advantage of the unique mechanical and electrical properties of novel nanomaterials such as graphene, 2D materials and organic semiconductors. The consortium members will fabricate ultra-flexible cortical and intracortical implants, which will be placed right on the surface of the brain, enabling high density recording and stimulation sites over a large area. This approach will allow the parallel stimulation and decoding of cortical activity with unprecedented spatial and temporal resolution.

These technologies will help to advance the basic understanding of cortical speech networks and to develop rehabilitation solutions to restore speech using innovative brain-computer paradigms. The technology innovations developed in the project will also find applications in the study of other high cognitive functions of the brain such as learning and memory, as well as other clinical applications such as epilepsy monitoring.

The BrainCom project Consortium members are:

  • Catalan Institute of Nanoscience and Nanotechnology (ICN2) – Spain (Coordinator)
  • Institute of Microelectronics of Barcelona (CNM-IMB-CSIC) – Spain
  • University Grenoble Alpes – France
  • ARMINES/ Ecole des Mines de St. Etienne – France
  • Centre Hospitalier Universitaire de Grenoble – France
  • Multichannel Systems – Germany
  • University of Geneva – Switzerland
  • University of Oxford – United Kingdom
  • Ludwig-Maximilians-Universität München – Germany
  • Wavestone – Luxembourg

There doesn’t seem to be a website for the project, but there is a BrainCom webpage on the European Commission’s CORDIS (Community Research and Development Information Service) website.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to carry out its calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory in Buffalo, New York, published a numerical model based on these concepts, thereby creating the perceptron, the very first artificial neural network. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This design can support calculations across thousands of layers, and it was this depth of architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
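
For readers who like to see the mechanics, here’s a minimal Python sketch of my own illustrating that description (the function and variable names are mine, and a hard threshold stands in for the smoother activation functions real networks typically use):

```python
import numpy as np

def layer_forward(inputs, weights, threshold=0.0):
    """One layer of threshold neurones: weighted sum, then fire only
    if the sum exceeds the pre-defined threshold."""
    summed = weights @ inputs                         # weighted sum per neurone
    return np.where(summed > threshold, summed, 0.0)  # below threshold: no firing

# Toy two-layer pass: 4 inputs -> 3 neurones -> 2 neurones
rng = np.random.default_rng(0)
x = rng.random(4)                                        # input values
hidden = layer_forward(x, rng.normal(size=(3, 4)))       # first layer's output...
output = layer_forward(hidden, rng.normal(size=(2, 3)))  # ...feeds the next layer
print(output)
```

Real systems learn the weights from training examples rather than drawing them at random, but the flow of data through the layers is exactly this.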

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, limited computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning survived only thanks to the video games sector, which eventually began producing graphics chips, or GPUs, of unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful capability for executing the innumerable simultaneous operations required by neural networks.

Although image analysis yields great results, things are more complicated for sequential data such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks, in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
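
For the curious, here is a minimal sketch of my own of one step of a standard LSTM cell in Python/NumPy (the textbook formulation with input, forget and output gates; the variable names are mine, and this is not code from IDSIA or Google):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell: gates decide what the cell
    state (the long short-term memory) keeps, forgets and outputs."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])  # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])  # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])  # output gate
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])  # candidate update
    c = f * c_prev + i * g   # cell state carries context, e.g. 'b' vs 'fl'
    h = o * np.tanh(c)       # hidden state loops back in at the next step
    return h, c

# Toy dimensions: 5-dimensional inputs, 8 hidden units
n_in, n_h = 5, 8
rng = np.random.default_rng(1)
W = {k: rng.normal(size=(n_h, n_in)) for k in 'ifog'}
U = {k: rng.normal(size=(n_h, n_h)) for k in 'ifog'}
b = {k: np.zeros(n_h) for k in 'ifog'}

h = c = np.zeros(n_h)
for x in rng.normal(size=(3, n_in)):  # a short 3-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```

The looping of the hidden state back into the cell is what lets the network remember the order of a chain of events, as described above.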

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Viewing quantum entanglement with the naked eye

A Feb. 18, 2016 article by Bob Yirka for phys.org suggests there may be a way to see quantum entanglement with the naked eye,

A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others.

Entanglement is, of course, where two quantum particles are intrinsically linked to the extent that they actually share the same existence, even though they can be separated and moved apart. The idea was first proposed nearly a century ago, and it has not only been proven, researchers routinely cause it to occur. But, to date, not one single person has ever actually seen it happen—they only know it happens by conducting a series of experiments. It is not clear if anyone has ever actually tried to see it happen, but in this new effort, the research trio claim to have found a way to make it possible—if only someone else will carry out the experiment on a willing volunteer.

A Feb. 17, 2016 article for the MIT (Massachusetts Institute of Technology) Technology Review describes this proposed project in detail,

Finding a way for a human eye to detect entangled photons sounds straightforward. After all, the eye is a photon detector, so it ought to be possible for an eye to replace a photo detector in any standard entanglement detecting experiment.

Such an experiment might consist of a source of entangled pairs of photons, each of which is sent to a photo detector via an appropriate experimental setup.

By comparing the arrival of photons at each detector and by repeating the detecting process many times, it is possible to determine statistically whether entanglement is occurring.

It’s easy to imagine that this experiment could simply be repeated by replacing one of the photodetectors with an eye. But that turns out not to be the case.

The main problem is that the eye cannot detect single photons. Instead, each light-detecting rod at the back of the eye must be stimulated by a good handful of photons to trigger a detection. The lowest number of photons that can do the trick is thought to be about seven, but in practice, people usually see photons only when they arrive in the hundreds or thousands.

Even then, the eye is not a particularly efficient photodetector. A good optics lab will have photodetectors that are well over 90 percent efficient. By contrast, at the very lowest light levels, the eye is about 8 percent efficient. That means it misses lots of photons.

That creates a significant problem. If a human eye is ever to “see” entanglement in this way, then physicists will have to entangle not just two photons but at least seven, and ideally many hundreds or thousands of them.

And that simply isn’t possible with today’s technology. At best, physicists are capable of entangling half a dozen photons, and even this is a difficult task.
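
Those two numbers (a roughly seven-photon trigger and roughly 8 percent efficiency) are enough for a back-of-the-envelope check. Here’s a quick Python sketch of my own (purely illustrative, not from the paper): if each of n incident photons is registered independently with probability 0.08, the chance of crossing the seven-photon trigger follows a binomial distribution.

```python
from scipy.stats import binom

EFFICIENCY = 0.08  # fraction of incident photons the eye registers
THRESHOLD = 7      # detected photons needed to trigger a perception

for n_photons in (7, 50, 100, 200, 500):
    # P(at least THRESHOLD detections) = survival function at THRESHOLD - 1
    p_see = binom.sf(THRESHOLD - 1, n_photons, EFFICIENCY)
    print(f"{n_photons:4d} incident photons -> P(see) = {p_see:.4f}")
```

With seven incident photons the probability is essentially zero (all seven would have to be detected), and only somewhere above a hundred incident photons does seeing become likely, which is the article’s point about needing hundreds or thousands of entangled photons.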

But the researchers have come up with a solution to the problem,

Vivoli and co say they have devised a trick that effectively amplifies a single entangled photon into many photons that the eye can see. Their trick depends on a technique called a displacement operation, in which two quantum objects interfere so that one changes the phase of the other.

One way to do this with photons is with a beam splitter. Imagine a beam of coherent photons from a laser that is aimed at a beam splitter. The beam is transmitted through the splitter but a change of phase can cause it to be reflected instead.

Now imagine another beam of coherent photons that interferes with the first. This changes the phase of the first beam so that it is reflected rather than transmitted. In other words, the second beam can switch the reflection on and off.

Crucially, the switching beam needn’t be as intense as the main beam—it only needs to be coherent. Indeed, a single photon can do this trick of switching a more intense beam, at least in theory.

That’s the basis of the new approach. The idea is to use a single entangled photon to switch the passage of a more powerful beam through a beam splitter. And it is this more powerful beam that the eye detects and which still preserves the quantum nature of the original entanglement.
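
The switching trick can be illustrated with ordinary interference arithmetic. Here’s a toy sketch of my own (a classical 50:50 beam splitter with made-up amplitudes, not the authors’ full quantum treatment) showing how the relative phase of one input steers light between the two output ports:

```python
import numpy as np

def beamsplitter(a, b):
    """50:50 beam splitter (symmetric convention): two inputs in, two outputs out."""
    out1 = (a + 1j * b) / np.sqrt(2)
    out2 = (1j * a + b) / np.sqrt(2)
    return out1, out2

a = 1.0  # amplitude of the main beam (arbitrary units)
for phase in (np.pi / 2, -np.pi / 2):  # relative phase of the switching input
    b = np.exp(1j * phase)             # equal-amplitude second beam
    o1, o2 = beamsplitter(a, b)
    print(f"phase {phase:+.2f} rad -> port 1: {abs(o1)**2:.2f}, port 2: {abs(o2)**2:.2f}")
```

Flipping the relative phase moves all the light from one output port to the other, which is the sense in which a weak, coherent input can act as a switch for a stronger beam.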

… this experiment will be hard to do. Ensuring that the optical amplifier works as they claim will be hard, for example.

And even if it does, reliably recording each detection in the eye will be even harder. The test for entanglement is a statistical one that requires many counts from both detectors. That means an individual would have to sit in the experiment registering a yes or no answer for each run, repeated thousands or tens of thousands of times. Volunteers will need to have plenty of time on their hands.
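
That requirement is just counting statistics. A rough sketch of my own (the rates are invented, purely illustrative): to resolve a difference of a few percentage points between two ‘yes’ rates, the standard error of the observed rate, which shrinks as one over the square root of the number of runs, must be several times smaller than the difference.

```python
import math

def trials_needed(p, delta, z=3.0):
    """Crude normal-approximation estimate of the runs needed to resolve
    a difference delta between 'yes' rates near p, at z standard errors."""
    return math.ceil(z**2 * p * (1 - p) / delta**2)

# e.g. a baseline 'yes' rate near 50% and a 3-percentage-point signal
print(trials_needed(p=0.5, delta=0.03))  # -> 2500 runs, per measurement setting
```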

Of course, experiments like this will quickly take the glamor and romance out of the popular perception of entanglement. Indeed, it’s hard to see why anybody would want to be entangled with a photodetector over the time it takes to do this experiment.

There is a suggestion as to how to make this a more attractive proposition for volunteers,

One way to increase this motivation would be to modify the experiment so that it entangles two humans. It’s not hard to imagine people wanting to take part in such an experiment, perhaps even eagerly.

That will require a modified setup in which both detectors are human eyes, with their high triggering level and their low efficiency. Whether this will be possible with Vivoli and co’s setup isn’t yet clear.

Only then will volunteers be able to answer the question that sits uncomfortably with most physicists. What does it feel like to be entangled with another human?

Given the nature of this experiment, the answer will be “mind-numbingly boring.” But as Vivoli and co point out in their conclusion: “It is safe to say that probing human vision with quantum light is terra incognita. This makes it an attractive challenge on its own.”

You can read the arXiv paper,

What Does It Take to See Entanglement? by Valentina Caprara Vivoli, Pavel Sekatski, and Nicolas Sangouard. arxiv.org/abs/1602.01907 Submitted Feb. 5, 2016

This is an open access paper and this site encourages comments and peer review.

One final comment: the articles reminded me of a March 1, 2012 posting, ‘Can we see entangled images? a question for physicists’, about a challenge issued by physicist Geraldo Barbosa and described in his arXiv paper. Coincidentally, that source article was also written by Bob Yirka and published on phys.org.

Crowd computing for improved nanotechnology-enabled water filtration

This research is the product of a China/Israel/Switzerland collaboration on water filtration with involvement from the UK and Australia. Here’s some general information about the importance of water and about the collaboration in a July 5, 2015 news item on Nanowerk (Note: A link has been removed),

Nearly 800 million people worldwide don’t have access to safe drinking water, and some 2.5 billion people live in precariously unsanitary conditions, according to the Centers for Disease Control and Prevention. Together, unsafe drinking water and the inadequate supply of water for hygiene purposes contribute to almost 90% of all deaths from diarrheal diseases — and effective water sanitation interventions are still challenging scientists and engineers.

A new study published in Nature Nanotechnology (“Water transport inside carbon nanotubes mediated by phonon-induced oscillating friction”) proposes a novel nanotechnology-based strategy to improve water filtration. The research project involves the minute vibrations of carbon nanotubes called “phonons,” which greatly enhance the diffusion of water through sanitation filters. The project was the joint effort of a Tsinghua University-Tel Aviv University research team and was led by Prof. Quanshui Zheng of the Tsinghua Center for Nano and Micro Mechanics and Prof. Michael Urbakh of the TAU School of Chemistry, both of the TAU-Tsinghua XIN Center, in collaboration with Prof. Francois Grey of the University of Geneva.

A July 5, 2015 American Friends of Tel Aviv University news release (also on EurekAlert), which originated the news item, provides more details about the work,

“We’ve discovered that very small vibrations help materials, whether wet or dry, slide more smoothly past each other,” said Prof. Urbakh. “Through phonon oscillations — vibrations of water-carrying nanotubes — water transport can be enhanced, and sanitation and desalination improved. Water filtration systems require a lot of energy due to friction at the nano-level. With these oscillations, however, we witnessed three times the efficiency of water transport, and, of course, a great deal of energy saved.”

The research team managed to demonstrate how, under the right conditions, such vibrations produce a 300% improvement in the rate of water diffusion, by using computers to simulate water molecules flowing through nanotubes. The results have important implications for desalination and energy conservation, e.g., improving the energy efficiency of desalination using reverse osmosis membranes with nanoscale pores, or conserving energy using membranes with boron nitride nanotubes.

Crowdsourcing the solution

The project, initiated by IBM’s World Community Grid, was an experiment in crowdsourced computing — carried out by over 150,000 volunteers who contributed their own computing power to the research.

“Our project won the privilege of using IBM’s World Community Grid, an open platform of users from all around the world, to run our program and obtain precise results,” said Prof. Urbakh. “This was the first project of this kind in Israel, and we could never have managed with just four students in the lab. We would have required the equivalent of nearly 40,000 years of processing power on a single computer. Instead we had the benefit of some 150,000 computing volunteers from all around the world, who downloaded and ran the project on their laptops and desktop computers.

“Crowdsourced computing is playing an increasingly major role in scientific breakthroughs,” Prof. Urbakh continued. “As our research shows, the range of questions that can benefit from public participation is growing all the time.”
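
The computational pattern behind this kind of project is worth noting: the simulations break into many independent work units that volunteers’ machines can run in any order. Here’s a conceptual sketch of my own (a gross simplification of how BOINC-style grids such as World Community Grid operate; the parameter names and placeholder computation are invented):

```python
import math
from concurrent.futures import ProcessPoolExecutor

def run_work_unit(params):
    """Stand-in for one independent simulation chunk, e.g. one set of
    nanotube oscillation parameters; real projects ship a science app
    plus input files to each volunteer machine and collect the results."""
    frequency, amplitude = params
    flow = amplitude * math.sin(frequency)  # placeholder for an expensive MD run
    return frequency, amplitude, flow

if __name__ == '__main__':
    # A parameter sweep decomposed into thousands of independent work units
    work_units = [(f * 0.1, a * 0.5) for f in range(100) for a in range(50)]
    # Locally this spreads the units over one machine's cores; a volunteer
    # grid farms the same independent units out to thousands of machines.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_work_unit, work_units))
    print(len(results), 'work units completed')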

The computer simulations were designed by Ming Ma, who graduated from Tsinghua University and is doing his postdoctoral research in Prof. Urbakh’s group at TAU. Ming catalyzed the international collaboration. “The students from Tsinghua are remarkable. The project represents the very positive cooperation between the two universities, which is taking place at XIN and because of XIN,” said Prof. Urbakh.

Other partners in this international project include researchers at the London Centre for Nanotechnology of University College London; the University of Geneva; the University of Sydney and Monash University in Australia; and Xi’an Jiaotong University in China. The researchers are currently in discussions with companies interested in harnessing the oscillation knowhow for various commercial projects.

Here’s a link to and a citation for the paper,

Water transport inside carbon nanotubes mediated by phonon-induced oscillating friction by Ming Ma, François Grey, Luming Shen, Michael Urbakh, Shuai Wu, Jefferson Zhe Liu, Yilun Liu, & Quanshui Zheng. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.134 Published online 06 July 2015

This paper is behind a paywall.

Final comment: I find it surprising that they used labour and computing power from 150,000 volunteers and didn’t offer open access to the paper. Perhaps the volunteers got their own copy? I certainly hope so.

Three citizen cyberscience projects: LHC@home 2.0, computing for clean water, and collaborating with UNOSAT for crisis response

I sometimes lose track of how many ‘years’ there are, such as the International Year of Chemistry, the Year of Science in BC, etc., but here’s one that’s new to me: the European Year of Volunteering.

CERN (the European Organization for Nuclear Research [the acronym derives from the French name Conseil européen pour la recherche nucléaire] and the world’s leading laboratory for particle physics) has just announced, as part of its support for volunteering, a new version of its volunteer computing project: LHC@home 2.0. From the August 8, 2011 news item on Science Daily,

This version allows volunteers to participate for the first time in simulating high-energy collisions of protons in CERN’s Large Hadron Collider (LHC). Thus, volunteers can now actively help physicists in the search for new fundamental particles that will provide insights into the origin of our Universe, by contributing spare computing power from their personal computers and laptops.

This means that volunteers at home can participate in the search for the Higgs boson particle, sometimes known as the ‘god’ particle or the ‘champagne bottle’ boson. (Despite rumours earlier this year, the Higgs boson has not yet materialized, as Jon Butterworth mentions in his May 11, 2011 post on the Guardian Science blogs. Note: Jon Butterworth is a physics professor at University College London and a member of the High Energy Physics group on the ATLAS experiment at CERN’s Large Hadron Collider.)

This latest iteration of the LHC@home project is just one of a series of projects and events being developed by the Citizen Cyberscience Centre (which itself is supported by CERN, by UNITAR [United Nations Institute for Training and Research], and by the University of Geneva) for the European Year of Volunteering.

Two other projects just announced by the Citizen Cyberscience Centre (from the Science Daily news item),

Other projects the Citizen Cyberscience Centre has initiated focus on promoting volunteer science in the developing world, for humanitarian purposes. For example, in collaboration with IBM’s philanthropic World Community Grid and Tsinghua University in Beijing, the Citizen Cyberscience Centre launched the Computing for Clean Water project. The project uses the supercomputer-like strength of World Community Grid to enable scientists to design efficient low-cost water filters for clean water.

In a separate project supported by HP, volunteers can help UNOSAT, the Operational Satellite Applications Programme of UNITAR, to improve damage assessment in developing regions affected by natural or human-made disasters, for humanitarian purposes.

More information about these projects is available in the August 8, 2011 news item on physorg.com,

As Sergio Bertolucci, Director of Research and Scientific Computing at CERN, emphasizes: “While LHC@home is a great opportunity to encourage more public involvement in science, the biggest benefits of citizen cyberscience are for researchers in developing regions who have limited resources for computing and manpower. Online volunteers can boost available research resources enormously at very low cost. This is a trend we are committed to promote through the Citizen Cyberscience Center”.

Leading international computer manufacturers such as IBM and HP have contributed their support and expertise to Citizen Cyberscience Center projects including UNOSAT [UNITAR’s Operational Satellite Applications Programme]. Using data from space agencies and satellite operators around the world, UNOSAT can produce maps for humanitarian applications such as damage assessment or monitoring deforestation. The project relies on ‘volunteer thinking’ where participants actively analyse imagery and their results are compared.

“From a development and humanitarian perspective, the potential of citizen-powered research is enormous,” says Francesco Pisano, Manager of UNOSAT. “Participating in the Citizen Cyberscience Center enables us to get new insights into the cutting edge of crowdsourcing technologies. There is no doubt that volunteers are playing an increasingly central role in dealing with crisis response, thanks to the Internet.”

Well, the current London riots are revealing other less salubrious uses of social media and the internet but I like to think that in the end, creative uses will prove more enticing than destructive uses.

ETA August 10, 2011: I found one more year, 2011 is the International Year of Forests.