Tag Archives: Institute of Photonic Sciences (ICFO)

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’, covering both an art/science event in Toronto and a Canadian government initiative, without mentioning the new technology needed to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial governments (Québec and Ontario) are prepared to invest in one of those necessary technologies: 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports on Canada’s 5G plans in suitably breathless (even in text-only) tones of excitement in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part for its potential to help hurdle some of the walls we are already seeing with current networks.

Virtual reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make the devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal-looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.
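For a sense of scale, here is a back-of-the-envelope estimate (in Python) of what a million sensors per square kilometre might generate; the per-sensor figures are my own assumptions, not anything from Benjamin's article.

```python
# Rough estimate of the aggregate traffic implied by "a million sensors
# per square kilometre". The per-sensor figures below (100-byte readings,
# once per second) are illustrative assumptions.
sensors_per_km2 = 1_000_000
bytes_per_reading = 100
readings_per_second = 1

bps = sensors_per_km2 * bytes_per_reading * readings_per_second * 8
print(f"{bps / 1e6:.0f} Mbps per square kilometre")          # 800 Mbps
print(f"{bps / 8 * 86_400 / 1e12:.1f} TB per day per km^2")  # ~8.6 TB
```

Even with these modest assumptions, that is terabytes of sensor data per square kilometre per day, which has to land somewhere.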

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.
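As an aside, the “two seconds” claim is easy to check with a little arithmetic; the movie size below is my assumption (a 5 GB high-resolution file), while the link speeds come from the article.

```python
# Checking The Economist's "two seconds" figure. Movie size (5 GB) is an
# assumption; the speeds are from the article (5G: at least 20 Gbps;
# 4G: "about half that at best").
movie_bits = 5 * 10**9 * 8          # 5 GB expressed in bits

for label, gbps in [("4G (~10 Gbps at best)", 10), ("5G (20 Gbps)", 20)]:
    seconds = movie_bits / (gbps * 10**9)
    print(f"{label}: {seconds:.0f} s")
# 5G: 40 Gbit / 20 Gbps = 2 s, matching the quoted figure
```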

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.
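The “network slicing” idea quoted above boils down to giving each class of device its own connectivity profile. Here is a toy sketch; the device classes and numbers are illustrative assumptions of mine, not anything from The Economist or a telecoms specification.

```python
# Toy sketch of "network slicing": each device class gets a bespoke
# connectivity profile. Classes and numbers are illustrative assumptions.
SLICE_PROFILES = {
    "smartphone":       {"bandwidth_mbps": 100.0, "max_latency_ms": 20},
    "iot_sensor":       {"bandwidth_mbps": 0.1,   "max_latency_ms": 500},
    "self_driving_car": {"bandwidth_mbps": 50.0,  "max_latency_ms": 1},
}

def assign_slice(device_class):
    """Return the connectivity profile for a device class."""
    return SLICE_PROFILES[device_class]

print(assign_slice("self_driving_car"))
```

The point is the shape of the idea: a self-driving car gets a low-latency slice, while a sensor bolted to a bus needs almost no bandwidth at all.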

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …

That may not be the only brake. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings about these long-promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview last year with Frank Koppens, the leader of the quantum nano-optoelectronics group at the Institute of Photonic Sciences (ICFO) just outside of Barcelona.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research,” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power, etc. It seems the South Koreans have found answers of some kind but it’s hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the ‘comments’. Thank you.

Machine learning software and quantum computers that think

A Sept. 14, 2017 news item on phys.org sets the stage for quantum machine learning by explaining a few basics first,

Language acquisition in young children is apparently connected with their ability to detect patterns. In their learning process, they search for patterns in the data set that help them identify and optimize grammar structures in order to properly acquire the language. Likewise, online translators use algorithms through machine learning techniques to optimize their translation engines to produce well-rounded and understandable outcomes. Even though many translations did not make much sense at all at the beginning, in these past years we have been able to see major improvements thanks to machine learning.

Machine learning techniques use mathematical algorithms and tools to search for patterns in data. These techniques have become powerful tools for many different applications, which can range from biomedical uses such as in cancer reconnaissance, in genetics and genomics, in autism monitoring and diagnosis and even plastic surgery, to pure applied physics, for studying the nature of materials, matter or even complex quantum systems.

Capable of adapting and changing when exposed to a new set of data, machine learning can identify patterns, often outperforming humans in accuracy. Although machine learning is a powerful tool, certain application domains remain out of reach due to complexity or other aspects that rule out the use of the predictions that learning algorithms provide.
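For readers who have not seen this “pattern finding” up close, here is a deliberately tiny sketch: a nearest-centroid classifier in plain Python. The data points and class names are invented purely for illustration.

```python
# A minimal example of "finding patterns in data": a nearest-centroid
# classifier. Each class is summarized by the average of its examples;
# new points are assigned to the closest average.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    # labeled: {class_name: [(x, y), ...]} -> one centroid per class
    return {cls: centroid(pts) for cls, pts in labeled.items()}

def predict(model, point):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda cls: dist(model[cls], point))

model = train({"short": [(1, 1), (2, 1)], "tall": [(8, 9), (9, 8)]})
print(predict(model, (8.5, 8.5)))   # tall
```

Real machine learning systems are vastly more elaborate, but the skeleton is the same: summarize the patterns in the training data, then match new data against them.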

Thus, in recent years, quantum machine learning has become a matter of interest because of its vast potential as a possible solution to these unresolvable challenges, and quantum computers appear to be the right tool for tackling them.

A Sept. 14, 2017 Institute of Photonic Sciences ([Catalan] Institut de Ciències Fotòniques; ICFO) press release, which originated the news item, goes on to detail a recently published overview of the state of quantum machine learning,

In a recent study, published in Nature, an international team of researchers comprising Jacob Biamonte from Skoltech/IQC, Peter Wittek from ICFO, Nicola Pancotti from MPQ, Patrick Rebentrost from MIT, Nathan Wiebe from Microsoft Research, and Seth Lloyd from MIT has reviewed the current status of classical machine learning and quantum machine learning. In their review, they have thoroughly addressed different scenarios dealing with classical and quantum machine learning, considering the possible combinations: the conventional method of using classical machine learning to analyse classical data, using quantum machine learning to analyse both classical and quantum data, and, finally, using classical machine learning to analyse quantum data.

Firstly, they set out to give an in-depth view of the status of current supervised and unsupervised learning protocols in classical machine learning by stating all applied methods. They introduce quantum machine learning and provide an extensive approach on how this technique could be used to analyse both classical and quantum data, emphasizing that quantum machines could accelerate processing timescales thanks to the use of quantum annealers and universal quantum computers. Quantum annealing technology has better scalability, but more limited use cases. For instance, the latest iteration of D-Wave’s [emphasis mine] superconducting chip integrates two thousand qubits, and it is used for solving certain hard optimization problems and for efficient sampling. On the other hand, universal (also called gate-based) quantum computers are harder to scale up, but they are able to perform arbitrary unitary operations on qubits by sequences of quantum logic gates. This resembles how digital computers can perform arbitrary logical operations on classical bits.
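The press release's point about gate-based machines performing "arbitrary unitary operations on qubits" can be illustrated with a toy simulation. This is a plain-Python sketch of one qubit and one gate, not how a real quantum computer is programmed.

```python
# Toy illustration of the gate-based model: one qubit as a two-component
# vector of amplitudes, one gate (the Hadamard) as a unitary matrix.
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],   # Hadamard gate
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    # matrix-vector product: the gate acts as a unitary on the state
    return [sum(gate[r][c] * state[c] for c in range(2)) for r in range(2)]

qubit = [1.0, 0.0]                    # start in |0>
qubit = apply(H, qubit)               # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in qubit]
print(probs)                          # measurement odds of ~0.5 each
```

Sequences of such gates compose into arbitrary unitaries, which is the sense in which gate-based machines resemble digital computers performing arbitrary logical operations on bits.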

However, they address the fact that controlling a quantum system is very complex and analyzing classical data with quantum resources is not as straightforward as one may think, mainly due to the challenge of building quantum interface devices that allow classical information to be encoded into a quantum mechanical form. Difficulties, such as the “input” or “output” problems appear to be the major technical challenge that needs to be overcome.

The ultimate goal is to find the most optimized method able to read, comprehend and obtain the best outcomes from a data set, be it classical or quantum. Quantum machine learning is definitely aimed at revolutionizing the field of computer science, not only because it will be able to control quantum computers and speed up information processing rates far beyond current classical velocities, but also because it is capable of carrying out innovative functions, such as quantum deep learning, that could not only recognize counter-intuitive patterns in data, invisible to both classical machine learning and the human eye, but also reproduce them.

As Peter Wittek [emphasis mine] finally states, “Writing this paper was quite a challenge: we had a committee of six co-authors with different ideas about what the field is, where it is now, and where it is going. We rewrote the paper from scratch three times. The final version could not have been completed without the dedication of our editor, to whom we are indebted.”

It was a bit of a surprise to see local (Vancouver, Canada) company D-Wave Systems mentioned, but I notice that one of the paper’s authors (Peter Wittek) is mentioned in a May 22, 2017 D-Wave news release announcing a new partnership to foster quantum machine learning,

Today [May 22, 2017] D-Wave Systems Inc., the leader in quantum computing systems and software, announced a new initiative with the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management. D-Wave will work with CDL, as a CDL Partner, to create a new track to foster startups focused on quantum machine learning. The new track will complement CDL’s successful existing track in machine learning. Applicants selected for the intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek [emphasis mine], author of Quantum Machine Learning: What Quantum Computing means to Data Mining, with instruction and technical support from D-Wave experts, access to a D-Wave 2000Q™ quantum computer, and the opportunity to use a D-Wave sampling service to enable machine learning computations and applications. D-Wave staff will be a part of the committee selecting up to 40 individuals for the program, which begins in September 2017.

For anyone interested in the paper, here’s a link to and a citation,

Quantum machine learning by Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, & Seth Lloyd. Nature 549, 195–202 (14 September 2017) doi:10.1038/nature23474 Published online 13 September 2017

This paper is behind a paywall.

Replicating brain’s neural networks with 3D nanoprinting

An announcement about European Union funding for a project to reproduce neural networks by 3D nanoprinting can be found in a June 10, 2016 news item on Nanowerk,

The MESO-BRAIN consortium has received a prestigious award of €3.3million in funding from the European Commission as part of its Future and Emerging Technology (FET) scheme. The project aims to develop three-dimensional (3D) human neural networks with specific biological architecture, and the inherent ability to interrogate the network’s brain-like activity both electrophysiologically and optically. It is expected that the MESO-BRAIN will facilitate a better understanding of human disease progression, neuronal growth and enable the development of large-scale human cell-based assays to test the modulatory effects of pharmacological and toxicological compounds on neural network activity. The use of more physiologically relevant human models will increase drug screening efficiency and reduce the need for animal testing.

A June 9, 2016 Institute of Photonic Sciences (ICFO) press release (also on EurekAlert), which originated the news item, provides more detail,

About the MESO-BRAIN project

The MESO-BRAIN project’s cornerstone will use human induced pluripotent stem cells (iPSCs) that have been differentiated into neurons upon a defined and reproducible 3D scaffold to support the development of human neural networks that emulate brain activity. The structure will be based on a brain cortical module and will be unique in that it will be designed and produced using nanoscale 3D-laser-printed structures incorporating nano-electrodes to enable downstream electrophysiological analysis of neural network function. Optical analysis will be conducted using cutting-edge light sheet-based, fast volumetric imaging technology to enable cellular resolution throughout the 3D network. The MESO-BRAIN project will allow for a comprehensive and detailed investigation of neural network development in health and disease.

Prof Edik Rafailov, Head of the MESO-BRAIN project (Aston University) said: “What we’re proposing to achieve with this project has, until recently, been the stuff of science fiction. Being able to extract and replicate neural networks from the brain through 3D nanoprinting promises to change this. The MESO-BRAIN project has the potential to revolutionise the way we are able to understand the onset and development of disease and discover treatments for those with dementia or brain injuries. We cannot wait to get started!”

The MESO-BRAIN project will launch in September 2016 and research will be conducted over three years.

About the MESO-BRAIN consortium

Each of the consortium partners has been chosen for the highly specific skills and knowledge they bring to this project. These include technologies and expertise in stem cells, photonics, physics, 3D nanoprinting, electrophysiology, molecular biology, imaging and commercialisation.

Aston University (UK): the Aston Institute of Photonic Technologies (School of Engineering and Applied Science) is one of the largest photonics groups in the UK and an internationally recognised research centre in the fields of lasers, fibre-optics, high-speed optical communications, and nonlinear and biomedical photonics. The Cell & Tissue Biomedical Research Group (Aston Research Centre for Healthy Ageing) combines collective expertise in genetic manipulation, tissue engineering and neuronal modelling with the electrophysiological and optical analysis of human iPSC-derived neural networks.

Axol Bioscience Ltd. (UK) was founded to fulfil the unmet demand for high quality, clinically relevant human iPSC-derived cells for use in biomedical research and drug discovery.

The Laser Zentrum Hannover (Germany) is a leading research organisation in the fields of laser development, material processing, laser medicine, and laser-based nanotechnologies.

The Neurophysics Group (Physics Department) at the University of Barcelona (Spain) are experts in combining experiments with theoretical and computational modelling to infer functional connectivity in neuronal circuits.

The Institute of Photonic Sciences (ICFO) (Spain) is a world-leading research centre in photonics with expertise in several microscopy techniques, including light sheet imaging.

KITE Innovation (UK) helps to bridge the gap between the academic and business sectors in supporting collaboration, enterprise, and knowledge-based business development.

For anyone curious about the FET funding scheme, there’s this from the press release,

Horizon 2020 aims to ensure Europe produces world-class science by removing barriers to innovation through funding programmes such as the FET. The FET (Open) funds forward-looking collaborations between advanced multidisciplinary science and cutting-edge engineering for radically new future technologies. The published success rate is below 1.4%, making it amongst the toughest in the Horizon 2020 suite of funding schemes. The MESO-BRAIN proposal scored a perfect 5/5.

You can find out more about the MESO-BRAIN project on its ICFO webpage.

They don’t say anything about it but I can’t help wondering if the scientists aren’t also considering the possibility of creating an artificial brain.

With over 150 partners from over 20 countries, the European Union’s Graphene Flagship research initiative unveils its work package devoted to biomedical technologies

An April 11, 2016 news item on Nanowerk announces the Graphene Flagship’s latest work package,

With a budget of €1 billion, the Graphene Flagship represents a new form of joint, coordinated research on an unprecedented scale, forming Europe’s biggest ever research initiative. It was launched in 2013 to bring together academic and industrial researchers to take graphene from the realm of academic laboratories into European society in the timeframe of 10 years. The initiative currently involves over 150 partners from more than 20 European countries. The Graphene Flagship, coordinated by Chalmers University of Technology (Sweden), is implemented around 15 scientific Work Packages on specific science and technology topics, such as fundamental science, materials, health and environment, energy, sensors, flexible electronics and spintronics.

Today [April 11, 2016], the Graphene Flagship announced in Barcelona the creation of a new Work Package devoted to Biomedical Technologies, one emerging application area for graphene and other 2D materials. This initiative is led by Professor Kostas Kostarelos, from the University of Manchester (United Kingdom), and ICREA Professor Jose Antonio Garrido, from the Catalan Institute of Nanoscience and Nanotechnology (ICN2, Spain). The Kick-off event, held in the Casa Convalescència of the Universitat Autònoma de Barcelona (UAB), is co-organised by ICN2 (ICREA Prof Jose Antonio Garrido), Centro Nacional de Microelectrónica (CNM-IMB-CSIC, CIBER-BBN; CSIC Tenured Scientist Dr Rosa Villa), and Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS; ICREA Prof Mavi Sánchez-Vives).

An April 11, 2016 ICN2 press release, which originated the news item, provides more detail about the Biomedical Technologies work package and other work packages,

The new Work Package will focus on the development of implants based on graphene and 2D-materials that have therapeutic functionalities for specific clinical outcomes, in disciplines such as neurology, ophthalmology and surgery. It will include research in three main areas: Materials Engineering; Implant Technology & Engineering; and Functionality and Therapeutic Efficacy. The objective is to explore novel implants with therapeutic capacity that will be further developed in the next phases of the Graphene Flagship.

The Materials Engineering area will be devoted to the production, characterisation, chemical modification and optimisation of graphene materials that will be adopted for the design of implants and therapeutic element technologies. Its results will be applied by the Implant Technology and Engineering area to the design of implant technologies. Several teams will work in parallel on retinal, cortical, and deep brain implants, as well as devices to be applied in the peripheral nervous system. Finally, the Functionality and Therapeutic Efficacy area activities will centre on the development of devices that, in addition to interfacing with the nervous system for recording and stimulation of electrical activity, also have therapeutic functionality.

Stimulation therapies will focus on the adoption of graphene materials in implants with stimulation capabilities in Parkinson’s, blindness and epilepsy disease models. On the other hand, biological therapies will focus on the development of graphene materials as transport devices of biological molecules (nucleic acids, protein fragments, peptides) for modulation of neurophysiological processes. Both approaches involve a transversal innovation environment that brings together the efforts of different Work Packages within the Graphene Flagship.

A leading role for Barcelona in Graphene and 2D-Materials

The kick-off meeting of the new Graphene Flagship Work Package takes place in Barcelona because of the strong involvement of local institutions and the high international profile of Catalonia in 2D-materials and biomedical research. Institutions such as the Catalan Institute of Nanoscience and Nanotechnology (ICN2) develop frontier research in a supportive environment which attracts talented researchers from abroad, such as ICREA Research Prof Jose Antonio Garrido, Group Leader of the ICN2 Advanced Electronic Materials and Devices Group and now also Deputy Leader of the Biomedical Technologies Work Package. Until summer 2015 he was leading a research group at the Technische Universität München (Germany).

Further Graphene Flagship events in Barcelona are planned; in May 2016 ICN2 will also host a meeting of the Spintronics Work Package. ICREA Prof Stephan Roche, Group Leader of the ICN2 Theoretical and Computational Nanoscience Group, is the deputy leader of this Work Package led by Prof Bart van Wees, from the University of Groningen (The Netherlands). Another Work Package, on optoelectronics, is led by Prof Frank Koppens from the Institute of Photonic Sciences (ICFO, Spain), with Prof Andrea Ferrari from the University of Cambridge (United Kingdom) as deputy. Thus a number of prominent research institutes in Barcelona are deeply involved in the coordination of this European research initiative.

Kostas Kostarelos, the leader of the Biomedical Technologies Graphene Flagship work package, has been mentioned here before in the context of his blog posts for The Guardian science blog network (see my Aug. 7, 2014 post for a link to his post on metaphors used in medicine).

Research into phase changes in solids and their control

A July 28, 2015 news item on ScienceDaily describes some practical reasons for research into phase changes, carried out at the Institute of Photonic Sciences (ICFO) in Spain in collaboration with the Fritz-Haber-Institut der Max-Planck-Gesellschaft,

Rewritable CDs, DVDs and Blu-Ray discs owe their existence to phase-change materials, those materials that change their internal order when heated and whose structures can be switched back and forth between their crystalline and amorphous phases. Phase-change materials have even more exciting applications on the horizon, but our limited ability to precisely control their phase changes is a hurdle to the development of new technology.

A July 28, 2015 ICFO news release (also on EurekAlert), which originated the news item, describes the problem and the researchers’ solution,

One of the most popular and useful phase-change materials is GST, which consists of germanium, antimony, and tellurium. This material is particularly useful because it alternates between its crystalline and amorphous phases more quickly than any other material yet studied. These phase changes result from changes in the bonds between atoms, which also modify the electronic and optical properties of GST as well as its lattice structure. Specifically, resonant bonds, in which electrons participate in several neighboring bonds, influence the material’s electro-optical properties, while covalent bonds, in which electrons are shared between two atoms, influence its lattice structure. Most techniques that use GST simultaneously change both the electro-optical and structural properties. This is actually a considerable drawback since in the process of repeating structural transitions, such as heating and cooling the material, the lifetime of any device based on this material is drastically reduced.

In a study recently published in Nature Materials, researchers from the ICFO groups led by Prof. Simon Wall and ICREA Prof. at ICFO Valerio Pruneri, in collaboration with the Fritz-Haber-Institut der Max-Planck-Gesellschaft, have demonstrated how the material and electro-optical properties of GST change over fractions of a trillionth of a second as the phase of the material changes. Laser light was successfully used to alter the bonds controlling the electro-optical properties without meaningfully altering the bonds controlling the lattice. This new configuration allowed the rapid, reversible changes in the electro-optical properties that are important in device applications without reducing the lifetime of the device by changing its lattice structure. Moreover, the change in the electro-optical properties of GST measured in this study is more than ten times greater than that previously achieved with the silicon materials used for the same purpose. This finding suggests that GST may be a good substitute for these commonly used silicon materials.

The results of this study may be expected to have far-reaching implications for the development of new technologies, including flexible displays, logic circuits, optical circuits, and universal memory for data storage. These results also indicate the potential of GST for other applications requiring materials with large changes in optical properties that can be achieved rapidly and with high precision.

Here’s a link to and a citation for the paper,

Time-domain separation of optical properties from structural transitions in resonantly bonded materials by Lutz Waldecker, Timothy A. Miller, Miquel Rudé, Roman Bertoni, Johann Osmond, Valerio Pruneri, Robert E. Simpson, Ralph Ernstorfer, & Simon Wall. Nature Materials (2015) doi:10.1038/nmat4359 Published online 27 July 2015

This paper is behind a paywall.