Tag Archives: European Union

Human lung enzyme can degrade graphene

Caption: A human lung enzyme can biodegrade graphene. Credit: Fotolia Courtesy: Graphene Flagship

The European Commission's big research programme, the Graphene Flagship, has announced new work with widespread implications if graphene is to be used in biomedical implants. From an August 23, 2018 news item on ScienceDaily,

Myeloperoxidase — an enzyme naturally found in our lungs — can biodegrade pristine graphene, according to the latest discovery of Graphene Flagship partners at CNRS, University of Strasbourg (France), Karolinska Institute (Sweden) and University of Castilla-La Mancha (Spain). Among other projects, the Graphene Flagship designs graphene-based flexible biomedical electronic devices that will interface with the human body. Such applications require graphene to be biodegradable, so it can be expelled from the body.

An August 23, 2018 Graphene Flagship press release (mildly edited version on EurekAlert), which originated the news item, provides more detail,

To test how graphene behaves within the body, researchers analysed how it was broken down with the addition of a common human enzyme – myeloperoxidase or MPO. If a foreign body or bacteria is detected, neutrophils surround it and secrete MPO, thereby destroying the threat. Previous work by Graphene Flagship partners found that MPO could successfully biodegrade graphene oxide.

However, the structure of non-functionalized graphene was thought to be more resistant to degradation. To test this, the team looked at the effects of MPO ex vivo on two graphene forms: single- and few-layer.

Alberto Bianco, researcher at Graphene Flagship Partner CNRS, explains: “We used two forms of graphene, single- and few-layer, prepared by two different methods in water. They were then taken and put in contact with myeloperoxidase in the presence of hydrogen peroxide. This peroxidase was able to degrade and oxidise them. This was really unexpected, because we thought that non-functionalized graphene was more resistant than graphene oxide.”

Rajendra Kurapati, first author on the study and researcher at Graphene Flagship Partner CNRS, remarks how “the results emphasize that highly dispersible graphene could be degraded in the body by the action of neutrophils. This would open the new avenue for developing graphene-based materials.”

With successful ex-vivo testing, in-vivo testing is the next stage. Bengt Fadeel, professor at Graphene Flagship Partner Karolinska Institute believes that “understanding whether graphene is biodegradable or not is important for biomedical and other applications of this material. The fact that cells of the immune system are capable of handling graphene is very promising.”

Prof. Maurizio Prato, the Graphene Flagship leader for its Health and Environment Work Package said that “the enzymatic degradation of graphene is a very important topic, because in principle, graphene dispersed in the atmosphere could produce some harm. Instead, if there are microorganisms able to degrade graphene and related materials, the persistence of these materials in our environment will be strongly decreased. These types of studies are needed.” “What is also needed is to investigate the nature of degradation products,” adds Prato. “Once graphene is digested by enzymes, it could produce harmful derivatives. We need to know the structure of these derivatives and study their impact on health and environment,” he concludes.

Prof. Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and chair of its management panel added: “The report of a successful avenue for graphene biodegradation is a very important step forward to ensure the safe use of this material in applications. The Graphene Flagship has put the investigation of the health and environment effects of graphene at the centre of its programme since the start. These results strengthen our innovation and technology roadmap.”

Here’s a link to and a citation for the paper,

Degradation of Single‐Layer and Few‐Layer Graphene by Neutrophil Myeloperoxidase by Dr. Rajendra Kurapati, Dr. Sourav P. Mukherjee, Dr. Cristina Martín, Dr. George Bepete, Prof. Ester Vázquez, Dr. Alain Pénicaud, Prof. Dr. Bengt Fadeel, Dr. Alberto Bianco. Angewandte Chemie https://doi.org/10.1002/anie.201806906 First published: 13 July 2018

This paper is behind a paywall.

Artificial intelligence (AI) brings together International Telecommunications Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunications Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there's a follow-up announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes in his July 25, 2018 essay (written for The Conversation) on phys.org the situation where chemical testing is concerned,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially at a time when the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and even more so if many toxic substances are close and harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
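The read-across idea described above can be sketched in a few lines: represent each chemical as a fingerprint of structural features, measure similarity between fingerprints, and judge an untested chemical by the known toxicity of its nearest neighbours. Everything here — the feature names, the toy data, the choice of Jaccard similarity and majority voting — is invented for illustration; the actual RASAR works with roughly 10 million structures and 74 characteristics, not this toy set.

```python
# Hypothetical sketch of read-across prediction: an untested chemical inherits
# the toxicity verdict of its most structurally similar known neighbours.
# All feature names and data below are invented for illustration only.

def jaccard(a: set, b: set) -> float:
    """Similarity between two fingerprints (sets of structural features)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_toxic(query: set, known: list, k: int = 3) -> bool:
    """Majority vote over the k known chemicals most similar to the query."""
    neighbours = sorted(known, key=lambda kv: jaccard(query, kv[0]), reverse=True)[:k]
    return sum(is_toxic for _, is_toxic in neighbours) > k / 2

# Toy "chemical universe": fingerprints paired with toxicity labels.
known_chemicals = [
    ({"aromatic_ring", "nitro_group"}, True),
    ({"aromatic_ring", "nitro_group", "chloro"}, True),
    ({"hydroxyl", "alkyl_chain"}, False),
    ({"hydroxyl", "alkyl_chain", "ester"}, False),
    ({"aromatic_ring", "amine"}, True),
]

query = {"aromatic_ring", "nitro_group", "amine"}
print(predict_toxic(query, known_chemicals))  # neighbours are mostly toxic → True
```

In the real system the "fingerprints" are far richer and the similarity map is precomputed on a supercomputer, but the logic of the final prediction step is this same neighbour lookup.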

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.
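The validation logic described in the two paragraphs above — predict for chemicals whose toxicity is already known, then count what fraction of the truly toxic ones the model flags — is a standard sensitivity calculation. Here is a minimal sketch; the numbers are invented, while the essay reports roughly 89 percent for RASAR versus roughly 70 percent reproducibility for the corresponding animal tests.

```python
# Minimal sketch of the evaluation step: sensitivity is the share of
# actually toxic chemicals that the model predicted as toxic.
# The example labels below are made up for illustration.

def sensitivity(predicted: list, actual: list) -> float:
    """Fraction of actually toxic chemicals that were flagged as toxic."""
    hits = sum(p and a for p, a in zip(predicted, actual))
    positives = sum(actual)
    return hits / positives

actual    = [True, True, True, True, False, False]
predicted = [True, True, True, False, False, True]
print(sensitivity(predicted, actual))  # 3 of the 4 toxic chemicals found → 0.75
```

Note that a false alarm (the last prediction) does not lower sensitivity; it would show up in a complementary specificity measure, which is why validation committees test against chemicals with unknown outcomes rather than a single headline number.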

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorisation and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers had not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Preserving art canvases (think Van Gogh, Picasso, Vermeer, and others) with nanomaterials

It has to be disconcerting to realize that your precious paintings are deteriorating day by day. I covered the problem in a June 22, 2017 posting titled ‘Art masterpieces are turning into soap‘.

This piece of research has made a winding trek through the online science world. First it was featured in an April 20, 2017 American Chemical Society news release on EurekAlert,

A good art dealer can really clean up in today’s market, but not when some weird chemistry wreaks havoc on masterpieces [emphasis mine]. Art conservators started to notice microscopic pockmarks forming on the surfaces of treasured oil paintings that cause the images to look hazy. It turns out the marks are eruptions of paint caused, weirdly, by soap that forms via chemical reactions. Since you have no time to watch paint dry, we explain how paintings from Rembrandts to O’Keeffes are threatened by their own compositions — and we don’t mean the imagery.

Here’s the video,


Now, for the latest: canvases are deteriorating too. A May 23, 2018 news item on Nanowerk announces the latest research on the ‘canvas issue’ (Note: A link has been removed),

Paintings by Vincent van Gogh, Pablo Picasso and Johannes Vermeer have been delighting art lovers for years. But it turns out that these works of art might be their own worst enemy — the canvases they were painted on can deteriorate over time.

In an effort to combat this aging process, one group is reporting in ACS Applied Nano Materials (“Combined Nanocellulose/Nanosilica Approach for Multiscale Consolidation of Painting Canvases”) that nanomaterials can provide multiple layers of reinforcement.

A May 23, 2018 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item,  expands on the theme,

One of the most important parts of a painting is the canvas, which is usually made from cellulose-based fibers. Over time, the canvas ages, resulting in discoloration, wrinkles, tears and moisture retention, all greatly affecting the artwork. To combat aging, painting conservators currently place a layer of adhesive and a lining on the back of a painting, but this treatment is invasive and difficult to reverse. In previous work, Romain Bordes and colleagues from Chalmers University of Technology, Sweden, investigated nanocellulose as a new way to strengthen painting canvases on their surfaces. In addition, together with Krzysztof Kolman, they showed that silica nanoparticles can strengthen individual paper and cotton fibers. So, they next wanted to combine these two methods to see if they could further strengthen aging canvas.

The team combined polyelectrolyte-treated silica nanoparticles (SNP) with cellulose nanofibrils (CNF) for a one-step treatment. The researchers first treated canvases with acid and oxidizing conditions to simulate aging. When they applied the SNP-CNF treatment, the SNP penetrated and strengthened the individual fibers of the canvas, making it stiffer compared to untreated materials. The CNF strengthened the surface of the canvas and increased the canvas’s flexibility. The team notes that this treatment could be a good alternative to conventional methods.

Here’s a link to and a citation for the paper,

Combined Nanocellulose/Nanosilica Approach for Multiscale Consolidation of Painting Canvases by Krzysztof Kolman, Oleksandr Nechyporchuk, Michael Persson, Krister Holmberg, and Romain Bordes. ACS Appl. Nano Mater., Article ASAP DOI: 10.1021/acsanm.8b00262 Publication Date (Web): April 26, 2018

Copyright © 2018 American Chemical Society

This image illustrating the researchers’ solution accompanies the article,

Courtesy: ACS

The European Union’s NanoRestART project was mentioned here before it had put together this introductory video, which provides a good overview of the research,

For more details about the problems with contemporary and modern art, there’s my April 4, 2016 posting when the NanoRestART project was first mentioned here and there’s my Jan. 10, 2017 posting which details research into 3D-printed art and some of the questions raised by the use of 3D printing and other emerging technologies in the field of contemporary art.

Nanomaterials: the SUN (Sustainable Nanotechnologies) project sunsets, finally, and the Belgians amend their registry

Health, safety, and risks have been an important discussion where nanotechnology is concerned. The sense of urgency and concern has died down somewhat but scientists and regulators continue with their risk analysis.

SUN (Sustainable Nanotechnologies) project

Back in a December 7, 2016 posting I mentioned the Sustainable Nanotechnologies (SUN) project and its imminent demise in 2017. A February 26, 2018 news item on Nanowerk announces a tool developed by SUN scientists and intended for current use,

Over 100 scientists from 25 research institutions and industries in 12 different European Countries, coordinated by the group of professor Antonio Marcomini from Ca’ Foscari University of Venice, have completed one of the first attempts to understand the risks nanomaterials carry throughout their life-cycle, starting from their fabrication and ending in being discarded or recycled.

From nanoscale silver to titanium dioxide for air purification, the use of nanomaterials of high commercial relevance has clear benefits and attracts investments, but it also raises concerns. ‘Nano’ sized materials (a nanometre is one millionth of a millimetre) could pose environmental and health risks under certain conditions. The uncertainties and insufficient scientific knowledge could slow down innovation and economic growth.

How do we evaluate these risks and take the appropriate preventative measures? The answer comes from the results of the Sustainable Nanotechnologies Project (SUN), which has been given 13 million euros of funding from the European Commission.

Courtesy: SUN Project

A February 26, 2018 Ca’ Foscari University of Venice press release describes some of the SUN project’s last initiatives, including SUNDS (https://sunds.gd/), the ‘Decision support system for risk management of engineered nanomaterials and nano-enabled products’,

After 3 years of research in laboratories and in contact with industrial partners, the scientists have processed, tested and made available an online platform (https://sunds.gd/) that supports industries and control and regulating institutions in evaluating potential risks that may arise for the production teams, for the consumers and for the environment.

The goal is to understand the extent to which these risks are sustainable, especially in relation to the traditional materials available, and to take the appropriate preventative measures. Additionally, this tool allows us to compare risk reduction costs with the benefits generated by this innovative product, while measuring its possible environmental impact.

Danail Hristozov, the project’s principal investigator from the Department of Environmental Sciences, Informatics and Statistics at Ca’ Foscari, commented: “The great amount of work done for developing and testing the methods and tools for evaluating and managing the risks posed by nanomaterials has not only generated an enormous amount of new scientific data and knowledge on the potential dangers of different types of nanomaterials, but has also resulted in key discoveries on the interactions between nanomaterials and biological or ecological systems and on their diffusion, on how they work and on their possible adverse consequences. These results, disseminated in over 140 research papers, have been immediately taken up by industries and regulators and will inevitably have great impact on developing safer and more sustainable nanotechnologies and on regulating their risks”.

The SUN project has also composed a guide for the safest products and processes, published on its website: www.sun.fp7.eu.

Studied Materials

Scientists have focused their research on specific materials and their use, in order to analyse the entire life cycle of the products. Two of the best-known were chosen: nanoscale silver, which is used in textiles, and multi-walled carbon nanotubes, which are used in marine coatings and automotive parts. Less known materials that are of great relevance for their use were also included: car pigments and silica anticaking agents used by the food industry.

Lastly, SUN included nanomaterials of high commercial value which are extremely innovative: nitrogen-doped titanium dioxide for air purification is a new product enabled by SUN and exploited by the large colour ceramics company Colorobbia. The copper-based coating and impregnation for wood protection has been re-oriented based on SUN safety assessment, and the tungsten carbide-based coatings for paper mills are marketed based on SUN results.

You can find out more about the SUN project here and about ‘SUNDS; Decision support system for risk management of engineered nanomaterials and nano-enabled products’ here.

Belgium’s nanomaterials register

A February 26, 2018 Nanowerk Spotlight article by Anthony Bochon has a rather acerbic take on Belgium’s efforts to regulate nanomaterials with a national register,

In Alice’s Adventures in Wonderland, the White Rabbit keeps saying “Oh dear! Oh dear! I shall be too late.” The same could have been said by the Belgian federal government when it adopted the Royal Decree of 22nd December 2017, published in the annexes of the Belgian Official Gazette of 15th January 2018 (“Amending Royal Decree”), whose main provisions retroactively enter into force on 31st December 2016. …

The Belgian federal government unnecessarily delayed the adoption of the Amending Royal Decree until December 2017 and published it only mid-January 2018. It creates legal uncertainty where it should have been avoided. The Belgian nanomaterials register (…) symbolizes a Belgian exceptionalism in the small world of national nanomaterials registers. Unlike France, Denmark and Sweden, Belgium decided from the very beginning to have three different deadlines for substances, mixtures and articles.

In an already fragmented regulatory landscape (with 4 EU Member States having their own national nanomaterials register and 24 EU Member States which do not have such registration requirements), the confusion around the deadline for the registration of mixtures in Belgium does not allow the addressees of the legal obligations to comply with them.

Even though failure to properly register substances – and now mixtures – within the Belgian nanomaterials register exposes the addressees of the obligation to criminal penalties, the function of the register remains purely informational.

The data collected through the registration was meant to be used to identify the presence of manufactured nanomaterials on the Belgian market, with the implicit objective of regulating the exposure of workers and consumers to these nanomaterials. The absence of entry into force of the provisions relating to the registration of articles is therefore incoherent and should question the relevance of the whole Belgian registration system.

Taking into account the author’s snarkiness, Belgium seems to have adopted (knowingly or unknowingly) a chaotic approach to registering nanomaterials. For anyone interested in the Belgian ‘nanoregister’, there’s this September 3, 2014 posting featuring another Anthony Bochon article on the topic and, for anyone interested in Bochon’s book, there’s this August 15, 2014 posting (Note: his book, ‘Nanotechnology Law & Guidelines: A Practical Guide for the Nanotechnology Industries in Europe’, seems to have been updated [there is a copyright date of 2019 in the bibliographic information on the publisher’s website]).

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic particles like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be the realm of science fiction, but now we’ve figured it out; it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then, Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money, and the university named its Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details on Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society. As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research. That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor-General award-winning building in Waterloo. Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility. A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5. They are also co-founders of BlackBerry (formerly Research In Motion Limited). Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million. Since that time Doug has donated a total of $30 million to Perimeter Institute. Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million. As suggested by its name, WIN is devoted to research in the area of nanotechnology. It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries. Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries. Local experimentalists are very much playing a leading role in this regard. It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …
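As an aside, the “quantum annealing” principle Semeniuk mentions is easier to picture next to its classical cousin, simulated annealing, applied to the same problem class D-Wave machines accept: QUBO (quadratic unconstrained binary optimization). The sketch below is my own toy illustration with a made-up three-variable instance; it runs a thermal, not quantum, algorithm and is in no way D-Wave’s actual hardware, software, or API.

```python
import math
import random

def anneal_qubo(Q, n, steps=20000, t_start=5.0, t_end=0.01, seed=1):
    """Minimize x^T Q x over binary vectors x via classical simulated annealing.

    Q maps (i, j) index pairs to coefficients. Quantum annealers target the
    same QUBO formulation, but by evolving a physical system of qubits rather
    than by this thermal Metropolis sampling.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(vec):
        return sum(c * vec[i] * vec[j] for (i, j), c in Q.items())

    e = energy(x)
    best, best_e = x[:], e
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)
        x[i] ^= 1                      # propose a single-bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                  # accept the move
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                  # reject: undo the flip
    return best, best_e

# A made-up 3-variable QUBO whose minimum is x = (1, 1, 0) with energy -3.
Q = {(0, 0): -2, (1, 1): -2, (2, 2): 1, (0, 1): 1, (1, 2): 2}
solution, value = anneal_qubo(Q, n=3)
```

The annealer starts “hot,” accepting many energy-raising flips, and cools until it mostly settles into low-energy states; the quantum version exploits tunnelling rather than thermal jumps to escape local minima.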

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined report, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then-new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight article/review of an earlier paper covering the topic of nanotechnology and neuroscience).

Interestingly, the European Union (EU) had announced its two €1B research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about it), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial, and there is high interest in its potential use in biomedical technology, thus partially connecting the two EU projects.

In any event, Berger is highlighting a nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper, which updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implies that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push this initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials Volume 27, Issue 39, October 19, 2017 DOI: 10.1002/adfm.201700489 Version of Record online: 14 AUG 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and in the UK, with a sole researcher based in the US.

Limitless energy and the International Thermonuclear Experimental Reactor (ITER)

Over 30 years in the dreaming, the International Thermonuclear Experimental Reactor (ITER) is now said to be halfway through construction. A December 6, 2017 ITER press release (received via email) makes the joyful announcement,

WORLD’S MOST COMPLEX MACHINE IS 50 PERCENT COMPLETED
ITER is proving that fusion is the future source of clean, abundant, safe and economic energy

The International Thermonuclear Experimental Reactor (ITER), a project to prove that fusion power can be produced on a commercial scale and is sustainable, is now 50 percent built to initial operation. Fusion is the same energy source from the Sun that gives the Earth its light and warmth.

ITER will use hydrogen fusion, controlled by superconducting magnets, to produce massive heat energy. In the commercial machines that will follow, this heat will drive turbines to produce electricity with these positive benefits:

* Fusion energy is carbon-free and environmentally sustainable, yet much more powerful than fossil fuels. A pineapple-sized amount of hydrogen offers as much fusion energy as 10,000 tons of coal.

* ITER uses two forms of hydrogen fuel: deuterium, which is easily extracted from seawater; and tritium, which is bred from lithium inside the fusion reactor. The supply of fusion fuel for industry and megacities is abundant, enough for millions of years.

* When the fusion reaction is disrupted, the reactor simply shuts down-safely and without external assistance. Tiny amounts of fuel are used, about 2-3 grams at a time; so there is no physical possibility of a meltdown accident.

* Building and operating a fusion power plant is targeted to be comparable to the cost of a fossil fuel or nuclear fission plant. But unlike today’s nuclear plants, a fusion plant will not have the costs of high-level radioactive waste disposal. And unlike fossil fuel plants, fusion will not have the environmental cost of releasing CO2 and other pollutants.

ITER is the most complex science project in human history. The hydrogen plasma will be heated to 150 million degrees Celsius, ten times hotter than the core of the Sun, to enable the fusion reaction. The process happens in a donut-shaped reactor, called a tokamak(*), which is surrounded by giant magnets that confine and circulate the superheated, ionized plasma, away from the metal walls. The superconducting magnets must be cooled to minus 269°C, as cold as interstellar space.

The ITER facility is being built in Southern France by a scientific partnership of 35 countries. ITER’s specialized components, roughly 10 million parts in total, are being manufactured in industrial facilities all over the world. They are subsequently shipped to the ITER worksite, where they must be assembled, piece-by-piece, into the final machine.

Each of the seven ITER members-the European Union, China, India, Japan, Korea, Russia, and the United States-is fabricating a significant portion of the machine. This adds to ITER’s complexity.

In a message dispatched on December 1 [2017] to top-level officials in ITER member governments, the ITER project reported that it had completed 50 percent of the “total construction work scope through First Plasma” (**). First Plasma, scheduled for December 2025, will be the first stage of operation for ITER as a functional machine.

“The stakes are very high for ITER,” writes Bernard Bigot, Ph.D., Director-General of ITER. “When we prove that fusion is a viable energy source, it will eventually replace burning fossil fuels, which are non-renewable and non-sustainable. Fusion will be complementary with wind, solar, and other renewable energies.

“ITER’s success has demanded extraordinary project management, systems engineering, and almost perfect integration of our work.

“Our design has taken advantage of the best expertise of every member’s scientific and industrial base. No country could do this alone. We are all learning from each other, for the world’s mutual benefit.”

The ITER 50 percent milestone is getting significant attention.

“We are fortunate that ITER and fusion has had the support of world leaders, historically and currently,” says Director-General Bigot. “The concept of the ITER project was conceived at the 1985 Geneva Summit between Ronald Reagan and Mikhail Gorbachev. When the ITER Agreement was signed in 2006, it was strongly supported by leaders such as French President Jacques Chirac, U.S. President George W. Bush, and Indian Prime Minister Manmohan Singh.

“More recently, President Macron and U.S. President Donald Trump exchanged letters about ITER after their meeting this past July. One month earlier, President Xi Jinping of China hosted Russian President Vladimir Putin and other world leaders in a showcase featuring ITER and fusion power at the World EXPO in Astana, Kazakhstan.

“We know that other leaders have been similarly involved behind the scenes. It is clear that each ITER member understands the value and importance of this project.”

Why use this complex manufacturing arrangement?

More than 80 percent of the cost of ITER, about $22 billion or EUR 18 billion, is contributed in the form of components manufactured by the partners. Many of these massive components of the ITER machine must be precisely fitted; for example, 17-meter-high magnets with less than a millimeter of tolerance. Each component must be ready on time to fit into the Master Schedule for machine assembly.

Members asked for this deal for three reasons. First, it means that most of the ITER costs paid by any member are actually paid to that member’s companies; the funding stays in-country. Second, the companies working on ITER build new industrial expertise in major fields-such as electromagnetics, cryogenics, robotics, and materials science. Third, this new expertise leads to innovation and spin-offs in other fields.

For example, expertise gained working on ITER’s superconducting magnets is now being used to map the human brain more precisely than ever before.

The European Union is paying 45 percent of the cost; China, India, Japan, Korea, Russia, and the United States each contribute 9 percent equally. All members share in ITER’s technology; they receive equal access to the intellectual property and innovation that comes from building ITER.

When will commercial fusion plants be ready?

ITER scientists predict that fusion plants will start to come on line as soon as 2040. The exact timing, according to fusion experts, will depend on the level of public urgency and political will that translates to financial investment.

How much power will they provide?

The ITER tokamak will produce 500 megawatts of thermal power. This size is suitable for studying a “burning” or largely self-heating plasma, a state of matter that has never been produced in a controlled environment on Earth. In a burning plasma, most of the plasma heating comes from the fusion reaction itself. Studying the fusion science and technology at ITER’s scale will enable optimization of the plants that follow.

A commercial fusion plant will be designed with a slightly larger plasma chamber, for 10-15 times more electrical power. A 2,000-megawatt fusion electricity plant, for example, would supply 2 million homes.

How much would a fusion plant cost and how many will be needed?

The initial capital cost of a 2,000-megawatt fusion plant will be in the range of $10 billion. These capital costs will be offset by extremely low operating costs, negligible fuel costs, and infrequent component replacement costs over the 60-year-plus life of the plant. Capital costs will decrease with large-scale deployment of fusion plants.

At current electricity usage rates, one fusion plant would be more than enough to power a city the size of Washington, D.C. The entire D.C. metropolitan area could be powered with four fusion plants, with zero carbon emissions.

“If fusion power becomes universal, the use of electricity could be expanded greatly, to reduce the greenhouse gas emissions from transportation, buildings and industry,” predicts Dr. Bigot. “Providing clean, abundant, safe, economic energy will be a miracle for our planet.”

*     *     *

FOOTNOTES:

* “Tokamak” is a word of Russian origin meaning a toroidal or donut-shaped magnetic chamber. Tokamaks have been built and operated for the past six decades. They are today’s most advanced fusion device design.

** “Total construction work scope,” as used in ITER’s project performance metrics, includes design, component manufacturing, building construction, shipping and delivery, assembly, and installation.
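The press release’s headline fuel claims can be sanity-checked with some quick arithmetic. The inputs below (17.6 MeV released per deuterium-tritium fusion reaction, roughly 29 MJ per kilogram of coal, and a pineapple-sized lump of fuel taken as about 1 kg) are standard textbook values I’m supplying, not figures from the release itself:

```python
# Back-of-envelope check: how much coal matches ~1 kg of D-T fusion fuel?
MEV_TO_J = 1.602e-13                 # joules per MeV
E_PER_REACTION = 17.6 * MEV_TO_J     # each D-T fusion releases ~17.6 MeV
AVOGADRO = 6.022e23
FUEL_MOLAR_MASS_KG = 5.0e-3          # one D (2 g/mol) + one T (3 g/mol) pair

reactions_per_kg = AVOGADRO / FUEL_MOLAR_MASS_KG
fusion_j_per_kg = reactions_per_kg * E_PER_REACTION   # ~3.4e14 J/kg

COAL_J_PER_KG = 29e6                 # ~29 MJ/kg, typical bituminous coal
coal_tonnes_equiv = fusion_j_per_kg / (COAL_J_PER_KG * 1000)
print(f"{coal_tonnes_equiv:,.0f} tonnes of coal per kg of D-T fuel")

# The release also says a 2,000 MW plant supplies 2 million homes,
# which implies an average draw of about 1 kW per home:
kw_per_home = 2_000 * 1_000 / 2_000_000   # MW -> kW, divided by homes
print(f"{kw_per_home:.1f} kW average per home")
```

The first calculation lands on the order of 10,000 tonnes of coal per kilogram of fuel, so the “pineapple versus 10,000 tons of coal” comparison is in the right ballpark, and the homes figure works out to a plausible 1 kW average household draw.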

It is an extraordinary project on many levels as Henry Fountain notes in a March 27, 2017 article for the New York Times (Note: Links have been removed),

At a dusty construction site here amid the limestone ridges of Provence, workers scurry around immense slabs of concrete arranged in a ring like a modern-day Stonehenge.

It looks like the beginnings of a large commercial power plant, but it is not. The project, called ITER, is an enormous, and enormously complex and costly, physics experiment. But if it succeeds, it could determine the power plants of the future and make an invaluable contribution to reducing planet-warming emissions.

ITER, short for International Thermonuclear Experimental Reactor (and pronounced EAT-er), is being built to test a long-held dream: that nuclear fusion, the atomic reaction that takes place in the sun and in hydrogen bombs, can be controlled to generate power.

ITER will produce heat, not electricity. But if it works — if it produces more energy than it consumes, which smaller fusion experiments so far have not been able to do — it could lead to plants that generate electricity without the climate-affecting carbon emissions of fossil-fuel plants or most of the hazards of existing nuclear reactors that split atoms rather than join them.

Success, however, has always seemed just a few decades away for ITER. The project has progressed in fits and starts for years, plagued by design and management problems that have led to long delays and ballooning costs.

ITER is moving ahead now, with a director-general, Bernard Bigot, who took over two years ago after an independent analysis that was highly critical of the project. Dr. Bigot, who previously ran France’s atomic energy agency, has earned high marks for resolving management problems and developing a realistic schedule based more on physics and engineering and less on politics.

The site here is now studded with tower cranes as crews work on the concrete structures that will support and surround the heart of the experiment, a doughnut-shaped chamber called a tokamak. This is where the fusion reactions will take place, within a plasma, a roiling cloud of ionized atoms so hot that it can be contained only by extremely strong magnetic fields.

Here’s a rendering of the proposed reactor,

Source: ITER Organization

It seems the folks at the New York Times decided to remove the notes which help make sense of this image. However, it does get the idea across.

If I read the article rightly, the official cost in March 2017 was around 22 B Euros and more will likely be needed. You can read Fountain’s article for more information about fusion and ITER or go to the ITER website.

I could have sworn a local (Vancouver area) company called General Fusion was involved in the ITER project but I can’t track down any sources for confirmation. The sole connection I could find is in a documentary about fusion technology,

Here’s a little context for the film from a July 4, 2017 General Fusion news release (Note: A link has been removed),

A new documentary featuring General Fusion has captured the exciting progress in fusion across the public and private sectors.

Let There Be Light made its international premiere at the South By Southwest (SXSW) music and film festival in March [2017] to critical acclaim. The film was quickly purchased by Amazon Video, where it will be available for more than 70 million users to stream.

Let There Be Light follows scientists at General Fusion, ITER and Lawrenceville Plasma Physics in their pursuit of a clean, safe and abundant source of energy to power the world.

The feature length documentary has screened internationally across Europe and North America. Most recently it was shown at the Hot Docs film festival in Toronto, where General Fusion founder and Chief Scientist Dr. Michel Laberge joined fellow fusion physicist Dr. Mark Henderson from ITER at a series of Q&A panels with the filmmakers.

Laberge and Henderson were also interviewed by the popular CBC radio science show Quirks and Quarks, discussing different approaches to fusion, its potential benefits, and the challenges it faces.

It has yet to be confirmed when the film will be released for streaming; check Amazon Video for details.

You can find out more about General Fusion here.

Brief final comment

ITER is a breathtaking effort but if you’ve read about other large-scale projects such as building a railway across the Canadian Rocky Mountains, establishing telecommunications in an astonishing number of countries around the world, getting someone to the moon, eliminating smallpox, building the pyramids, etc., the turbulence seems to be standard operating procedure, both for the successes I’ve described and for the failures we’ve forgotten. Where ITER will finally rest on the continuum between success and failure is yet to be determined, but the problems experienced so far are not necessarily a predictor.

I wish the engineers, scientists, visionaries, and others great success with finding better ways to produce energy.

Europe’s cathedrals get a ‘lift’ with nanoparticles

That headline is a teensy bit laboured but I couldn’t resist the levels of wordplay available to me. They’re working on a cathedral close to the Leaning Tower of Pisa in this video about the latest in stone preservation in Europe.

I have covered the topic of preserving stone monuments before (most recently in my Oct. 21, 2014 posting). The action in this field seems to be taking place mostly in Europe, specifically Italy, although other countries are also quite involved.

Finally, getting to the European Commission’s latest stone monument preservation project, Nano-Cathedral, a Sept. 26, 2017 news item on Nanowerk announces the latest developments,

Just a few meters from Pisa’s famous Leaning Tower, restorers are defying scorching temperatures to bring back shine to the city’s Cathedral.

Ordinary restoration techniques like lasers are being used on much of the stonework that dates back to the 11th century. But a brand new technique is also being used: a new material made of innovative nanoparticles. The aim is to consolidate the inner structure of the stones. It’s being applied mainly on marble.

A March 7, 2017 item on the Euro News website, which originated the Nanowerk news item, provides more detail,

“Marble has very low porosity, which means we have to use nanometric particles in order to go deep inside the stone, to ensure that the treatment is efficient while still allowing the stone to breathe,” explains Roberto Cela, civil engineer at Opera Della Primaziale Pisana.

The material developed by the European research team includes calcium carbonate, formed from calcium oxide, water and carbon dioxide.

The nano-particles penetrate the stone, cementing its decaying structure.

“It is important that these particles have the same chemical nature as the stones that are being treated, so that the physical and mechanical processes that occur over time don’t lead to the break-up of the stones,” says Dario Paolucci, chemist at the University of Pisa.

Vienna’s St Stephen’s is another of the five cathedrals where the new restoration materials are being tested.

The first challenge for researchers is to determine the mechanical characteristics of the cathedral’s stones. Since there are few original samples to work on, they had to figure out a way of “ageing” samples of stones of similar nature to those originally used.

“We tried different things: we tried freeze storage, we tried salts and acids, and we decided to go for thermal ageing,” explains Matea Ban, material scientist at the University of Technology in Vienna. “So what happens is that we heat the stone at certain temperatures. Minerals inside then expand in certain directions, and when they expand they build up stresses to neighbouring minerals and then they crack, and we need those cracks in order to consolidate them.”

Consolidating materials were then applied on a variety of limestones, sandstones and marble – a selection of the different types of stones that were used to build cathedrals around Europe.

What researchers are looking for are very specific properties.

“First of all, the consolidating material has to be well absorbed by the stone,” says petrologist Johannes Weber of the University of Applied Arts in Vienna. “Then, as it evaporates, it has to settle properly within the stone structure. It should not shrink too much. All materials shrink when drying, including consolidating materials. They should adhere to the particles of the stone but shouldn’t completely obstruct its pores.”

Further tests are underway in cathedrals across Europe in the hope of better protecting our invaluable cultural heritage.
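An aside from me, not a detail supplied by the Nano-Cathedral team: the description of a calcium-carbonate consolidant formed from calcium oxide, water and carbon dioxide matches the ‘nanolime’ chemistry commonly used in stone conservation, where calcium hydroxide nanoparticles are carried into the pores and slowly carbonate into the same mineral as the marble itself,

```latex
% Slaking: calcium oxide reacts with water to give calcium hydroxide,
% the nanoparticle actually carried into the stone
\[ \mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} \]
% Carbonation: atmospheric carbon dioxide converts the hydroxide,
% inside the pores, into calcium carbonate that binds the decayed matrix
\[ \mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O} \]
```

Because the end product is chemically identical to the host stone, the repair weathers the same way the stone does, which is the compatibility Paolucci describes.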

There’s a bit more detail about Nano-Cathedral on the Opera della Primaziale Pisana (OPA) website (from their Nano-Cathedral project page),

With the meeting of June 3 this year, the Nano Cathedral project kicked off, supported by the European Union under the Horizon 2020 programme in the field of nanotechnology applied to cultural heritage, with a fund of about 6.5 million euros.

A total of six monumental buildings will spend three years under the eyes and hands of petrographers, geologists, chemists and restorers from the institutes belonging to the Consortium: five cathedrals have been selected to represent the cultural diversity within Europe from the perspective of developing shared values and a transnational identity, along with a contemporary monumental building entirely clad in Carrara marble, the Opera House of Oslo.

Purpose: the testing of nanomaterials for the conservation of marble and the outer surfaces of our ‘cathedrals’.
The field of investigation for checking degradation and testing new consolidating and protective products is the Cathedral of Pisa, together with the Cathedrals of Cologne, Vienna, Ghent and Vitoria.
The case studies were selected by cross-checking requirements for historical and architectural value, for the different types of construction materials – marble, limestone and sandstone – and for the locations of the six monumental buildings across Europe’s climates.

The Cathedral of Pisa is the southernmost, fully within a Mediterranean climate and therefore subject to degradation very different from that recorded under the weather conditions of the Scandinavian peninsula; all the intermediate climate phases are covered by Ghent, Vitoria, Cologne and Vienna.

At the conclusion of the three-year project, once the in situ and laboratory analyses are completed and all the experiments have been tested on each identified portion of each monumental building, a detailed intervention protocol will be defined. It will identify the mineralogical and petrographic characteristics of the stone materials and their degradation, and assess the causes and mechanisms of the associated alteration, including interactions with environmental pollution. Then we will be able to identify the most appropriate methods of restoration and to test nanotechnology products for the consolidation and protection of the different stone materials.

In 2018 we hope to have new materials to protect and safeguard the ‘skin’ of our historic buildings and monuments for a long time.

Back to my headline and the second piece of wordplay, ‘lift’ as in ‘skin lift’ in that last sentence.

I realize this is a bit off topic but it’s worth taking a look at OPA’s home page,

Gabriele D’Annunzio effectively condenses the wonder and admiration that strike whoever visits the Duomo Square of Pisa.

The Opera della Primaziale Pisana (OPA) is a non-profit organisation which was established in order to oversee the first works for the construction of the monuments in the Piazza del Duomo, subject to its own charter which includes the protection, promotion and enhancement of its heritage, in order to pass the religious and artistic meaning onto future generations.

«L’Ardea roteò nel cielo di Cristo, sul prato dei Miracoli.»
Gabriele d’Annunzio in Forse che sì forse che no (1910)

If you go to the home page, you can buy tickets to visit the monuments surrounding the square and there are other notices including one for a competition (it’s too late to apply but the details are interesting) to construct four stained glass windows for the Pisa cathedral.

A jellyfish chat at Café Scientifique Vancouver’s November 28, 2017 get-together

Café Scientifique Vancouver sent me an announcement (via email) about their upcoming event,

We are pleased to announce our next café which will happen on TUESDAY,
NOVEMBER 28TH at 7:30PM in the back room of YAGGER'S DOWNTOWN (433 W
Pender).

JELLYFISH – FRIEND, FOE, OR FOOD?

Did you know that in addition to stinging swimmers, jellyfish also cause
extensive damage to fisheries and coastal power plants? As threats such
as overfishing, pollution, and climate change alter the marine
environment, recent media reports are proclaiming that jellyfish are
taking over the oceans. Should we hail our new jellyfish overlords or
do we need to examine the evidence behind these claims? Join Café
Scientifique on Nov. 28, 2017 to learn everything you ever wanted to
know about jellyfish, and find out if jelly burgers are coming soon to a
menu near you.

Our speaker for the evening will be DR. LUCAS BROTZ, a Postdoctoral
Research Fellow with the Sea Around Us at UBC’s Institute for the
Oceans and Fisheries. Lucas has been studying jellyfish for more than a
decade, and has been called “Canada’s foremost jellyfish
researcher” by CBC Nature of Things host Dr. David Suzuki. Lucas has
participated in numerous international scientific collaborations, and
his research has been featured in more than 100 media outlets including
Nature News, The Washington Post, and The New York Times. He recently
received the Michael A. Bigg award for highly significant student
research as part of the Coastal Ocean Awards at the Vancouver Aquarium.

We hope to see you there!

You can find out more about Lucas Brotz here and about Sea Around Us here.

For anyone who’s curious about the jellyfish ‘issue’, there’s a November 8, 2017 Norwegian University of Science and Technology press release on AlphaGalileo or on EurekAlert, which provides insight into the problems and the possibilities,

Jellyfish could be a resource for producing microplastic filters, fertilizer or fish feed. A new 6 million euro project called GoJelly, funded by the EU and coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, and including partners at the Norwegian University of Science and Technology (NTNU) and SINTEF [headquartered in Trondheim, Norway, SINTEF is the largest independent research organisation in Scandinavia; more about SINTEF in its Wikipedia entry], hopes to turn jellyfish from a nuisance into a useful product.

Global climate change and the human impact on marine ecosystems have led to dramatic decreases in the number of fish in the ocean. They have also had an unforeseen side effect: because overfishing decreases the numbers of jellyfish competitors, jellyfish blooms are on the rise.

The GoJelly project, coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, would like to transform problematic jellyfish into a resource that can be used to produce microplastic filters, fertilizer or fish feed. The EU has just approved funding of EUR 6 million over 4 years to support the project through its Horizon 2020 programme.

Rising water temperatures, ocean acidification and overfishing seem to favour jellyfish blooms. More and more often, they appear in huge numbers that have already destroyed entire fish farms on European coasts and blocked cooling systems of power stations near the coast. A number of jellyfish species are poisonous, while some tropical species are even among the most toxic animals on earth.

“In Europe alone, the imported American comb jelly has a biomass of one billion tons. While we tend to ignore the jellyfish, there must be other solutions,” says Jamileh Javidpour of GEOMAR, initiator and coordinator of the GoJelly project, which is a consortium of 15 scientific institutions from eight countries led by the GEOMAR Helmholtz Centre for Ocean Research in Kiel.

The project will first entail exploring the life cycle of a number of jellyfish species. A lack of knowledge about life cycles makes it almost impossible to predict when and why a large jellyfish bloom will occur. “This is what we want to change so that large jellyfish swarms can be caught before they reach the coasts,” says Javidpour.

At the same time, the project partners will also try to answer the question of what to do with jellyfish once they have been caught. One idea is to use the jellyfish to battle another, man-made threat.

“Studies have shown that mucus of jellyfish can bind microplastic. Therefore, we want to test whether biofilters can be produced from jellyfish. These biofilters could then be used in sewage treatment plants or in factories where microplastic is produced,” the GoJelly researchers say.

Jellyfish can also be used as fertilizers for agriculture or as aquaculture feed. “Fish in fish farms are currently fed with captured wild fish, which does not reduce the problem of overfishing, but increases it. Jellyfish as feed would be much more sustainable and would protect natural fish stocks,” says the GoJelly team.

Another option is using jellyfish as food for humans. “In some cultures, jellyfish are already on the menu. As long as the end product is no longer slimy, it could also gain greater general acceptance,” said Javidpour. Last but not least, jellyfish contain collagen, a substance very much sought after in the cosmetics industry.

Project partners from the Norwegian University of Science and Technology, led by Nicole Aberle-Malzahn, and SINTEF Ocean, led by Rachel Tiller, will analyse how abiotic (hydrography, temperature), biotic (abundance, biomass, ecology, reproduction) and biochemical parameters (stoichiometry, food quality) affect the initiation of jellyfish blooms.

Based on a comprehensive analysis of triggering mechanisms, origin of seed populations and ecological modelling, the researchers hope to be able to make more reliable predictions on jellyfish bloom formation of specific taxa in the GoJelly target areas. This knowledge will allow sustainable harvesting of jellyfish communities from various Northern and Southern European populations.

This harvest will provide a marine biomass of unknown potential that researchers at SINTEF Ocean, among others, will explore for possible ways to use the material.

A team from SINTEF Ocean’s strategic program Clean Ocean will also work with European colleagues on developing a filter from the mucus of the jellyfish that will catch microplastics from household products (which have their source in fleece sweaters, breakdown of plastic products or from cosmetics, for example) and prevent these from entering the marine ecosystem.

Finally, SINTEF Ocean will examine the socio-ecological system and games, where they will explore the potentials of an emerging international management regime for a global effort to mitigate the negative effects of microplastics in the oceans.

“Jellyfish can be used for many purposes. We see this as an opportunity to use the potential of the huge biomass drifting right in front of our front door,” Javidpour said.

You can find out more about GoJelly on their Twitter account.

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017 so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2 000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’, which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…
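The press release doesn’t describe the underlying protocol, but the tamper-detection idea can be illustrated with a toy simulation of BB84, the best-known QKD scheme. This is purely my sketch of the principle (real QKD uses polarized photons, error correction, and privacy amplification, none of which appear here),

```python
import random

def bb84_sift(n_bits=20000, eavesdrop=False, seed=1):
    """Toy BB84 sketch: return the error rate observed in the sifted key.

    Illustrative only -- classical random numbers stand in for photons.
    """
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        a_basis = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        sent_bit, sent_basis = bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)  # Eve measures in a random basis
            if e_basis != sent_basis:    # wrong basis randomizes the bit
                sent_bit = rng.randint(0, 1)
            sent_basis = e_basis         # Eve resends in her own basis
        b_basis = rng.randint(0, 1)      # Bob's random measurement basis
        if b_basis == sent_basis:
            received = sent_bit
        else:                            # basis mismatch randomizes again
            received = rng.randint(0, 1)
        if b_basis == a_basis:           # sifting: keep matching bases only
            sifted += 1
            if received != bit:
                errors += 1
    return errors / sifted

print(f"no eavesdropper:       {bb84_sift():.3f}")
print(f"intercept-and-resend:  {bb84_sift(eavesdrop=True):.3f}")
```

Without an eavesdropper the sifted key matches perfectly; an intercept-and-resend attack corrupts roughly a quarter of the sifted bits, and that disturbance is exactly what alerts both users to tampering, as the press release describes.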

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article in Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing and in a video and a graphic.
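To get a feel for why direct fiber transmission tops out at a few hundred kilometres, as Popkin notes, here’s a rough back-of-envelope sketch. The 0.2 dB/km figure is the commonly cited attenuation for telecom fiber at 1550 nm, my assumption rather than a number from the article,

```python
# Standard telecom fiber attenuates light by roughly 0.2 dB per km
# at 1550 nm (a commonly cited figure; actual loss varies by fiber).
ATTENUATION_DB_PER_KM = 0.2

def survival_probability(distance_km):
    """Fraction of photons expected to survive a direct fiber span."""
    return 10 ** (-ATTENUATION_DB_PER_KM * distance_km / 10)

for d in (100, 300, 500, 1200):
    print(f"{d:>5} km: {survival_probability(d):.0e}")
```

At 100 km about one photon in a hundred survives; at 1,200 km, the satellite’s record-setting distance, direct fiber would deliver only about one in 10²⁴, which is why satellites (and, eventually, quantum repeaters) are so attractive.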

Russell Brandom writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.