Tag Archives: European Union

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact that sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal, where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes of being just as successful as their AI brethren, as reported in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be the realm of science fiction, but now we’ve figured it out; it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

In his article, Quantum Supremacy, Ivan Semeniuk ignores any quantum research effort not located in either Waterloo, Ontario, or metro Vancouver, British Columbia, describing instead a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as spaceship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state-of-the-art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing, based on so-called gate-model architecture, was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
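A word about that ‘quantum annealing’ approach: those Sudoku and seating-plan demos are optimization problems, where a machine hunts for the assignment of variables with the lowest ‘cost’. Here’s a toy classical simulated-annealing sketch in Python (my own illustration of the problem shape; it has nothing to do with D-Wave’s actual hardware or software, and the ‘wedding seating’ cost function is invented for the example),

import math, random

def anneal(cost, n, steps=20000, t0=2.0):
    """Classical simulated annealing over n binary variables: always
    accept downhill moves, accept uphill moves with a probability
    that shrinks as the 'temperature' t cools."""
    x = [random.randint(0, 1) for _ in range(n)]
    current = cost(x)
    best, best_cost = x[:], current
    for step in range(steps):
        t = max(t0 * (1 - step / steps), 1e-3)  # linear cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                               # propose flipping one bit
        proposed = cost(x)
        if proposed <= current or random.random() < math.exp((current - proposed) / t):
            current = proposed                  # accept the move
            if current < best_cost:
                best, best_cost = x[:], current
        else:
            x[i] ^= 1                           # reject: undo the flip
    return best, best_cost

# Toy 'wedding seating': x[i] says which of two tables guest i sits at;
# each feuding pair seated together adds 1 to the cost.
feuds = [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5)]
seating, clashes = anneal(lambda x: sum(x[a] == x[b] for a, b in feuds), n=6)
print(seating, clashes)  # e.g. [0, 1, 0, 1, 0, 1] 0

A quantum annealer attacks the same kind of cost landscape but, at least in principle, exploits quantum tunnelling between states rather than thermal hops.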

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined report, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight article/review of an earlier paper covering the topic of nanotechnology and neuroscience).

Interestingly, the European Union (EU) had announced its two €1-billion research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about it), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial and there is high interest in its potential use in biomedical technology, thus partially connecting both EU projects.

In any event, Berger is highlighting a nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper, which updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implicates that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push such an initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials, Volume 27, Issue 39, October 19, 2017. DOI: 10.1002/adfm.201700489. Version of Record online: 14 AUG 2017.

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and the UK, with a sole researcher based in the US.

Limitless energy and the International Thermonuclear Experimental Reactor (ITER)

Over 30 years in the dreaming, the International Thermonuclear Experimental Reactor (ITER) is now said to be halfway to completing construction. A December 6, 2017 ITER press release (received via email) makes the joyful announcement,

WORLD’S MOST COMPLEX MACHINE IS 50 PERCENT COMPLETED
ITER is proving that fusion is the future source of clean, abundant, safe and economic energy

The International Thermonuclear Experimental Reactor (ITER), a project to prove that fusion power can be produced on a commercial scale and is sustainable, is now 50 percent built to initial operation. Fusion is the same energy source from the Sun that gives the Earth its light and warmth.

ITER will use hydrogen fusion, controlled by superconducting magnets, to produce massive heat energy. In the commercial machines that will follow, this heat will drive turbines to produce electricity with these positive benefits:

* Fusion energy is carbon-free and environmentally sustainable, yet much more powerful than fossil fuels. A pineapple-sized amount of hydrogen offers as much fusion energy as 10,000 tons of coal.

* ITER uses two forms of hydrogen fuel: deuterium, which is easily extracted from seawater; and tritium, which is bred from lithium inside the fusion reactor. The supply of fusion fuel for industry and megacities is abundant, enough for millions of years.

* When the fusion reaction is disrupted, the reactor simply shuts down, safely and without external assistance. Tiny amounts of fuel are used, about 2-3 grams at a time; so there is no physical possibility of a meltdown accident.

* Building and operating a fusion power plant is targeted to be comparable to the cost of a fossil fuel or nuclear fission plant. But unlike today’s nuclear plants, a fusion plant will not have the costs of high-level radioactive waste disposal. And unlike fossil fuel plants, fusion will not have the environmental cost of releasing CO2 and other pollutants.

ITER is the most complex science project in human history. The hydrogen plasma will be heated to 150 million degrees Celsius, ten times hotter than the core of the Sun, to enable the fusion reaction. The process happens in a donut-shaped reactor, called a tokamak(*), which is surrounded by giant magnets that confine and circulate the superheated, ionized plasma, away from the metal walls. The superconducting magnets must be cooled to minus 269°C, as cold as interstellar space.

The ITER facility is being built in Southern France by a scientific partnership of 35 countries. ITER’s specialized components, roughly 10 million parts in total, are being manufactured in industrial facilities all over the world. They are subsequently shipped to the ITER worksite, where they must be assembled, piece-by-piece, into the final machine.

Each of the seven ITER members (the European Union, China, India, Japan, Korea, Russia, and the United States) is fabricating a significant portion of the machine. This adds to ITER’s complexity.

In a message dispatched on December 1 [2017] to top-level officials in ITER member governments, the ITER project reported that it had completed 50 percent of the “total construction work scope through First Plasma” (**). First Plasma, scheduled for December 2025, will be the first stage of operation for ITER as a functional machine.

“The stakes are very high for ITER,” writes Bernard Bigot, Ph.D., Director-General of ITER. “When we prove that fusion is a viable energy source, it will eventually replace burning fossil fuels, which are non-renewable and non-sustainable. Fusion will be complementary with wind, solar, and other renewable energies.

“ITER’s success has demanded extraordinary project management, systems engineering, and almost perfect integration of our work.

“Our design has taken advantage of the best expertise of every member’s scientific and industrial base. No country could do this alone. We are all learning from each other, for the world’s mutual benefit.”

The ITER 50 percent milestone is getting significant attention.

“We are fortunate that ITER and fusion has had the support of world leaders, historically and currently,” says Director-General Bigot. “The concept of the ITER project was conceived at the 1985 Geneva Summit between Ronald Reagan and Mikhail Gorbachev. When the ITER Agreement was signed in 2006, it was strongly supported by leaders such as French President Jacques Chirac, U.S. President George W. Bush, and Indian Prime Minister Manmohan Singh.

“More recently, President Macron and U.S. President Donald Trump exchanged letters about ITER after their meeting this past July. One month earlier, President Xi Jinping of China hosted Russian President Vladimir Putin and other world leaders in a showcase featuring ITER and fusion power at the World EXPO in Astana, Kazakhstan.

“We know that other leaders have been similarly involved behind the scenes. It is clear that each ITER member understands the value and importance of this project.”

Why use this complex manufacturing arrangement?

More than 80 percent of the cost of ITER, about $22 billion or EUR 18 billion, is contributed in the form of components manufactured by the partners. Many of these massive components of the ITER machine must be precisely fitted; for example, 17-meter-high magnets with less than a millimeter of tolerance. Each component must be ready on time to fit into the Master Schedule for machine assembly.

Members asked for this deal for three reasons. First, it means that most of the ITER costs paid by any member are actually paid to that member’s companies; the funding stays in-country. Second, the companies working on ITER build new industrial expertise in major fields, such as electromagnetics, cryogenics, robotics, and materials science. Third, this new expertise leads to innovation and spin-offs in other fields.

For example, expertise gained working on ITER’s superconducting magnets is now being used to map the human brain more precisely than ever before.

The European Union is paying 45 percent of the cost; China, India, Japan, Korea, Russia, and the United States each contribute 9 percent equally. All members share in ITER’s technology; they receive equal access to the intellectual property and innovation that comes from building ITER.

When will commercial fusion plants be ready?

ITER scientists predict that fusion plants will start to come on line as soon as 2040. The exact timing, according to fusion experts, will depend on the level of public urgency and political will that translates to financial investment.

How much power will they provide?

The ITER tokamak will produce 500 megawatts of thermal power. This size is suitable for studying a “burning” or largely self-heating plasma, a state of matter that has never been produced in a controlled environment on Earth. In a burning plasma, most of the plasma heating comes from the fusion reaction itself. Studying the fusion science and technology at ITER’s scale will enable optimization of the plants that follow.

A commercial fusion plant will be designed with a slightly larger plasma chamber, for 10-15 times more electrical power. A 2,000-megawatt fusion electricity plant, for example, would supply 2 million homes.

How much would a fusion plant cost and how many will be needed?

The initial capital cost of a 2,000-megawatt fusion plant will be in the range of $10 billion. These capital costs will be offset by extremely low operating costs, negligible fuel costs, and infrequent component replacement costs over the 60-year-plus life of the plant. Capital costs will decrease with large-scale deployment of fusion plants.

At current electricity usage rates, one fusion plant would be more than enough to power a city the size of Washington, D.C. The entire D.C. metropolitan area could be powered with four fusion plants, with zero carbon emissions.

“If fusion power becomes universal, the use of electricity could be expanded greatly, to reduce the greenhouse gas emissions from transportation, buildings and industry,” predicts Dr. Bigot. “Providing clean, abundant, safe, economic energy will be a miracle for our planet.”

*     *     *

FOOTNOTES:

* “Tokamak” is a word of Russian origin meaning a toroidal or donut-shaped magnetic chamber. Tokamaks have been built and operated for the past six decades. They are today’s most advanced fusion device design.

** “Total construction work scope,” as used in ITER’s project performance metrics, includes design, component manufacturing, building construction, shipping and delivery, assembly, and installation.
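Two of the press release’s numbers are easy to sanity-check with back-of-the-envelope arithmetic (the estimates below are my own, not ITER’s). Deuterium-tritium fusion releases about 17.6 MeV per reaction from roughly five atomic mass units of fuel, while coal yields about 30 MJ/kg:

\[
E_{\text{fusion}} \approx \frac{17.6 \times 10^{6} \times 1.6 \times 10^{-19}\,\text{J}}{5 \times 1.66 \times 10^{-27}\,\text{kg}} \approx 3.4 \times 10^{14}\,\text{J/kg},
\qquad
E_{\text{coal}} \approx 3 \times 10^{7}\,\text{J/kg}
\]

That’s a ratio of roughly \(10^{7}\), so about a kilogram of hydrogen fuel stands in for 10,000 tons (\(10^{7}\) kg) of coal; ‘pineapple-sized’ seems fair given the precision of the metaphor. The two-million-homes claim checks out the same way:

\[
\frac{2000\,\text{MW}}{2 \times 10^{6}\,\text{homes}} = 1\,\text{kW per home},
\]

which is close to the average (not peak) electrical draw of a typical household.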

It is an extraordinary project on many levels as Henry Fountain notes in a March 27, 2017 article for the New York Times (Note: Links have been removed),

At a dusty construction site here amid the limestone ridges of Provence, workers scurry around immense slabs of concrete arranged in a ring like a modern-day Stonehenge.

It looks like the beginnings of a large commercial power plant, but it is not. The project, called ITER, is an enormous, and enormously complex and costly, physics experiment. But if it succeeds, it could determine the power plants of the future and make an invaluable contribution to reducing planet-warming emissions.

ITER, short for International Thermonuclear Experimental Reactor (and pronounced EAT-er), is being built to test a long-held dream: that nuclear fusion, the atomic reaction that takes place in the sun and in hydrogen bombs, can be controlled to generate power.

ITER will produce heat, not electricity. But if it works — if it produces more energy than it consumes, which smaller fusion experiments so far have not been able to do — it could lead to plants that generate electricity without the climate-affecting carbon emissions of fossil-fuel plants or most of the hazards of existing nuclear reactors that split atoms rather than join them.

Success, however, has always seemed just a few decades away for ITER. The project has progressed in fits and starts for years, plagued by design and management problems that have led to long delays and ballooning costs.

ITER is moving ahead now, with a director-general, Bernard Bigot, who took over two years ago after an independent analysis that was highly critical of the project. Dr. Bigot, who previously ran France’s atomic energy agency, has earned high marks for resolving management problems and developing a realistic schedule based more on physics and engineering and less on politics.

The site here is now studded with tower cranes as crews work on the concrete structures that will support and surround the heart of the experiment, a doughnut-shaped chamber called a tokamak. This is where the fusion reactions will take place, within a plasma, a roiling cloud of ionized atoms so hot that it can be contained only by extremely strong magnetic fields.

Here’s a rendering of the proposed reactor,

Source: ITER Organization

It seems the folks at the New York Times decided to remove the notes which help make sense of this image. However, it does get the idea across.

If I read the article rightly, the official cost in March 2017 was around €22 billion and more will likely be needed. You can read Fountain’s article for more information about fusion and ITER or go to the ITER website.

I could have sworn a local (Vancouver area) company called General Fusion was involved in the ITER project but I can’t track down any sources for confirmation. The sole connection I could find is in a documentary about fusion technology,

Here’s a little context for the film from a July 4, 2017 General Fusion news release (Note: A link has been removed),

A new documentary featuring General Fusion has captured the exciting progress in fusion across the public and private sectors.

Let There Be Light made its international premiere at the South By Southwest (SXSW) music and film festival in March [2017] to critical acclaim. The film was quickly purchased by Amazon Video, where it will be available for more than 70 million users to stream.

Let There Be Light follows scientists at General Fusion, ITER and Lawrenceville Plasma Physics in their pursuit of a clean, safe and abundant source of energy to power the world.

The feature length documentary has screened internationally across Europe and North America. Most recently it was shown at the Hot Docs film festival in Toronto, where General Fusion founder and Chief Scientist Dr. Michel Laberge joined fellow fusion physicist Dr. Mark Henderson from ITER at a series of Q&A panels with the filmmakers.

Laberge and Henderson were also interviewed by the popular CBC radio science show Quirks and Quarks, discussing different approaches to fusion, its potential benefits, and the challenges it faces.

It is yet to be confirmed when the film will be released for streaming; check Amazon Video for details.

You can find out more about General Fusion here.

Brief final comment

ITER is a breathtaking effort but if you’ve read about other large scale projects such as building a railway across the Canadian Rocky Mountains, establishing telecommunications in an astonishing number of countries around the world, getting someone to the moon, eliminating smallpox, building the pyramids, etc., this mix of grand ambition and serious trouble seems standard operating procedure, both for the successes I’ve described and for the failures we’ve forgotten. Where ITER will finally rest on the continuum between success and failure is yet to be determined but the problems experienced so far are not necessarily a predictor.

I wish the engineers, scientists, visionaries, and others great success with finding better ways to produce energy.

Europe’s cathedrals get a ‘lift’ with nanoparticles

That headline is a teensy bit laboured but I couldn’t resist the levels of wordplay available to me. They’re working on a cathedral close to the Leaning Tower of Pisa in this video about the latest in stone preservation in Europe.

I have covered the topic of preserving stone monuments before (most recently in my Oct. 21, 2014 posting). The action in this field seems to be taking place mostly in Europe, specifically Italy, although other countries are also quite involved.

Finally, getting to the European Commission’s latest stone monument preservation project, Nano-Cathedral, a Sept. 26, 2017 news item on Nanowerk announces the latest developments,

Just a few meters from Pisa’s famous Leaning Tower, restorers are defying scorching temperatures to bring back shine to the city’s Cathedral.

Ordinary restoration techniques like lasers are being used on much of the stonework that dates back to the 11th century. But a brand new technique is also being used: a new material made of innovative nanoparticles. The aim is to consolidate the inner structure of the stones. It’s being applied mainly on marble.

A March 7, 2017 item on the Euro News website, which originated the Nanowerk news item, provides more detail,

“Marble has very low porosity, which means we have to use nanometric particles in order to go deep inside the stone, to ensure that the treatment is both efficient while still allowing the stone to breathe,” explains Roberto Cela, civil engineer at Opera Della Primaziale Pisana.

The material developed by the European research team includes calcium carbonate, which is a mix of calcium oxide, water and carbon dioxide.

The nano-particles penetrate the stone, cementing its decaying structure.

“It is important that these particles have the same chemical nature as the stones that are being treated, so that the physical and mechanical processes that occur over time don’t lead to the break-up of the stones,” says Dario Paolucci, chemist at the University of Pisa.

Vienna’s St Stephen’s is another of the five cathedrals where the new restoration materials are being tested.

The first challenge for researchers is to determine the mechanical characteristics of the cathedral’s stones. Since there are few original samples to work on, they had to figure out a way of “ageing” samples of stones of similar nature to those originally used.

“We tried different things: we tried freeze storage, we tried salts and acids, and we decided to go for thermal ageing,” explains Matea Ban, material scientist at the University of Technology in Vienna. “So what happens is that we heat the stone at certain temperatures. Minerals inside then expand in certain directions, and when they expand they build up stresses to neighbouring minerals and then they crack, and we need those cracks in order to consolidate them.”

Consolidating materials were then applied on a variety of limestones, sandstones and marble – a selection of the different types of stones that were used to build cathedrals around Europe.

What researchers are looking for are very specific properties.

“First of all, the consolidating material has to be well absorbed by the stone,” says petrologist Johannes Weber of the University of Applied Arts in Vienna. “Then, as it evaporates, it has to settle properly within the stone structure. It should not shrink too much. All materials shrink when drying, including consolidating materials. They should adhere to the particles of the stone but shouldn’t completely obstruct its pores.”

Further tests are underway in cathedrals across Europe in the hope of better protecting our invaluable cultural heritage.
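As for that ‘calcium carbonate, which is a mix of calcium oxide, water and carbon dioxide’ description, my reading (an assumption on my part; neither article spells out the formulation) is that it’s the classic lime cycle behind ‘nanolime’ consolidants: calcium oxide is slaked to calcium hydroxide nanoparticles, which are carried deep into the stone and then carbonate in air back into calcium carbonate, the same mineral as the marble being treated,

\[
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2},
\qquad
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
\]

That chemistry would square with Paolucci’s point about the consolidant having “the same chemical nature as the stones that are being treated.”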

There’s a bit more detail about Nano-Cathedral on the Opera della Primaziale Pisana (OPA) website (from their Nano-Cathedral project page),

With the meeting of June 3 this year the Nano Cathedral project kicked off, supported by the European Union under Horizon 2020, in the field of nanotechnology applied to cultural heritage, with funding of about 6.5 million euros.

A total of six monumental buildings will spend three years under the eyes and hands of petrographers, geologists, chemists and restorers from the institutes belonging to the Consortium: five cathedrals have been selected to represent the cultural diversity within Europe from the perspective of developing shared values and a transnational identity, plus a contemporary monumental building entirely clad in Carrara marble, the Opera House of Oslo.

Purpose: the testing of nanomaterials for the conservation of marble and the outer surfaces of our ‘cathedrals’.
The field of investigation for checking degradation and testing new consolidating and protective products comprises the Cathedral of Pisa together with the cathedrals of Cologne, Vienna, Ghent and Vitoria.
For the selection of case studies we cross-checked requirements for historical and architectural value as well as for the different types of construction materials – marble, limestone and sandstone – and the distribution of the six monumental buildings across European climates.

The Cathedral of Pisa is the most southerly, fully within a Mediterranean climate and therefore subject to degradation very different from that recorded under the weather conditions of the Scandinavian peninsula; all the intermediate climates are covered by Ghent, Vitoria, Cologne and Vienna.

At the conclusion of the three-year project, once the analyses in situ and in the laboratory are completed and all the experiments have been tested on each identified portion of each monumental building, an intervention protocol will be defined in detail in order to identify the mineralogical and petrographic characteristics of stone materials and of their degradation, and to assess the causes and mechanisms of the associated alteration, including interactions with environmental pollution. Then we will be able to identify the most appropriate method of restoration and testing of nanotechnology products for the consolidation and protection of different stone materials.

In 2018 we hope to have new materials to protect and safeguard the ‘skin’ of our historic buildings and monuments for a long time.

Back to my headline and the second piece of wordplay, ‘lift’ as in ‘skin lift’ in that last sentence.

I realize this is a bit off topic but it’s worth taking a look at OPA’s home page,

Gabriele D’Annunzio effectively condenses the wonder and admiration that catch whoever visits the Duomo Square of Pisa.

The Opera della Primaziale Pisana (OPA) is a non-profit organisation which was established in order to oversee the first works for the construction of the monuments in the Piazza del Duomo, subject to its own charter which includes the protection, promotion and enhancement of its heritage, in order to pass the religious and artistic meaning onto future generations.

«L’Ardea roteò nel cielo di Cristo, sul prato dei Miracoli.» (my rough translation: “The heron wheeled in the sky of Christ, over the meadow of Miracles.”)
Gabriele d’Annunzio in Forse che sì forse che no (1910)

If you go to the home page, you can buy tickets to visit the monuments surrounding the square and there are other notices including one for a competition (it’s too late to apply but the details are interesting) to construct four stained glass windows for the Pisa cathedral.

A jellyfish chat on November 28, 2017 at Café Scientifique Vancouver get together

Café Scientifique Vancouver sent me an announcement (via email) about their upcoming event,

We are pleased to announce our next café which will happen on TUESDAY,
NOVEMBER 28TH at 7:30PM in the back room of YAGGER'S DOWNTOWN (433 W
Pender).

JELLYFISH – FRIEND, FOE, OR FOOD?

Did you know that in addition to stinging swimmers, jellyfish also cause
extensive damage to fisheries and coastal power plants? As threats such
as overfishing, pollution, and climate change alter the marine
environment, recent media reports are proclaiming that jellyfish are
taking over the oceans. Should we hail to our new jellyfish overlords or
do we need to examine the evidence behind these claims? Join Café
Scientifique on Nov. 28, 2017 to learn everything you ever wanted to
know about jellyfish, and find out if jelly burgers are coming soon to a
menu near you.

Our speaker for the evening will be DR. LUCAS BROTZ, a Postdoctoral
Research Fellow with the Sea Around Us at UBC’s Institute for the
Oceans and Fisheries. Lucas has been studying jellyfish for more than a
decade, and has been called “Canada’s foremost jellyfish
researcher” by CBC Nature of Things host Dr. David Suzuki. Lucas has
participated in numerous international scientific collaborations, and
his research has been featured in more than 100 media outlets including
Nature News, The Washington Post, and The New York Times. He recently
received the Michael A. Bigg award for highly significant student
research as part of the Coastal Ocean Awards at the Vancouver Aquarium.

We hope to see you there!

You can find out more about Lucas Brotz here and about Sea Around Us here.

For anyone who’s curious about the jellyfish ‘issue’, there’s a November 8, 2017 Norwegian University of Science and Technology press release on AlphaGalileo or on EurekAlert, which provides insight into the problems and the possibilities,

Jellyfish could be a resource in producing microplastic filters, fertilizer or fish feed. A new 6 million euro project called GoJelly, funded by the EU and coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, and including partners at the Norwegian University of Science and Technology (NTNU) and SINTEF [headquartered in Trondheim, Norway, SINTEF is the largest independent research organisation in Scandinavia; more about SINTEF in its Wikipedia entry], hopes to turn jellyfish from a nuisance into a useful product.

Global climate change and the human impact on marine ecosystems have led to dramatic decreases in the number of fish in the ocean. It has also had an unforeseen side effect: because overfishing decreases the numbers of jellyfish competitors, their blooms are on the rise.

The GoJelly project, coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, would like to transform problematic jellyfish into a resource that can be used to produce microplastic filters, fertilizer or fish feed. The EU has just approved funding of EUR 6 million over 4 years to support the project through its Horizon 2020 programme.

Rising water temperatures, ocean acidification and overfishing seem to favour jellyfish blooms. More and more often, they appear in huge numbers that have already destroyed entire fish farms on European coasts and blocked cooling systems of power stations near the coast. A number of jellyfish species are poisonous, while some tropical species are even among the most toxic animals on earth.

“In Europe alone, the imported American comb jelly has a biomass of one billion tons. While we tend to ignore the jellyfish, there must be other solutions,” says Jamileh Javidpour of GEOMAR, initiator and coordinator of the GoJelly project, which is a consortium of 15 scientific institutions from eight countries led by the GEOMAR Helmholtz Centre for Ocean Research in Kiel.

The project will first entail exploring the life cycle of a number of jellyfish species. A lack of knowledge about life cycles makes it almost impossible to predict when and why a large jellyfish bloom will occur. “This is what we want to change so that large jellyfish swarms can be caught before they reach the coasts,” says Javidpour.

At the same time, the project partners will also try to answer the question of what to do with jellyfish once they have been caught. One idea is to use the jellyfish to battle another, man-made threat.

“Studies have shown that mucus of jellyfish can bind microplastic. Therefore, we want to test whether biofilters can be produced from jellyfish. These biofilters could then be used in sewage treatment plants or in factories where microplastic is produced,” the GoJelly researchers say.

Jellyfish can also be used as fertilizers for agriculture or as aquaculture feed. “Fish in fish farms are currently fed with captured wild fish, which does not reduce the problem of overfishing, but increases it. Jellyfish as feed would be much more sustainable and would protect natural fish stocks,” says the GoJelly team.

Another option is using jellyfish as food for humans. “In some cultures, jellyfish are already on the menu. As long as the end product is no longer slimy, it could also gain greater general acceptance,” said Javidpour. Last but not least, jellyfish contain collagen, a substance very much sought after in the cosmetics industry.

Project partners from the Norwegian University of Science and Technology, led by Nicole Aberle-Malzahn, and SINTEF Ocean, led by Rachel Tiller, will analyse how abiotic (hydrography, temperature), biotic (abundance, biomass, ecology, reproduction) and biochemical parameters (stoichiometry, food quality) affect the initiation of jellyfish blooms.

Based on a comprehensive analysis of triggering mechanisms, origin of seed populations and ecological modelling, the researchers hope to be able to make more reliable predictions on jellyfish bloom formation of specific taxa in the GoJelly target areas. This knowledge will allow sustainable harvesting of jellyfish communities from various Northern and Southern European populations.

This harvest will provide a marine biomass of unknown potential, which researchers at SINTEF Ocean, among others, will explore for possible ways to use the material.

A team from SINTEF Ocean’s strategic program Clean Ocean will also work with European colleagues on developing a filter from the mucus of the jellyfish that will catch microplastics from household products (which come from fleece sweaters, the breakdown of plastic products, or cosmetics, for example) and prevent these from entering the marine ecosystem.

Finally, SINTEF Ocean will examine the socio-ecological system and games, where they will explore the potentials of an emerging international management regime for a global effort to mitigate the negative effects of microplastics in the oceans.

“Jellyfish can be used for many purposes. We see this as an opportunity to use the potential of the huge biomass drifting right in front of our front door,” Javidpour said.

You can find out more about GoJelly on their Twitter account.

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017 so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2,000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus, commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’, which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…
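For the curious, here’s a minimal sketch of why tampering is detectable, using the BB84 protocol that underlies most QKD systems. It’s a toy Python simulation of my own; I have no details of the Jinan network’s actual implementation. The point it illustrates: an eavesdropper who intercepts and re-measures photons unavoidably corrupts roughly a quarter of the sifted key, and the error check exposes the intrusion,

import random

def bb84_error_rate(n, eavesdrop=False):
    """Toy BB84: Alice sends random bits in random bases ('+' or 'x'),
    Bob measures in random bases. Measuring in the wrong basis yields
    a random bit, so an intercept-resend eavesdropper leaves errors."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice('+x') for _ in range(n)]
    bob_bases   = [random.choice('+x') for _ in range(n)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        photon_bit, photon_basis = bit, a_basis
        if eavesdrop:                          # Eve intercepts and re-sends
            eve_basis = random.choice('+x')
            if eve_basis != photon_basis:      # wrong basis randomizes the bit
                photon_bit = random.randint(0, 1)
            photon_basis = eve_basis
        if b_basis == photon_basis:            # Bob's basis matches the photon's
            bob_bits.append(photon_bit)
        else:
            bob_bits.append(random.randint(0, 1))
    # Sifting: keep only positions where Alice's and Bob's bases agree
    sifted = [(a, b) for a, b, x, y in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    return sum(a != b for a, b in sifted) / len(sifted)

print(bb84_error_rate(20000))        # ~0.00: clean channel
print(bb84_error_rate(20000, True))  # ~0.25: interception shows up

Real systems compare a random sample of the sifted key in public and abort if the error rate climbs above a threshold, which is what ‘alerts both users to any tampering’ amounts to in practice.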

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article for Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …
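The fiber losses Popkin mentions are easy to quantify. Assuming the textbook attenuation figure of about 0.2 dB per kilometre for telecom fiber (my number, not the article's), and remembering that quantum signals can't simply be amplified en route, photon survival falls off exponentially with distance:

```python
# Photon survival in optical fiber, assuming the usual textbook figure of
# ~0.2 dB loss per km at telecom wavelengths (an assumption, not taken
# from the article). Quantum signals can't be amplified, so loss compounds.
LOSS_DB_PER_KM = 0.2

def transmission(distance_km):
    return 10 ** (-LOSS_DB_PER_KM * distance_km / 10)

for d in (100, 300, 1000, 1200):
    print(f"{d:>5} km: about 1 photon in {1 / transmission(d):,.0f} arrives")
```

At a few hundred kilometres the arrival rate is merely painful; at 1,000 km it is hopeless, which is why a satellite link through near-vacuum is so attractive.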

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing, along with a video and a graphic.
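Note that the satellite scheme is entanglement-based rather than the prepare-and-measure flavour sketched earlier: the satellite distributes photon pairs, and whenever the two ground stations happen to measure in the same basis, their outcomes agree and become shared key bits that were never transmitted between them. A classical mock of that bookkeeping follows (it can't, of course, reproduce the Bell-inequality violations that certify real entanglement):

```python
# Classical mock of entanglement-based key distribution: when both ground
# stations measure a distributed pair in the same basis, their outcomes
# agree and become shared key bits. Only the bookkeeping is modelled here;
# real entanglement also violates Bell inequalities, which no classical
# simulation can reproduce.
import random

def entangled_key(n_pairs=1_000):
    key_a, key_b = [], []
    for _ in range(n_pairs):
        basis_a = random.choice("+x")   # ground station A's basis choice
        basis_b = random.choice("+x")   # ground station B's basis choice
        shared = random.randint(0, 1)   # stand-in for the pair's correlation
        if basis_a == basis_b:          # matching bases: outcomes agree
            key_a.append(shared)
            key_b.append(shared)
        # mismatched bases give uncorrelated outcomes and are discarded
    return key_a, key_b

a, b = entangled_key()
assert a == b                           # identical keys, never transmitted
print(f"sifted {len(a)} shared key bits from 1,000 pairs (~half survive)")
```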

Russell Brandom, writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite, adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor) to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost, which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation over the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science 07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73. DOI: 10.1126/science.aal4062

This paper is behind a paywall.

Canada and its Vancouver tech scene get a boost

Prime Minister Justin Trudeau has been running around attending tech events both in the Vancouver area (Canada) and in Seattle these last few days (May 17 and May 18, 2017). First, he attended the Microsoft CEO Summit, as noted in a May 11, 2017 news release from the Prime Minister’s Office (Note: I have a few comments about this performance and the Canadian tech scene at the end of this post),

The Prime Minister, Justin Trudeau, today [May 11, 2017] announced that he will participate in the Microsoft CEO Summit in Seattle, Washington, on May 17 and 18 [2017], to promote the Cascadia Innovation Corridor, encourage investment in the Canadian technology sector, and draw global talent to Canada.

This year’s summit, under the theme “The CEO Agenda: Navigating Change,” will bring together more than 150 chief executive officers. While at the Summit, Prime Minister Trudeau will showcase Budget 2017’s Innovation and Skills Plan and demonstrate how Canada is making it easier for Canadian entrepreneurs and innovators to turn their ideas into thriving businesses.

Prime Minister Trudeau will also meet with Washington Governor Jay Inslee.

Quote

“Canada’s greatest strength is its skilled, hard-working, creative, and diverse workforce. Canada is recognized as a world leader in research and development in many areas like artificial intelligence, quantum computing, and 3D programming. Our government will continue to help Canadian businesses grow and create good, well-paying middle class jobs in today’s high-tech economy.”
— Rt. Honourable Justin Trudeau, Prime Minister of Canada

Quick Facts

  • Canada-U.S. bilateral trade in goods and services reached approximately $882 billion in 2016.
  • Nearly 400,000 people and over $2 billion-worth of goods and services cross the Canada-U.S. border every day.
  • Canada-Washington bilateral trade was $19.8 billion in 2016. Some 223,300 jobs in the State of Washington depend on trade and investment with Canada. Canada is among Washington’s top export destinations.

Here’s a little more about the Microsoft meeting from a May 17, 2017 article by Alan Boyle for GeekWire.com (Note: Links have been removed),

So far, this year’s Microsoft CEO Summit has been all about Canadian Prime Minister Justin Trudeau’s talk today, but there’s been precious little information available about who else is attending – and Trudeau may be one of the big reasons why.

Microsoft co-founder Bill Gates created the annual summit back in 1997, to give global business leaders an opportunity to share their experiences and learn about new technologies that will have an impact on business in the future. The event’s attendee list is kept largely confidential, as is the substance of the discussions.

This year, Microsoft says the summit’s two themes are “trust in technology” (as in cybersecurity, international hacking, privacy and the flow of data) and “the race to space” (as in privately funded space efforts such as Amazon billionaire Jeff Bezos’ Blue Origin rocket venture).

Usually, Microsoft lists a few folks who are attending the summit on the company’s Redmond campus, just to give a sense of the event’s cachet. For example, last year’s headliners included Berkshire Hathaway CEO Warren Buffett and Exxon Mobil CEO Rex Tillerson (who is now the Trump administration’s secretary of state).

This year, however, the spotlight has fallen almost exclusively on the hunky 45-year-old Trudeau, the first sitting head of government or state to address the summit. Microsoft isn’t saying anything about the other 140-plus VIPs attending the discussions. “Out of respect for the privacy of our guests, we are not providing any additional information,” a Microsoft spokesperson told GeekWire via email.

Even Trudeau’s remarks at the summit are hush-hush, although officials say he’s talking up Canada’s tech sector.  …

Laura Kane’s May 18, 2017 article for therecord.com provides a little more information about Trudeau’s May 18, 2017 activities in Washington state,

Prime Minister Justin Trudeau continued his efforts to promote Canada’s technology sector to officials in Washington state on Thursday [May 18, 2017], meeting with Gov. Jay Inslee a day after attending the secretive Microsoft CEO Summit.

Trudeau and Inslee discussed, among other issues, the development of the Cascadia Innovation Corridor, an initiative that aims to strengthen technology industry ties between British Columbia and Washington.

The pair also spoke about trade and investment opportunities and innovation in the energy sector, said Trudeau’s office. In brief remarks before the meeting, the prime minister said Washington and Canada share a lot in common.

But protesters clad in yellow hazardous material suits that read “Keystone XL Toxic Cleanup Crew” gathered outside the hotel to criticize Trudeau’s environmental record, arguing his support of pipelines is at odds with any global warming promises he has made.

Later that afternoon, Trudeau visited Electronic Arts (a US games company with offices in the Vancouver area) for more tech talk, as Stephanie Ip notes in her May 18, 2017 article for The Vancouver Sun,

Prime Minister Justin Trudeau was in Metro Vancouver Thursday [May 18, 2017] to learn from local tech and business leaders how the federal government can boost B.C.’s tech sector.

The roundtable discussion was organized by the Vancouver Economic Commission and hosted in Burnaby at Electronic Arts’ Capture Lab, where the video game company behind the popular FIFA, Madden and NHL franchises records human movement to add more realism to its digital characters. Representatives from Amazon, Launch Academy, Sony Pictures, Darkhorse 101 Pictures and Front Fundr were also there.

While the roundtable was not open to media, Trudeau met beforehand with media.

“We’re going to talk about how the government can be a better partner or better get out of your way in some cases to allow you to continue to grow, to succeed, to create great opportunities to allow innovation to advance success in Canada and to create good jobs for Canadians and draw in people from around the world and continue to lead the way in the world,” he said.

“Everything from clean tech, to bio-medical advances, to innovation in digital economy — there’s a lot of very, very exciting things going on.”

Comments on the US tech sector and the supposed Canadian tech sector

I wonder at all the secrecy. As for the companies mentioned as being at the roundtable, you’ll notice a preponderance of US companies, with Launch Academy and Front Fundr (which is not a tech company but an equity crowdfunding company) supplying the Canadian content. As for Darkhorse 101 Pictures, I strongly suspect (after an online search) that it is part of Darkhorse Comics (a US company), which has an entertainment division.

Perhaps it didn’t seem worthwhile to mention the Canadian companies? In that case, that’s a sad reflection on how poorly we and our media support our tech sector.

In fact, it seems Trudeau’s version of the Canadian technology sector has us continuing in our role as a branch plant, forever in service of the US economy, or at least of the US tech sector, which may be experiencing some concerns with the Trump administration and its increasingly isolationist perspective on trade and immigration. That is a perspective the tech sector, especially its entertainment component, can ill afford.

As for the Cascadia Innovation Corridor mentioned in the Prime Minister’s news release and in Kane’s article, I have more about that in a Feb. 28, 2017 posting about the Cascadia Data Analytics Cooperative.

I noticed he mentioned clean tech as an area of excitement. Well, we just lost a significant player, not to the US this time but to the EU (European Union), or more specifically, Germany. (There’ll be more about that in an upcoming post.)

I’m glad to see that Trudeau remains interested in Canadian science and technology but perhaps he could concentrate on new ways of promoting sectoral health rather than relying on the same old thing.

European Commission has issued evaluation of nanomaterial risk frameworks and tools

Despite complaints that there should have been more, there has been some research into the risks posed by nanomaterials. While additional research would be welcome, it’s perhaps more imperative that standardized testing and risk frameworks be developed so that, for example, carbon nanotube safety research in Japan can be compared with similar research in the Netherlands, the US, and elsewhere. This March 15, 2017 news item on Nanowerk features some research analyzing risk assessment frameworks and tools in Europe,

A recent study has evaluated frameworks and tools used in Europe to assess the potential health and environmental risks of manufactured nanomaterials. The study identifies a trend towards tools that provide protocols for conducting experiments, which enable more flexible and efficient hazard testing. Among its conclusions, however, it notes that no existing frameworks meet all the study’s evaluation criteria and calls for a new, more comprehensive framework.

A March 9, 2017 news alert in the European Commission’s Science for Environment Policy series, which originated the news item, provides more detail (Note: Links have been removed),

Nanotechnology is identified as a key emerging technology in the EU’s growth strategy, Europe 2020. It has great potential to contribute to innovation and economic growth and many of its applications have already received large investments. However, there are some uncertainties surrounding the environmental, health and safety risks of manufactured nanomaterials. For effective regulation, careful scientific analysis of their potential impacts is needed, as conducted through risk assessment exercises.

This study, conducted under the EU-funded MARINA project, reviewed existing frameworks and tools for risk assessing manufactured nanomaterials. The researchers define a framework as a ‘conceptual paradigm’ of how a risk assessment should be conducted and understood, and give the REACH chemical safety assessment as an example. Tools are defined as implements used to carry out a specific task or function, such as experimental protocols, computer models or databases.

In all, 12 frameworks and 48 tools were evaluated. These were identified from other studies and projects. The frameworks were assessed against eight criteria which represent different strengths, such as whether they consider properties specific to nanomaterials, whether they consider the entire life cycle of a nanomaterial and whether they include careful planning and prioritise objectives before the risk assessment is conducted.

The tools were assessed against seven criteria, such as ease of use, whether they provide quantitative information and if they clearly communicate uncertainty in their results. The researchers defined the criteria for both frameworks and tools by reviewing other studies and by interviewing staff at organisations who develop tools.

The evaluation was thus able to produce a list of strengths and areas for improvement for the frameworks and tools, based on whether they meet each of the criteria. Among its many findings, the evaluation showed that most of the frameworks stress that ‘problem formulation’, which sets the goals and scope of an assessment during the planning process, is essential to avoid unnecessary testing. In addition, most frameworks consider routes of exposure in the initial stages of assessment, which is beneficial as it can exclude irrelevant exposure routes and avoid unnecessary tests.
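The tabulation the researchers describe, scoring each framework against a fixed criteria list and recording its gaps, is simple to mechanise. Here's a sketch, with hypothetical framework names and criteria standing in for the MARINA study's actual data:

```python
# Scoring frameworks against a fixed criteria list, as the study does.
# The framework names and criteria below are hypothetical placeholders,
# not the MARINA project's actual data.
CRITERIA = ["nano-specific properties", "full life cycle",
            "planning/problem formulation", "exposure routes"]

frameworks = {
    "Framework A": {"nano-specific properties", "planning/problem formulation",
                    "exposure routes"},
    "Framework B": {"full life cycle", "planning/problem formulation"},
}

for name, met in frameworks.items():
    gaps = [c for c in CRITERIA if c not in met]
    verdict = "meets all criteria" if not gaps else "gaps: " + ", ".join(gaps)
    print(f"{name}: {verdict}")
# The study's headline finding, restated in these terms: every framework's
# gap list was non-empty, hence the call for a new, comprehensive framework.
```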

However, none of the frameworks met all eight of the criteria. The study therefore recommends that a new, comprehensive framework is developed that meets all criteria. Such a framework is needed to inform regulation, the researchers say, and should integrate human health and environmental factors, and cover all stages of the life cycle of a product containing nanomaterials.

The evaluation of the tools suggested that many of them are designed to screen risks, and not necessarily to support regulatory risk assessment. However, their strengths include a growing trend in quantitative models, which can assess uncertainty; for example, one tool analysed can identify uncertainties in its results that are due to gaps in knowledge about a material’s origin, characteristics and use.

The researchers also identified a growing trend in tools that provide protocols for experiments, such as identifying materials and test hazards, which are reproducible across laboratories. These tools could lead to a shift from expensive case-by-case testing for risk assessment of manufactured nanomaterials towards a more efficient process based on groupings of nanomaterials; and ‘read-across’ methods, where the properties of one material can be inferred without testing, based on the known properties of a similar material. The researchers do note, however, that although read-across methods are well established for chemical substances, they are still being developed for nanomaterials. To improve nanomaterial read-across methods, they suggest that more data are needed on the links between nanomaterials’ specific properties and their biological effects.
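In computational terms, read-across is similarity-based inference: predict a property of an untested material from the nearest tested material in some descriptor space. A toy sketch, with invented materials, descriptors, and hazard scores:

```python
# Toy read-across: infer a hazard score for an untested nanomaterial from
# the most similar tested one in a simple descriptor space. Materials,
# descriptors and scores are invented for illustration.
import math

tested = {
    # name: ((mean particle size in nm, surface area in m2/g), hazard score)
    "nanomaterial A": ((20.0, 150.0), 0.8),
    "nanomaterial B": ((90.0, 40.0), 0.2),
}

def read_across(descriptors):
    nearest = min(tested, key=lambda m: math.dist(descriptors, tested[m][0]))
    return nearest, tested[nearest][1]

source, score = read_across((25.0, 140.0))   # the untested material
print(f"read-across from {source}: predicted hazard score {score}")
```

The hard part in practice, as the researchers note, is knowing which descriptors actually predict biological effects for nanomaterials; the nearest-neighbour mechanics above are the easy bit.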

That’s all, folks.