Tag Archives: University of Waterloo

Nanotechnology, math, cancer, and a boxing metaphor

Violent metaphors in medicine are not unusual, although the reference is more often to war than to boxing, as it is in this news from the University of Waterloo (Canada). Still, it seems counter-intuitive to link violence so closely with healing, but the practice is well entrenched and attempts to counteract it appear to be a ‘losing battle’ (pun intended).

Credit: Gabriel Picolo "2-in-1 punch." Courtesy: University of Waterloo

Credit: Gabriel Picolo “2-in-1 punch.” Courtesy: University of Waterloo

A June 23, 2016 news item on ScienceDaily describes a new approach to cancer therapy,

Math, biology and nanotechnology are becoming strange, yet effective bed-fellows in the fight against cancer treatment resistance. Researchers at the University of Waterloo and Harvard Medical School have engineered a revolutionary new approach to cancer treatment that pits a lethal combination of drugs together into a single nanoparticle.

Their work, published online on June 3, 2016 in the leading nanotechnology journal ACS Nano, finds a new method of shrinking tumors and preventing resistance in aggressive cancers by activating two drugs within the same cell at the same time.

A June 23, 2016 University of Waterloo news release (also on EurekAlert), which originated the news item, provides more information,

Every year thousands of patients die from recurrent cancers that have become resistant to therapy, resulting in one of the greatest unsolved challenges in cancer treatment. By tracking the fate of individual cancer cells under pressure of chemotherapy, biologists and bioengineers at Harvard Medical School studied a network of signals and molecular pathways that allow the cells to generate resistance over the course of treatment.

Using this information, a team of applied mathematicians led by Professor Mohammad Kohandel at the University of Waterloo, developed a mathematical model that incorporated algorithms that define the phenotypic cell state transitions of cancer cells in real-time while under attack by an anticancer agent. The mathematical simulations enabled them to define the exact molecular behavior and pathway of signals, which allow cancer cells to survive treatment over time.

They discovered that the PI3K/AKT kinase, which is often over-activated in cancers, enables cells to undergo a resistance program when pressured with the cytotoxic chemotherapy known as Taxanes, which are conventionally used to treat aggressive breast cancers. This revolutionary window into the life of a cell reveals that vulnerabilities to small molecule PI3K/AKT kinase inhibitors exist, and can be targeted if they are applied in the right sequence with combinations of other drugs.

Previously theories of drug resistance have relied on the hypothesis that only certain, “privileged” cells can overcome therapy. The mathematical simulations demonstrate that, under the right conditions and signaling events, any cell can develop a resistance program.
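
The release stays qualitative about the model itself; purely as an illustration (not the authors' actual equations, and with made-up transition and death rates), a two-state stochastic sketch in Python shows how any initially drug-sensitive cell can wander into a resistant phenotype under sustained therapy:

import random

# Hypothetical per-timestep probabilities (illustrative only, not from the paper):
# a drug-sensitive cell may switch to a drug-tolerant phenotype under therapy,
# a tolerant cell may switch back, and each state has its own chance of dying.
P_S_TO_T = 0.05                      # sensitive -> tolerant transition
P_T_TO_S = 0.01                      # tolerant -> sensitive transition
P_DEATH = {"S": 0.30, "T": 0.02}     # tolerant cells largely survive the drug

def simulate_cell(steps=50):
    """Follow one cell under continuous drug pressure; return its fate."""
    state = "S"                      # every cell starts out drug-sensitive
    for _ in range(steps):
        if random.random() < P_DEATH[state]:
            return "died"
        if state == "S" and random.random() < P_S_TO_T:
            state = "T"
        elif state == "T" and random.random() < P_T_TO_S:
            state = "S"
    return "resistant" if state == "T" else "sensitive"

fates = [simulate_cell() for _ in range(10000)]
print("cells surviving as resistant:", fates.count("resistant"))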

“Only recently have we begun to appreciate how important mathematics and physics are to understanding the biology and evolution of cancer,” said Professor Kohandel. “In fact, there is now increasing synergy between these disciplines, and we are beginning to appreciate how critical this information can be to create the right recipes to treat cancer.”

Although previous studies explored the use of drug combinations to treat cancer, the one-two punch approach is not always successful. In the new study, led by Professor Aaron Goldman, a faculty member in the division of Engineering in Medicine at Brigham and Women’s Hospital, the scientists realized a major shortcoming of the combination therapy approach is that both drugs need to be active in the same cell, something that current delivery methods can’t guarantee.

“We were inspired by the mathematical understanding that a cancer cell rewires the mechanisms of resistance in a very specific order and time-sensitive manner,” said Professor Goldman. “By developing a 2-in-1 nanomedicine, we could ensure the cell that was acquiring this new resistance saw the lethal drug combination, shutting down the survival program and eliminating the evidence of resistance. This approach could redefine how clinicians deliver combinations of drugs in the clinic.”

The approach the bioengineers took was to build a single nanoparticle, inspired by computer models, that exploits a technique known as supramolecular chemistry. This nanotechnology enables scientists to build cholesterol-tethered drugs together from “Tetris-like” building blocks that self-assemble, incorporating multiple drugs into stable, individual nano-vehicles that target tumors through the leaky vasculature. This 2-in-1 strategy ensures that resistance to therapy never has a chance to develop, bringing together the right recipe to destroy surviving cancer cells.

Using mouse models of aggressive breast cancer, the scientists confirmed the predictions from the mathematical model that both drugs must be deterministically delivered to the same cell.

Here’s a link to and a citation for the paper,

Rationally Designed 2-in-1 Nanoparticles Can Overcome Adaptive Resistance in Cancer by Aaron Goldman, Ashish Kulkarni, Mohammad Kohandel, Prithvi Pandey, Poornima Rao, Siva Kumar Natarajan, Venkata Sabbisetti, and Shiladitya Sengupta. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b00320 Publication Date (Web): June 03, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

The researchers have made this illustration of their work available,

Courtesy: American Chemical Society

A treasure trove of molecule and battery data released to the public

Scientists working on The Materials Project have taken the notion of open science to their hearts and opened up access to their data according to a June 9, 2016 news item on Nanowerk,

The Materials Project, a Google-like database of material properties aimed at accelerating innovation, has released an enormous trove of data to the public, giving scientists working on fuel cells, photovoltaics, thermoelectrics, and a host of other advanced materials a powerful tool to explore new research avenues. But it has become a particularly important resource for researchers working on batteries. Co-founded and directed by Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson, the Materials Project uses supercomputers to calculate the properties of materials based on first-principles quantum-mechanical frameworks. It was launched in 2011 by the U.S. Department of Energy’s (DOE) Office of Science.

A June 8, 2016 Berkeley Lab news release, which originated the news item, provides more explanation about The Materials Project,

The idea behind the Materials Project is that it can save researchers time by predicting material properties without needing to synthesize the materials first in the lab. It can also suggest new candidate materials that experimentalists had not previously dreamed up. With a user-friendly web interface, users can look up the calculated properties, such as voltage, capacity, band gap, and density, for tens of thousands of materials.

Two sets of data were released last month: nearly 1,500 compounds investigated for multivalent intercalation electrodes and more than 21,000 organic molecules relevant for liquid electrolytes as well as a host of other research applications. Batteries with multivalent cathodes (which have multiple electrons per mobile ion available for charge transfer) are promising candidates for reducing cost and achieving higher energy density than that available with current lithium-ion technology.

The sheer volume and scope of the data is unprecedented, said Persson, who is also a professor in UC Berkeley’s Department of Materials Science and Engineering. “As far as the multivalent cathodes, there’s nothing similar in the world that exists,” she said. “To give you an idea, experimentalists are usually able to focus on one of these materials at a time. Using calculations, we’ve added data on 1,500 different compositions.”

While other research groups have made their data publicly available, what makes the Materials Project so useful are the online tools to search all that data. The recent release includes two new web apps—the Molecules Explorer and the Redox Flow Battery Dashboard—plus an add-on to the Battery Explorer web app enabling researchers to work with other ions in addition to lithium.

“Not only do we give the data freely, we also give algorithms and software to interpret or search over the data,” Persson said.
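
The release doesn't spell out the programmatic route, but the project's data can also be pulled with the pymatgen library's MPRester client (a free API key from materialsproject.org is required). Here is a rough sketch of the kind of query that sits behind the explorer apps; the method names follow the 2016-era pymatgen interface and may differ in later versions, and the key is a placeholder:

# Minimal sketch of querying the Materials Project database programmatically.
# Assumes pymatgen's MPRester client; exact method names and criteria syntax
# vary between pymatgen versions, and "YOUR_API_KEY" is a placeholder.
from pymatgen import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # Ask for calculated band gaps and densities of Mg-S compounds, roughly the
    # kind of screening behind the multivalent-cathode data set.
    entries = mpr.query(
        criteria={"elements": {"$all": ["Mg", "S"]}},
        properties=["material_id", "pretty_formula", "band_gap", "density"],
    )

for entry in entries[:5]:
    print(entry["material_id"], entry["pretty_formula"],
          entry["band_gap"], entry["density"])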

The Redox Flow Battery app gives scientific parameters as well as techno-economic ones, so battery designers can quickly rule out a molecule that might work well but be prohibitively expensive. The Molecules Explorer app will be useful to researchers far beyond the battery community.

“For multivalent batteries it’s so hard to get good experimental data,” Persson said. “The calculations provide rich and robust benchmarks to assess whether the experiments are actually measuring a valid intercalation process or a side reaction, which is particularly difficult for multivalent energy technology because there are so many problems with testing these batteries.”

Here’s a screen capture from the Battery Explorer app,

The Materials Project’s Battery Explorer app now allows researchers to work with other ions in addition to lithium. Courtesy: The Materials Project

The news release goes on to describe a new discovery made possible by The Materials Project (Note: A link has been removed),

Together with Persson, Berkeley Lab scientist Gerbrand Ceder, postdoctoral associate Miao Liu, and MIT graduate student Ziqin Rong, the Materials Project team investigated some of the more promising materials in detail for high multivalent ion mobility, which is the most difficult property to achieve in these cathodes. This led the team to materials known as thiospinels. One of these thiospinels has double the capacity of the currently known multivalent cathodes and was recently synthesized and tested in the lab by JCESR researcher Linda Nazar of the University of Waterloo, Canada.

“These materials may not work well the first time you make them,” Persson said. “You have to be persistent; for example you may have to make the material very phase pure or smaller than a particular particle size and you have to test them under very controlled conditions. There are people who have actually tried this material before and discarded it because they thought it didn’t work particularly well. The power of the computations and the design metrics we have uncovered with their help is that it gives us the confidence to keep trying.”

The researchers were able to double the energy capacity of what had previously been achieved for this kind of multivalent battery. The study has been published in the journal Energy & Environmental Science in an article titled, “A High Capacity Thiospinel Cathode for Mg Batteries.”

“The new multivalent battery works really well,” Persson said. “It’s a significant advance and an excellent proof-of-concept for computational predictions as a valuable new tool for battery research.”

Here’s a link to and a citation for the paper,

A high capacity thiospinel cathode for Mg batteries by Xiaoqi Sun, Patrick Bonnick, Victor Duffort, Miao Liu, Ziqin Rong, Kristin A. Persson, Gerbrand Ceder and  Linda F. Nazar. Energy Environ. Sci., 2016, Advance Article DOI: 10.1039/C6EE00724D First published online 24 May 2016

This paper seems to be behind a paywall.

Getting back to the news release, there’s more about The Materials Project in relationship to its membership,

The Materials Project has attracted more than 20,000 users since launching five years ago. Every day about 20 new users register and 300 to 400 people log in to do research.

One of those users is Dane Morgan, a professor of engineering at the University of Wisconsin-Madison who develops new materials for a wide range of applications, including highly active catalysts for fuel cells, stable low-work function electron emitter cathodes for high-powered microwave devices, and efficient, inexpensive, and environmentally safe solar materials.

“The Materials Project has enabled some of the most exciting research in my group,” said Morgan, who also serves on the Materials Project’s advisory board. “By providing easy access to a huge database, as well as tools to process that data for thermodynamic predictions, the Materials Project has enabled my group to rapidly take on materials design projects that would have been prohibitive just a few years ago.”

More materials are being calculated and added to the database every day. In two years, Persson expects another trove of data to be released to the public.

“This is the way to reach a significant part of the research community, to reach students while they’re still learning material science,” she said. “It’s a teaching tool. It’s a science tool. It’s unprecedented.”

Supercomputing clusters at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility hosted at Berkeley Lab, provide the infrastructure for the Materials Project.

Funding for the Materials Project is provided by the Office of Science (US Department of Energy), including support through JCESR [Joint Center for Energy Storage Research].

Happy researching!

Prime Minister Trudeau, the quantum physicist

Prime Minister Justin Trudeau’s apparently extemporaneous response to a joking (non)question about quantum computing by a journalist during an April 15, 2016 press conference at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada has created a buzz online, made international news, and caused Canadians to sit taller.

For anyone who missed the moment, here’s a video clip from the Canadian Broadcasting Corporation (CBC),

Aaron Hutchins in an April 15, 2016 article for Maclean’s magazine digs deeper to find out more about Trudeau and quantum physics (Note: A link has been removed),

Raymond Laflamme knows the drill when politicians visit the Perimeter Institute. A photo op here, a few handshakes there and a tour with “really basic, basic, basic facts” about the field of quantum mechanics.

But when the self-described “geek” Justin Trudeau showed up for a funding announcement on Friday [April 15, 2016], the co-founder and director of the Institute for Quantum Computing at the University of Waterloo wasn’t met with simple nods of the Prime Minister pretending to understand. Trudeau immediately started talking about things being waves and particles at the same time, like cats being dead and alive at the same time. It wasn’t just nonsense—Trudeau was referencing the famous thought experiment of the late legendary physicist Erwin Schrödinger.

“I don’t know where he learned all that stuff, but we were all surprised,” Laflamme says. Soon afterwards, as Trudeau met with one student talking about superconductivity, the Prime Minister asked her, “Why don’t we have high-temperature superconducting systems?” something Laflamme describes as the institute’s “Holy Grail” quest.

“I was flabbergasted,” Laflamme says. “I don’t know how he does in other subjects, but in quantum physics, he knows the basic pieces and the important questions.”

Strangely, Laflamme was not nearly as excited (tongue in cheek) when I demonstrated my understanding of quantum physics during our interview (see my May 11, 2015 posting; scroll down about 40% of the way to the Raymond Laflamme subhead).

As Jon Butterworth comments in his April 16, 2016 posting on the Guardian science blog, the response says something about our expectations regarding politicians,

This seems to have enhanced Trudeau’s reputation no end, and quite right too. But it is worth thinking a bit about why.

The explanation he gives is clear, brief, and understandable to a non-specialist. It is the kind of thing any sufficiently engaged politician could pick up from a decent briefing, given expert help. …

Butterworth also goes on to mention journalists’ expectations,

The reporter asked the question in a joking fashion, not unkindly as far as I can tell, but not expecting an answer either. If this had been an announcement about almost any other government investment, wouldn’t the reporter have expected a brief explanation of the basic ideas behind it? …

As for the announcement being made by Trudeau, there is this April 15, 2016 Perimeter Institute press release (Note: Links have been removed),

Prime Minister Justin Trudeau says the work being done at Perimeter and in Canada’s “Quantum Valley” [emphasis mine] is vital to the future of the country and the world.

Prime Minister Justin Trudeau became both teacher and student when he visited Perimeter Institute today to officially announce the federal government’s commitment to support fundamental scientific research at Perimeter.

Joined by Minister of Science Kirsty Duncan and Small Business and Tourism Minister Bardish Chagger, the self-described “geek prime minister” listened intensely as he received brief overviews of Perimeter research in areas spanning from quantum science to condensed matter physics and cosmology.

“You don’t have to be a geek like me to appreciate how important this work is,” he then told a packed audience of scientists, students, and community leaders in Perimeter’s atrium.

The Prime Minister was also welcomed by 200 teenagers attending the Institute’s annual Inspiring Future Women in Science conference, and via video greetings from cosmologist Stephen Hawking [he was Laflamme’s PhD supervisor], who is a Perimeter Distinguished Visiting Research Chair. The Prime Minister said he was “incredibly overwhelmed” by Hawking’s message.

“Canada is a wonderful, huge country, full of people with big hearts and forward-looking minds,” Hawking said in his message. “It’s an ideal place for an institute dedicated to the frontiers of physics. In supporting Perimeter, Canada sets an example for the world.”

The visit reiterated the Government of Canada’s pledge of $50 million over five years announced in last month’s [March 2016] budget [emphasis mine] to support Perimeter research, training, and outreach.

It was the Prime Minister’s second trip to the Region of Waterloo this year. In January [2016], he toured the region’s tech sector and universities, and praised the area’s innovation ecosystem.

This time, the focus was on the first link of the innovation chain: fundamental science that could unlock important discoveries, advance human understanding, and underpin the groundbreaking technologies of tomorrow.

As for the “quantum valley” in Ontario, I think there might be some competition here in British Columbia with D-Wave Systems (first commercially available quantum computing, of a sort; my Dec. 16, 2015 post is the most recent one featuring the company) and the University of British Columbia’s Stewart Blusson Quantum Matter Institute.

Getting back to Trudeau, it’s exciting to have someone who seems so interested in at least some aspects of science that he can talk about it with a degree of understanding. I knew he had an interest in literature but there is also this (from his Wikipedia entry; Note: Links have been removed),

Trudeau has a bachelor of arts degree in literature from McGill University and a bachelor of education degree from the University of British Columbia…. After graduation, he stayed in Vancouver and he found substitute work at several local schools and permanent work as a French and math teacher at the private West Point Grey Academy … . From 2002 to 2004, he studied engineering at the École Polytechnique de Montréal, a part of the Université de Montréal.[67] He also started a master’s degree in environmental geography at McGill University, before suspending his program to seek public office.[68] [emphases mine]

Trudeau is not the only political leader to have a strong interest in science. In our neighbour to the south, there’s President Barack Obama who has done much to promote science since he was elected in 2008. David Bruggeman in an April 15, 2016  post (Obama hosts DNews segments for Science Channel week of April 11-15, 2016) and an April 17, 2016 post (Obama hosts White House Science Fair) describes two of Obama’s most recent efforts.

ETA April 19, 2016: I’ve found confirmation that this Q&A was somewhat staged as I hinted in the opening with “Prime Minister Justin Trudeau’s apparently extemporaneous response … .” Will Oremus’s April 19, 2016 article for Slate.com breaks the whole news cycle down and points out (Note: A link has been removed),

Over the weekend, even as latecomers continued to dine on the story’s rapidly decaying scraps, a somewhat different picture began to emerge. A Canadian blogger pointed out that Trudeau himself had suggested to reporters at the event that they lob him a question about quantum computing so that he could knock it out of the park with the newfound knowledge he had gleaned on his tour.

The Canadian blogger who tracked this down is J. J. McCullough (Jim McCullough) and you can read his April 2016 posting on the affair here. McCullough has a rather harsh view of the media response to Trudeau’s lecture. Oremus is a bit more measured,

… Monday brought the countertake parade—smaller and less pompous, if no less righteous—led by Gawker with the headline, “Justin Trudeau’s Quantum Computing Explanation Was Likely Staged for Publicity.”

But few of us in the media today are immune to the forces that incentivize timeliness and catchiness over subtlety, and even Gawker’s valuable corrective ended up meriting a corrective of its own. Author J.K. Trotter soon updated his post with comments from Trudeau’s press secretary, who maintained (rather convincingly, I think) that nothing in the episode was “staged”—at least, not in the sinister way that the word implies. Rather, Trudeau had joked that he was looking forward to someone asking him about quantum computing; a reporter at the press conference jokingly complied, without really expecting a response (he quickly moved on to his real question before Trudeau could answer); Trudeau responded anyway, because he really did want to show off his knowledge.

Trotter deserves credit, regardless, for following up and getting a fuller picture of what transpired. He did what those who initially jumped on the story did not, which was to contact the principals for context and comment.

But my point here is not to criticize any particular writer or publication. The too-tidy Trudeau narrative was not the deliberate work of any bad actor or fabricator. Rather, it was the inevitable product of today’s inexorable social-media machine, in which shareable content fuels the traffic-referral engines that pay online media’s bills.

I suggest reading both McCullough’s and Oremus’s posts in their entirety should you find debates about the role of media compelling.

Changing the colour of single photons in a diamond quantum memory

An artist’s impression of quantum frequency conversion in a diamond quantum memory. (Credit: Dr. Khabat Heshami, National Research Council Canada)

An April 5, 2016 University of Waterloo news release (also on EurekAlert) describes the research,

Researchers from the Institute for Quantum Computing at the University of Waterloo and the National Research Council of Canada (NRC) have, for the first time, converted the colour and bandwidth of ultrafast single photons using a room-temperature quantum memory in diamond.

Shifting the colour of a photon, or changing its frequency, is necessary to optimally link components in a quantum network. For example, in optical quantum communication, the best transmission through an optical fibre is near infrared, but many of the sensors that measure them work much better for visible light, which is a higher frequency. Being able to shift the colour of the photon between the fibre and the sensor enables higher performance operation, including bigger data rates.

The research, published in Nature Communications, demonstrated small frequency shifts that are useful for a communication protocol known as wavelength division multiplexing. This is used today when a sender needs to transmit large amounts of information through a transmission so the signal is broken into smaller packets of slightly different frequencies and sent through together. The information is then organized at the other end based on those frequencies.
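
Wavelength division multiplexing itself is an ordinary classical networking idea; here is a toy Python sketch (no quantum optics, and the channel frequencies are just illustrative grid values) of splitting a message across frequency channels and reassembling it at the receiver:

from collections import defaultdict
import random

# Toy picture of wavelength-division multiplexing: each packet rides its own
# carrier frequency, all frequencies share the fibre at once, and the receiver
# sorts the packets back out by channel and sequence number.
message = "SINGLE PHOTONS IN DIAMOND"
channels_thz = [193.1, 193.2, 193.3, 193.4]   # illustrative channel grid

# Transmitter: round-robin the characters onto the frequency channels.
packets = [(channels_thz[i % len(channels_thz)], i, ch)
           for i, ch in enumerate(message)]
random.shuffle(packets)              # the shared fibre guarantees no ordering

# Receiver: demultiplex into one stream per frequency ...
streams = defaultdict(list)
for freq, seq, ch in packets:
    streams[freq].append((seq, ch))

# ... then merge the streams back into the original message by sequence number.
recovered = sorted(item for stream in streams.values() for item in stream)
print("".join(ch for _, ch in recovered))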

In the experiments conducted at NRC, the researchers demonstrated the conversion of both the frequency and bandwidth of single photons using a room-temperature diamond quantum memory.

“Originally there was this thought that you just stop the photon, store it for a little while and get it back out. The fact that we can manipulate it at the same time is exciting,” said Kent Fisher a PhD student at the Institute for Quantum Computing and with the Department of Physics and Astronomy at Waterloo. “These findings could open the door for other uses of quantum memory as well.”

The diamond quantum memory works by converting the photon into a particular vibration of the carbon atoms in the diamond, called a phonon. This conversion works for many different colours of light allowing for the manipulation of a broad spectrum of light. The energy structure of diamond allows for this to occur at room temperature with very low noise. Researchers used strong laser pulses to store and retrieve the photon. By controlling the colours of these laser pulses, researchers controlled the colour of the retrieved photon.
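
The release gives no numbers, but the energy bookkeeping in a Raman-type memory (the usual description of this diamond scheme) is straightforward: storage maps the signal photon onto a phonon whose frequency is the signal-write difference, and retrieval adds that phonon back onto the read pulse, so the output colour is shifted from the input by exactly the read-write frequency difference. A small Python sketch with illustrative values (diamond's optical phonon sits near 40 THz):

# Sketch of the energy bookkeeping for frequency conversion in a Raman-type
# memory; all numbers are illustrative, not taken from the paper.
PHONON_THZ = 39.9                    # diamond optical phonon (~1332 cm^-1)

def retrieved_frequency(signal_thz, write_thz, read_thz):
    """Output photon frequency when storage and retrieval each conserve energy."""
    phonon = signal_thz - write_thz  # storage: signal photon -> stored phonon
    assert abs(phonon - PHONON_THZ) < 1.0, "write pulse must be two-photon resonant"
    return read_thz + phonon         # retrieval: read pulse + phonon -> output

signal = 415.0                       # a ~723 nm photon, expressed in THz
write = signal - PHONON_THZ          # write pulse two-photon resonant with the phonon
read = write + 0.2                   # detune the read pulse by 0.2 THz
print(retrieved_frequency(signal, write, read) - signal)   # ~0.2 THz colour shift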

“The fragility of quantum systems means that you are always working against the clock,” remarked Duncan England, researcher at NRC. “The interesting step that we’ve shown here is that by using extremely short pulses of light, we are able to beat the clock and maintain quantum performance.”

The integrated platform for photon storage and spectral conversion could be used for frequency multiplexing in quantum communication, as well as to build up a very large entangled state – something called a cluster state. Researchers are interested in exploiting cluster states as the resource for quantum computing driven entirely by measurements.

Here’s a link to and a citation for the paper,

Frequency and bandwidth conversion of single photons in a room-temperature diamond quantum memory by Kent A. G. Fisher, Duncan G. England, Jean-Philippe W. MacLean, Philip J. Bustard, Kevin J. Resch, & Benjamin J. Sussman. Nature Communications 7, Article number: 11200  doi:10.1038/ncomms11200 Published 05 April 2016

This paper is open access.

Handling massive digital datasets the quantum way

A Jan. 25, 2016 news item on phys.org describes a new approach to analyzing and managing huge datasets,

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data—far more information than people can actually process, manage, or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at [Massachusetts Institute of Technology] MIT, the University of Waterloo, and the University of Southern California [USC].

A Jan. 25, 2016 MIT news release (*also on EurekAlert*), which originated the news item, describes the theory in more detail,

… Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.
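
The 2^300 figure is just combinatorics: every subset of the data points is a candidate simplex, and n points have 2^n subsets. A tiny classical Python sketch (a Vietoris-Rips-style rule on a handful of random points, with an arbitrary distance threshold) makes the blow-up concrete:

from itertools import combinations
import math, random

# Every subset of the data points is a candidate simplex; a subset qualifies
# here if all its pairwise distances fall under a threshold (Rips-style rule).
random.seed(0)
points = [(random.random(), random.random()) for _ in range(12)]
threshold = 0.5                      # arbitrary, purely illustrative

def close(a, b):
    return math.dist(a, b) <= threshold

simplices = [s for r in range(1, len(points) + 1)
             for s in combinations(range(len(points)), r)
             if all(close(points[i], points[j]) for i, j in combinations(s, 2))]

print("points:", len(points))
print("subsets examined:", 2 ** len(points) - 1)   # 4,095 here; 2**300 for 300 points
print("simplices found:", len(simplices))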

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”

Here’s a link to and a citation for the paper,

Quantum algorithms for topological and geometric analysis of data by Seth Lloyd, Silvano Garnerone, & Paolo Zanardi. Nature Communications 7, Article number: 10138 doi:10.1038/ncomms10138 Published 25 January 2016

This paper is open access.

ETA Jan. 25, 2016 1245 hours PST,

Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

*’also on EurekAlert’ text and link added Jan. 26, 2016.

New photocatalytic approach to cleaning wastewater from oil sands

With oil sands in the title, this story had to mention the Canadian province of Alberta, which has been widely castigated and applauded for its oil extraction efforts in its massive oil sands fields. A Nov. 24, 2015 news item on Nanotechnology Now describes a new technology for cleaning the wastewater from oil sands extraction processes,

Researchers have developed a process to remove contaminants from oil sands wastewater using only sunlight and nanoparticles that is more effective and inexpensive than conventional treatment methods.

Frank Gu, a professor in the Faculty of Engineering at the University of Waterloo [in the province of Ontario] and Canada Research Chair in Nanotechnology Engineering, is the senior researcher on the team that was the first to find that photocatalysis — a chemical reaction that involves the absorption of light by nanoparticles — can completely eliminate naphthenic acids in oil sands wastewater, and within hours. Naphthenic acids pose a threat to ecology and human health. Water in tailing ponds left to biodegrade naturally in the environment still contains these contaminants decades later.

A Nov. 23, 2015 University of Waterloo news release, which originated the news item, expands on the theme but doesn’t provide much in the way of technical detail,

“With about a billion tonnes of water stored in ponds in Alberta, removing naphthenic acids is one of the largest environmental challenges in Canada,” said Tim Leshuk, a PhD candidate in chemical engineering at Waterloo. He is the lead author of this paper and a recipient of the prestigious Vanier Canada Graduate Scholarship. “Conventional treatments people have tried either haven’t worked, or if they have worked, they’ve been far too impractical or expensive to solve the size of the problem.  Waterloo’s technology is the first step of what looks like a very practical and green treatment method.”

Unlike treating polluted water with chlorine or membrane filtering, the Waterloo technology is energy-efficient and relatively inexpensive. Nanoparticles become extremely reactive when exposed to sunlight and break down the persistent pollutants into their individual atoms, completely removing them from the water. This treatment depends on only sunlight for energy, and the nanoparticles can be recovered and reused indefinitely.

Next steps for the Waterloo research include ensuring that the treated water meets all of the objectives that Canadian environmental legislation and regulations require to ensure it can be safely discharged from sources larger than the samples, such as tailing ponds.

Here’s a link to and a citation for the paper,

Solar photocatalytic degradation of naphthenic acids in oil sands process-affected water by Tim Leshuk, Timothy Wong, Stuart Linley, Kerry M. Peru, John V. Headley, Frank Gu. Chemosphere Volume 144, February 2016, Pages 1854–1861 doi:10.1016/j.chemosphere.2015.10.073

This paper is behind a paywall.

Nanomaterials and UV (ultraviolet) light for environmental cleanups

I think this is the first time I’ve seen anything about a technology that removes toxic materials from both water and soil; it’s usually one or the other. A July 22, 2015 news item on Nanowerk makes the announcement (Note: A link has been removed),

Many human-made pollutants in the environment resist degradation through natural processes, and disrupt hormonal and other systems in mammals and other animals. Removing these toxic materials — which include pesticides and endocrine disruptors such as bisphenol A (BPA) — with existing methods is often expensive and time-consuming.

In a new paper published this week in Nature Communications (“Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil”), researchers from MIT [Massachusetts Institute of Technology] and the Federal University of Goiás in Brazil demonstrate a novel method for using nanoparticles and ultraviolet (UV) light to quickly isolate and extract a variety of contaminants from soil and water.

A July 21, 2015 MIT news release by Jonathan Mingle, which originated the news item, describes the inspiration and the research in more detail,

Ferdinand Brandl and Nicolas Bertrand, the two lead authors, are former postdocs in the laboratory of Robert Langer, the David H. Koch Institute Professor at MIT’s Koch Institute for Integrative Cancer Research. (Eliana Martins Lima, of the Federal University of Goiás, is the other co-author.) Both Brandl and Bertrand are trained as pharmacists, and describe their discovery as a happy accident: They initially sought to develop nanoparticles that could be used to deliver drugs to cancer cells.

Brandl had previously synthesized polymers that could be cleaved apart by exposure to UV light. But he and Bertrand came to question their suitability for drug delivery, since UV light can be damaging to tissue and cells, and doesn’t penetrate through the skin. When they learned that UV light was used to disinfect water in certain treatment plants, they began to ask a different question.

“We thought if they are already using UV light, maybe they could use our particles as well,” Brandl says. “Then we came up with the idea to use our particles to remove toxic chemicals, pollutants, or hormones from water, because we saw that the particles aggregate once you irradiate them with UV light.”

A trap for ‘water-fearing’ pollution

The researchers synthesized polymers from polyethylene glycol, a widely used compound found in laxatives, toothpaste, and eye drops and approved by the Food and Drug Administration as a food additive, and polylactic acid, a biodegradable plastic used in compostable cups and glassware.

Nanoparticles made from these polymers have a hydrophobic core and a hydrophilic shell. Due to molecular-scale forces, in a solution hydrophobic pollutant molecules move toward the hydrophobic nanoparticles, and adsorb onto their surface, where they effectively become “trapped.” This same phenomenon is at work when spaghetti sauce stains the surface of plastic containers, turning them red: In that case, both the plastic and the oil-based sauce are hydrophobic and interact together.

If left alone, these nanomaterials would remain suspended and dispersed evenly in water. But when exposed to UV light, the stabilizing outer shell of the particles is shed, and — now “enriched” by the pollutants — they form larger aggregates that can then be removed through filtration, sedimentation, or other methods.

The researchers used the method to extract phthalates, hormone-disrupting chemicals used to soften plastics, from wastewater; BPA, another endocrine-disrupting synthetic compound widely used in plastic bottles and other resinous consumer goods, from thermal printing paper samples; and polycyclic aromatic hydrocarbons, carcinogenic compounds formed from incomplete combustion of fuels, from contaminated soil.

The process is irreversible and the polymers are biodegradable, minimizing the risks of leaving toxic secondary products to persist in, say, a body of water. “Once they switch to this macro situation where they’re big clumps,” Bertrand says, “you won’t be able to bring them back to the nano state again.”

The fundamental breakthrough, according to the researchers, was confirming that small molecules do indeed adsorb passively onto the surface of nanoparticles.

“To the best of our knowledge, it is the first time that the interactions of small molecules with pre-formed nanoparticles can be directly measured,” they write in Nature Communications.

Nano cleansing

Even more exciting, they say, is the wide range of potential uses, from environmental remediation to medical analysis.

The polymers are synthesized at room temperature, and don’t need to be specially prepared to target specific compounds; they are broadly applicable to all kinds of hydrophobic chemicals and molecules.

“The interactions we exploit to remove the pollutants are non-specific,” Brandl says. “We can remove hormones, BPA, and pesticides that are all present in the same sample, and we can do this in one step.”

And the nanoparticles’ high surface-area-to-volume ratio means that only a small amount is needed to remove a relatively large quantity of pollutants. The technique could thus offer potential for the cost-effective cleanup of contaminated water and soil on a wider scale.

“From the applied perspective, we showed in a system that the adsorption of small molecules on the surface of the nanoparticles can be used for extraction of any kind,” Bertrand says. “It opens the door for many other applications down the line.”

This approach could possibly be further developed, he speculates, to replace the widespread use of organic solvents for everything from decaffeinating coffee to making paint thinners. Bertrand cites DDT, banned for use as a pesticide in the U.S. since 1972 but still widely used in other parts of the world, as another example of a persistent pollutant that could potentially be remediated using these nanomaterials. “And for analytical applications where you don’t need as much volume to purify or concentrate, this might be interesting,” Bertrand says, offering the example of a cheap testing kit for urine analysis of medical patients.

The study also suggests the broader potential for adapting nanoscale drug-delivery techniques developed for use in environmental remediation.

“That we can apply some of the highly sophisticated, high-precision tools developed for the pharmaceutical industry, and now look at the use of these technologies in broader terms, is phenomenal,” says Frank Gu, an assistant professor of chemical engineering at the University of Waterloo in Canada, and an expert in nanoengineering for health care and medical applications.

“When you think about field deployment, that’s far down the road, but this paper offers a really exciting opportunity to crack a problem that is persistently present,” says Gu, who was not involved in the research. “If you take the normal conventional civil engineering or chemical engineering approach to treating it, it just won’t touch it. That’s where the most exciting part is.”

The researchers have made this illustration of their work available,

Nanoparticles that lose their stability upon irradiation with light have been designed to extract endocrine disruptors, pesticides, and other contaminants from water and soils. The system exploits the large surface-to-volume ratio of nanoparticles, while the photoinduced precipitation ensures nanomaterials are not released in the environment.
Image: Nicolas Bertrand Courtesy: MIT

Here’s a link to and a citation for the paper,

Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil by Ferdinand Brandl, Nicolas Bertrand, Eliana Martins Lima & Robert Langer. Nature Communications 6, Article number: 7765 doi:10.1038/ncomms8765 Published 21 July 2015

This paper is open access.

7th (2015) Canadian Science Policy Conference line-up

The Seventh Canadian Science Policy Conference, being held in Ottawa, Ontario from Nov. 25 – 27, 2015 at the Delta Ottawa City Centre Hotel, has announced its programme and speakers in a July 16, 2015 Canadian Science Policy Centre newsletter,

Presentations

Theme 1: Transformative and Converging Technologies on
Private Sector Innovation and Productivity

New technologies, from 3D printing to quantum computing, present risks and opportunities for Canadian industries and the economy. Join CSPC 2015 in a discussion of how Canada’s mining industry and digital economy can best take advantage of these technological innovations.

Challenges Associated with Transferring New Technologies to the Mining Industry,
Centre for Excellence in Mining Innovation

Creating Digital Opportunity for Canada: challenges and emerging trends,
Munk School of Global Affairs

Disruptive Technologies,
Ryerson University

Theme 2: Big Science in Canada – Realizing the Benefits

ENCODE, the LHC, the Very Large Array: Big Science is reshaping modern research and with it, Canada’s scientific landscape. Join the conversation at CSPC 2015 on how Canada navigates those vast new waters.

Science Without Boundaries,
TRIUMF

Are we Jupiters in the celestial field of science? How ‘Big Science’ and major facilities influence Canadian Science Culture,
SNOLAB

Theme 3: Transformation of Science, Society and Research
in the Digital Age: Open science, participation, security and
confidentiality

The digital age has brought important changes to the Canadian scientific landscape. Come discuss and think about the effects of those changes on our society.

The Role of Innovation in Addressing Antimicrobial Resistance,
Industry Canada

Digital Literacy: What is going to make the real difference?,
Actua

Science Blogging: The Next Generation,
Science Borealis

Proposals for Advancing Canadian Open Science Policy,
Environment Canada

Theme 4: Science and Innovation for Development

Innovation and science are among the key drivers of development. Come and find out how Canadian creativity creates unique opportunities.

Role of Open Science in Innovation for Development,
International Development Research Centre (IDRC)

Learning Creativity in STEM Education,
University of Calgary

Theme 5: Evidence-Based Decision Making: The challenge
of connecting science and policy making

GMOs, climate change, energy: Many of the big major issues facing Canada fall at the nexus of science and policymaking. Join CSPC 2015 in a discussion of the role of big data and evidence-based decision-making in government.

Beating Superbugs: Innovative Genomics and Policies to Tackle AMR,
Genome Canada

Addressing Concerns Over GMOs – Striking the Right Balance,
Agriculture and Agri-food Canada

Who Should be the Voice for Science Within Government?,
Evidence for Democracy

Data Driven Decisions: Putting IoT, Big Data and Analytics to Work For Better Public Policy,
Cybera

The future of university support for Canada’s Science, Technology & Innovation Strategy,
York University

Please note, there will be more panels announced soon.

Keynote Session

Science Advice to Governments
Innovation, science and technology have never had a more critical role in decision making than they do today. The CSPC 2015 keynote session will address the importance and role of input from the scientific world to decision making in political affairs.

Speakers:

Sir Peter Gluckman,
Chief Science Adviser to New Zealand Government

Rémi Quirion,
Chief Scientist, Quebec

Arthur Carty,
Executive Director, Inst. Nanotechnology U Waterloo, Former science adviser to PM Paul Martin [emphasis mine]

I have a few comments. First, I’m glad to see the balance between the “money, money, money” attitude and more scholarly/policy interests has been evened out somewhat as compared to last year’s conference in Halifax (Nova Scotia).

Second, I see there aren’t any politicians listed as speakers in the website’s banner, as is usually the case (Ted Hsu, Member of Parliament and current science critic for the Liberal Party, is on the speaker list but will not be running in the 2015 election). This makes some sense since there is a federal election coming up in October 2015 and changes are likely, especially since it seems to be a three-horse race at this point. (For anyone unfamiliar with the term, it means that any one of the three main political parties could win and lead should they possess a majority of the votes in the House of Commons. There are other possibilities, such as a minority government led by one party (the Harper Conservatives have been in that situation) or, should two parties with enough combined votes to outnumber the third party be able to agree, a coalition government of some kind.) As for other politicians at the provincial and municipal levels, perhaps it’s too early to commit?

Third, Arthur Carty, as he notes, was a science advisor to Prime Minister Paul Martin. Martin led the country for approximately two years, from Dec. 2003 to Nov. 2005, when a motion of non-confidence was passed in Parliament (more about Paul Martin and his political career in his Wikipedia entry) and an election was called for January 2006, in which Stephen Harper and the Conservatives were voted in to form a minority government. Arthur Carty’s tenure as Canada’s first science advisor began in 2004 and ended in 2008, according to Carty’s Wikipedia entry. It seems Carty is not claiming to have been Stephen Harper’s science advisor although arguably he was the Harper government’s science advisor for the same amount of time. This excerpt from a March 6, 2008 Canada.com news item seems to shed some light on why the Harper sojourn is not mentioned in Carty’s conference biography,

The need for a national science adviser has never been greater and the government is risking damage to Canada’s international reputation as a science leader by cutting the position, according to the man who holds the job until the end of the month.

Appearing before a Commons committee on Thursday, Arthur Carty told MPs that he is “dismayed and disappointed” that the Conservative government decided last fall to discontinue the office of the national science adviser.

“There are, I think, negative consequences of eliminating the position,” said Carty. He said his international counterparts have expressed support for him and that Canada eliminating the position has the “potential to tarnish our image,” as a world leader in science and innovation.

Carty was head of the National Research Council in 2004 when former prime minister Paul Martin asked him to be his science adviser.

In October 2006, [months] after Prime Minister Stephen Harper was elected, Carty’s office was shifted to Industry Canada. After that move, he and his staff were “increasingly marginalized,” Carty told the industry, science and technology committee, and they had little input in crafting the government’s new science and technology strategy.

But Conservative members of the committee questioned whether taxpayers got their money’s worth from the national adviser and asked Carty to explain travel and meal expenses he had claimed during his time in the public service, including lunch and dinner meetings that cost around $1,000 each. Some of the figures they cited were from when Carty was head of the National Research Council.

The suggestions that Carty took advantage of the public purse prompted Liberal MP Scott Brison to accuse the Tories of launching a “smear campaign” against Carty, whom he described as a “great public servant.”

“I have never overcharged the government for anything,” Carty said in his own defence.

The keynote has the potential for some liveliness based on Carty’s history as a science advisor but one never knows.  It would have been nice if the organizers had been able to include someone from South Korea, Japan, India, China, etc. to be a keynote speaker on the topic of science advice. After all, those countries have all invested heavily in science and made some significant social and economic progress based on those investments. If you’re going to talk about the global science enterprise perhaps you could attract a few new people (and let’s not forget Latin America, Africa, and the Middle East) to the table, so to speak.

You can find out more about the conference and register (there’s a 30% supersaver discount at the moment) here.

Canada and some graphene scene tidbits

For a long time it seemed as if every country in the world, except Canada, had some sort of graphene event. According to a July 16, 2015 news item on Nanotechnology Now, Canada has now stepped up, albeit in a peculiarly Canadian fashion. First the news,

Mid October [Oct. 14 -16, 2015], the Graphene & 2D Materials Canada 2015 International Conference & Exhibition (www.graphenecanada2015.com) will take place in Montreal (Canada).

I found a July 16, 2015 news release (PDF) announcing the Canadian event on the lead organizer’s (Phantoms Foundation located in Spain) website,

On the second day of the event (15th October, 2015), an Industrial Forum will bring together top industry leaders to discuss recent advances in technology developments and business opportunities in graphene commercialization.
At this stage, the event unveils 38 keynote & invited speakers. On the Industrial Forum 19 of them will present the latest in terms of Energy, Applications, Production and Worldwide Initiatives & Priorities.

Plenary:
Gary Economo (Grafoid Inc., Canada)
Khasha Ghaffarzadeh (IDTechEx, UK)
Shu-Jen Han (IBM T.J. Watson Research Center, USA)
Bor Z. Jang (Angstron Materials, USA)
Seongjun Park (Samsung Advanced Institute of Technology (SAIT), Korea)
Chun-Yun Sung (Lockheed Martin, USA)

Parallel Sessions:
Gordon Chiu (Grafoid Inc., Canada)
Jesus de la Fuente (Graphenea, Spain)
Mark Gallerneault (ALCERECO Inc., Canada)
Ray Gibbs (Haydale Graphene Industries, UK)
Masataka Hasegawa (AIST, Japan)
Byung Hee Hong (SNU & Graphene Square, Korea)
Tony Ling (Jestico + Whiles, UK)
Carla Miner (SDTC, Canada)
Gregory Pognon (THALES Research & Technology, France)
Elena Polyakova (Graphene Laboratories Inc, USA)
Federico Rosei (INRS–EMT, Université du Québec, Canada)
Aiping Yu (University of Waterloo, Canada)
Hua Zhang (MSE-NTU, Singapore)

Apart from the industrial forum, several industry-related activities will be organized:
– Extensive thematic workshops in parallel (Standardization, Materials & Devices Characterization, Bio & Health and Electronic Devices)
– An exhibition carried out with the latest graphene trends (Grafoid, RAYMOR NanoIntegris, Nanomagnetics Instruments, ICEX and Xerox Research Centre of Canada (XRCC) already confirmed)
– B2B meetings to foster technical cooperation in the field of Graphene

It’s still possible to contribute to the event with an oral presentation. The call for abstracts is open until July, 20 [2015]. [emphasis mine]

Graphene Canada 2015 is already supported by Canada’s leading graphene applications developer, Grafoid Inc., Tourisme Montréal and Université de Montréal.

This is what makes the event peculiarly Canadian: multiculturalism, anyone? From the news release,

Organisers: Phantoms Foundation www.phantomsnet.net & Grafoid Foundation (lead organizers)

CEMES/CNRS (France) | Grafoid (Canada) | Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | IIT (Italy) | McGill University, Canada | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal, Canada

It’s billed as a ‘Canada Graphene 2015’ and, as I recall, these types of events don’t usually have so many other countries listed as organizers. For example, UK Graphene 2015 would have mostly or all of its organizers (especially the leads) located in the UK.

Getting to the Canadian content, in a Feb. 23, 2015 posting I wrote about Grafoid at length, tracking some of its relationships to companies it owns, a business deal with Hydro Québec, a partnership with the University of Waterloo, and a nonrepayable grant from the Canadian federal government (Sustainable Development Technology Canada [SDTC]). Do take a look at the post if you’re curious about the heavily interlinked nature of the Canadian graphene scene, and take another look at the list of speakers and their agencies (Mark Gallerneault of ALCERECO [partially owned by Grafoid], Carla Miner of SDTC [Grafoid received monies from the Canadian federal government], Federico Rosei of INRS–EMT, Université du Québec [another Quebec link], and Aiping Yu of the University of Waterloo [an academic partner to Grafoid]). The Canadian graphene community is a small one, so it’s not surprising there are links between the Canadian speakers, but it does seem odd that Lomiko Metals is not represented here. Still, new speakers have been announced since the news release (e.g., Frank Koppens of ICFO, Spain, and Vladimir Falko of Lancaster University, UK), so time remains.

Meanwhile, Lomiko Metals has announced in a July 17, 2015 news item on Azonano that Graphene 3D Lab has changed the percentage of its outstanding shares, affecting the percentage that Lomiko owns, amid some production and distribution announcements. The bit about launching commercial sales of its graphene filament seems more interesting to me,

On March 16, 2015 Graphene 3D Lab (TSXV:GGG) (OTCQB:GPHBF) announced that it launched commercial sales of its Conductive Graphene Filament for 3D printing. The filament incorporates highly conductive proprietary nano-carbon materials to enhance the properties of PLA, a widely used thermoplastic material for 3D printing; therefore, the filament is compatible with most commercially available 3D printers. The conductive filament can be used to print conductive traces (similar to as used in circuit boards) within 3D printed parts for electronics.

So, that’s all I’ve got for Canada’s graphene scene.

Mystery of glass—shattered and Happy Canada Day!

I’m pretty sure I’ve said this before but a repetition can’t hurt, “I love glass both for the art and the mystery.” Naturally, I am of two minds about this ‘shattered’ glass mystery from the University of Waterloo (Canada).

A June 29, 2015 University of Waterloo news release (also on EurekAlert) provides a teasing (for impatient people like me) introduction before describing the solution to the mystery,

A physicist at the University of Waterloo is among a team of scientists who have described how glasses form at the molecular level and provided a possible solution to a problem that has stumped scientists for decades.

Their simple theory is expected to open up the study of glasses to non-experts and undergraduates as well as inspire breakthroughs in novel nanomaterials.

The paper published by physicists from the University of Waterloo, McMaster University, ESPCI ParisTech and Université Paris Diderot appeared in the prestigious peer-reviewed journal, Proceedings of the National Academy of Sciences (PNAS).

Glasses are much more than silicon-based materials in bottles and windows. In fact, any solid without an ordered, crystalline structure — metal, plastic, a polymer — that forms a molten liquid when heated above a certain temperature is a glass. Glasses are an essential material in technology, pharmaceuticals, housing, renewable energy and increasingly nano electronics.

“We were surprised — delighted — that the model turned out to be so simple,” said author James Forrest, a University Research Chair and professor in the Faculty of Science. “We were convinced it had already been published.”

The theory relies on two basic concepts: molecular crowding and string-like co-operative movement. [emphasis mine] Molecular crowding describes how molecules within glasses move like people in a crowded room. As the number of people increase, the amount of free volume decreases and the slower people can move through the crowd. Those people next to the door are able to move more freely, just as the surfaces of glasses never actually stop flowing, even at lower temperatures.

The more crowded the room, the more you rely on the co-operative movement with your neighbours to get where you’re going. Likewise, individual molecules within a glass aren’t able to move totally freely. They move with, yet are confined by, strings of weak molecular bonds with their neighbours.

Theories of crowding and cooperative movement are decades old. This is the first time scientists combined both theories to describe how a liquid turns into a glass.
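
The news release stays qualitative; as a generic illustration of the crowding half of the idea (this is Doolittle's classic free-volume relation with arbitrary constants, not the paper's cooperative-string model), a short Python sketch shows how relaxation slows explosively as the free volume around each molecule shrinks:

import math

# Doolittle-style free-volume picture: tau ~ A * exp(B * v_occupied / v_free).
# Constants and volumes are arbitrary; the point is the explosive slowdown.
A, B = 1.0, 1.0
occupied = 1.0                       # occupied volume per molecule (arbitrary units)

def relative_relaxation_time(free_volume):
    return A * math.exp(B * occupied / free_volume)

for vf in (0.5, 0.2, 0.1, 0.05, 0.02):
    print(f"free volume {vf:>5}: relaxation slows by a factor of "
          f"{relative_relaxation_time(vf):.3g}")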

“Research on glasses is normally reserved for specialists in condensed matter physics,” said Forrest, who is also an associate faculty member at Perimeter Institute for Theoretical Physics and a member of the Waterloo Institute for Nanotechnology.  “Now a whole new generation of scientists can study and apply glasses just using first-year calculus.”

Their theory successfully predicts everything from bulk behaviour to surface flow to the once-elusive phenomenon of the glass transition itself. Forrest and colleagues worked for 20 years to bring theory into agreement with decades of observation on glassy materials.

An accurate theory becomes particularly important when trying to understand glass dynamics at the nanoscale. This finding has implications for developing and manufacturing new nanomaterials, such as glasses with conductive properties, or even calculating the uptake of glassy forms of pharmaceuticals.

Here’s a link to and a citation for the paper,

Cooperative strings and glassy interfaces by Thomas Salez, Justin Salez, Kari Dalnoki-Veress, Elie Raphaël, and James A. Forrest. PNAS (Proceedings of the National Academy of Sciences) Published online before print June 22, 2015, doi: 10.1073/pnas.1503133112

This paper is behind a paywall.

Finally and again, Happy Canada Day!