Monthly Archives: May 2015

An efficient method for signal transmission from nanocomponents

A May 23, 2015 news item on Nanotechnology Now describes research into perfecting the use of nanocomponents in electronic circuits,

Physicists have developed an innovative method that could enable the efficient use of nanocomponents in electronic circuits. To achieve this, they have developed a layout in which a nanocomponent is connected to two electrical conductors, which uncouple the electrical signal in a highly efficient manner. The scientists at the Department of Physics and the Swiss Nanoscience Institute at the University of Basel have published their results in the scientific journal Nature Communications together with their colleagues from ETH Zurich.

A May 22, 2015 University of Basel press release (also on EurekAlert) describes why there is interest in smaller components and some of the challenges that arise once electrodes can be measured in atoms,

Electronic components are becoming smaller and smaller. Components measuring just a few nanometers – the size of around ten atoms – are already being produced in research laboratories. Thanks to miniaturization, numerous electronic components can be placed in restricted spaces, which will boost the performance of electronics even further in the future.

Teams of scientists around the world are investigating how to produce such nanocomponents with the aid of carbon nanotubes. These tubes have unique properties – they offer excellent heat conduction, can withstand strong currents, and are suitable for use as conductors or semiconductors. However, signal transmission between a carbon nanotube and a significantly larger electrical conductor remains problematic as large portions of the electrical signal are lost due to the reflection of part of the signal.

Antireflex increases efficiency

A similar problem occurs with light sources inside a glass object. A large amount of light is reflected by the walls, which means that only a small proportion reaches the outside. This can be countered by using an antireflex coating on the walls.

The press release goes on to describe a new technique for addressing the issue,

Led by Professor Christian Schönenberger, scientists in Basel are now taking a similar approach to nanoelectronics. They have developed an antireflex device for electrical signals to reduce the reflection that occurs during transmission from nanocomponents to larger circuits. To do so, they created a special formation of electrical conductors of a certain length, which are coupled with a carbon nanotube. The researchers were therefore able to efficiently uncouple a high-frequency signal from the nanocomponent.

Differences in impedance cause the problem

Coupling nanostructures with significantly larger conductors proved difficult because they have very different impedances. The greater the difference in impedance between two conducting structures, the greater the loss during transmission. The difference between nanocomponents and macroscopic conductors is so great that no signal will be transmitted unless countermeasures are taken. The antireflex device minimizes this effect and adjusts the impedances, leading to efficient coupling. This brings the scientists significantly closer to their goal of using nanocomponents to transmit signals in electronic parts.
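For a rough sense of why matching matters, the fraction of signal power reflected at a junction between two impedances Z1 and Z2 is ((Z2 - Z1)/(Z2 + Z1))². Here's a minimal back-of-envelope sketch in Python, using the quantum resistance h/4e² (about 6.45 kΩ, a common rough estimate for a nanotube's impedance) against a standard 50 Ω line; the numbers are illustrative, not figures from the paper,

```python
# Back-of-envelope: power lost to reflection at an impedance mismatch.
# Z_tube uses the quantum resistance h/(4e^2) as a rough stand-in for a
# carbon nanotube's impedance; Z_line is a standard 50-ohm line.
# Illustrative values only, not figures from the Basel/ETH paper.

h = 6.62607015e-34   # Planck constant (J*s)
e = 1.602176634e-19  # elementary charge (C)

Z_tube = h / (4 * e**2)  # ~6.45 kOhm
Z_line = 50.0            # ohms

gamma = (Z_tube - Z_line) / (Z_tube + Z_line)  # amplitude reflection coefficient
reflected = gamma**2                           # fraction of power reflected

print(f"Nanotube impedance: {Z_tube / 1e3:.2f} kOhm")
print(f"Power reflected:    {reflected:.1%}")
print(f"Power transmitted:  {1 - reflected:.1%}")
```

With those numbers, roughly 97% of the power bounces straight back, which is why an impedance-matching ‘antireflex’ stage between the nanotube and the larger circuit makes such a difference.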

Here’s a link to and a citation for the paper,

Clean carbon nanotubes coupled to superconducting impedance-matching circuits by V. Ranjan, G. Puebla-Hellmann, M. Jung, T. Hasler, A. Nunnenkamp, M. Muoth, C. Hierold, A. Wallraff, & C. Schönenberger. Nature Communications 6, Article number: 7165 doi:10.1038/ncomms8165 Published 15 May 2015

This paper is behind a paywall.

‘Green’, flexible electronics with nanocellulose materials

Bendable or flexible electronics based on nanocellulose paper present a ‘green’ alternative to other solutions according to a May 20, 2015 American Chemical Society (ACS) news release (also on EurekAlert),

Technology experts have long predicted the coming age of flexible electronics, and researchers have been working on multiple fronts to reach that goal. But many of the advances rely on petroleum-based plastics and toxic materials. Yu-Zhong Wang, Fei Song and colleagues wanted to seek a “greener” way forward.

The researchers developed a thin, clear nanocellulose paper made out of wood flour and infused it with biocompatible quantum dots — tiny, semiconducting crystals — made out of zinc and selenium. The paper glowed at room temperature and could be rolled and unrolled without cracking.

(h/t Nanotechnology Now, May 20, 2015)
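As an aside on why dot size matters for the paper's glow: the standard Brus effective-mass formula estimates how shrinking a quantum dot blue-shifts its emission. Here's a sketch using approximate textbook parameters for bulk ZnSe, which I'm assuming as stand-ins since the release doesn't specify the dots' exact composition or size,

```python
import math

# Brus effective-mass estimate of quantum-dot emission energy:
#   E(R) = Eg + (hbar^2 pi^2 / (2 R^2)) (1/me + 1/mh)
#          - 1.8 e / (4 pi eps0 eps_r R)
# Parameters below are approximate textbook values for bulk ZnSe,
# assumed for illustration; the paper's dots may differ.

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
e    = 1.602176634e-19   # elementary charge (C)
m0   = 9.1093837015e-31  # electron rest mass (kg)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

Eg    = 2.70        # bulk ZnSe band gap (eV), approximate
me    = 0.16 * m0   # electron effective mass, approximate
mh    = 0.60 * m0   # hole effective mass, approximate
eps_r = 8.7         # relative permittivity, approximate

def emission_ev(radius_nm):
    """Estimated emission energy (eV) for a dot of the given radius."""
    R = radius_nm * 1e-9
    confinement = (hbar**2 * math.pi**2 / (2 * R**2)) * (1/me + 1/mh) / e
    coulomb = 1.8 * e / (4 * math.pi * eps0 * eps_r * R)
    return Eg + confinement - coulomb

for r in (2.0, 3.0, 5.0):
    E = emission_ev(r)
    print(f"R = {r:.0f} nm -> ~{E:.2f} eV (~{1239.8 / E:.0f} nm light)")
```

The point of the sketch is simply that a nanometre or two of radius moves the emission across the visible spectrum, which is why quantum dots are attractive for a luminescent paper.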

Beyond the mention of wood flour, neither the news release nor the abstract specifies what source material (species of wood, agricultural waste, etc.) was used to derive the nanocellulose. Regardless, here’s a link to and a citation for the paper,

Let It Shine: A Transparent and Photoluminescent Foldable Nanocellulose/Quantum Dot Paper by Juan Xue, Fei Song, Xue-wu Yin, Xiu-li Wang, and Yu-zhong Wang. ACS Appl. Mater. Interfaces, 2015, 7 (19), pp 10076–10079 DOI: 10.1021/acsami.5b02011 Publication Date (Web): May 4, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Earthquakes, deep and shallow, and their nanocrystals

Those of us who live in this region are warned on a regular basis that a ‘big’ one is overdue somewhere along the West Coast of Canada and the US. It gives me an interest in the geological side of things. While the May 19, 2015 news items on Azonano featuring the research story as told by the University of Oklahoma and the University of California at Riverside don’t fall directly under my purview, they’re close enough.

The May 18, 2015 University of Oklahoma news release on EurekAlert offers a succinct summary,

A University of Oklahoma structural geologist and collaborators are studying earthquake instability and the mechanisms associated with fault weakening during slip. The mechanism of this weakening is central to understanding earthquake sliding.

Ze’ev Reches, professor in the OU School of Geology and Geophysics, is using electron microscopy to examine velocity and temperature in two key observations: (1) a high-speed friction experiment on carbonate at conditions of shallow earthquakes, and (2) a high-pressure/high-temperature faulting experiment at conditions of very deep earthquakes.

Reches and his collaborators have shown that phase transformation and the formation of nano-size (millionth of a millimeter) grains are associated with profound weakening and that fluid is not necessary for such weakening. If this mechanism operates in major earthquakes, it resolves two major conflicts between laboratory results and natural faulting: the lack of a thermal zone around major faults and the rarity of glassy rocks along faults.

The May 18, 2015 University of California at Riverside (UCR) news release provides more detail about earthquakes,

Earthquakes are labeled “shallow” if they occur at less than 50 kilometers depth.  They are labeled “deep” if they occur at 300-700 kilometers depth.  When slippage occurs during these earthquakes, the faults weaken.  How this fault weakening takes place is central to understanding earthquake sliding.

A new study published online in Nature Geoscience today by a research team led by University of California, Riverside geologists now reports that a universal sliding mechanism operates for earthquakes of all depths – from the deep ones all the way up to the crustal ones.

“Although shallow earthquakes – the kind that threaten California – must initiate differently from the very deep ones, our new work shows that, once started, they both slide by the same physics,” said deep-earthquake expert Harry W. Green II, a distinguished professor of the Graduate Division in UC Riverside’s Department of Earth Sciences, who led the research project. “Our research paper presents a new, unifying model of how earthquakes work. Our results provide a more accurate understanding of what happens during earthquake sliding that can lead to better computer models and could lead to better predictions of seismic shaking danger.”

The UCR news release goes on to describe the physics of sliding and a controversy concerning shallow and deep earthquakes,

The physics of the sliding is the self-lubrication of the earthquake fault by flow of a new material consisting of tiny new crystals, the study reports. Both shallow earthquakes and deep ones involve phase transformations of rocks that produce tiny crystals of new phases on which sliding occurs.

“Other researchers have suggested that fluids are present in the fault zones or generated there,” Green said. “Our study shows fluids are not necessary for fault weakening. As earthquakes get started, local extreme heating takes place in the fault zone. The result of that heating in shallow earthquakes is to initiate reactions like the ones that take place in deep earthquakes so they both end up lubricated in the same way.”

Green explained that at 300-700 kilometers depth, the pressure and temperature are so high that rocks in this deep interior of the planet cannot break by the brittle processes seen on Earth’s surface. In the case of shallow earthquakes, stresses on the fault increase slowly in response to slow movement of tectonic plates, with sliding beginning when these stresses exceed static friction. While deep earthquakes also get started in response to increasing stresses, the rocks there flow rather than break, except under special conditions.

“Those special conditions of temperature and pressure induce minerals in the rock to break down to other minerals, and in the process of this phase transformation a fault can form and suddenly move, radiating the shaking – just like at shallow depths,” Green said.

The research explains why large faults like the San Andreas Fault in California do not have a heat-flow anomaly around them. Were shallow earthquakes to slide by the grinding and crunching of rock, as geologists once imagined, the process would generate enough heat so that major faults like the San Andreas would be a little warmer along their length than they would be otherwise.

“But such a predicted warm region along such faults has never been found,” Green said.  “The logical conclusion is that the fault must move more easily than we thought.  Extreme heating in a very thin zone along the fault produces the very weak lubricant.  The volume of material that is heated is very small and survives for a very short time – seconds, perhaps – followed by very little heat generation during sliding because the lubricant is very weak.”

The new research also explains why faults with glass on them (reflecting the fact that during the earthquake the fault zone melted) are rare. As shallow earthquakes start, the temperature rises locally until it is hot enough to start a chemical reaction – usually the breakdown of clays or carbonates or other hydrous phases in the fault zone.  The reactions that break down the clays or carbonates stop the temperature from climbing higher, with heat being used up in the reactions that produce the nanocrystalline lubricant.

If the fault zone does not have hydrous phases or carbonates, the sudden heating that begins when sliding starts raises the local temperature on the fault all the way to the melting temperature of the rock.  In such cases, the melt behaves like a lubricant and the sliding surface ends up covered with melt (that would quench to a glass) instead of the nanocrystalline lubricant.

“The reason this does not happen often, that is, the reason we do not see lots of faults with glass on them, is that the Earth’s crust is made up to a large degree of hydrous and carbonate phases, and even the rocks that don’t have such phases usually have feldspars that get crushed up in the fault zone,” Green explained. “The feldspars will ‘rot’ to clays during the hundred years or so between earthquakes as water moves along the fault zone. In that case, when the next earthquake comes, the fault zone is ready with clays and other phases that can break down, and the process repeats itself.”

The research involved the study of laboratory earthquakes – high-pressure earthquakes as well as high-speed ones – using electron microscopy in friction and faulting experiments. It was Green’s laboratory that first conducted a serendipitous series of experiments, in 1989, on the right kind of mantle rocks that give geologists insight into how deep earthquakes work. In the new work, Green and his team also investigated the Punchbowl Fault, an ancestral branch of the San Andreas Fault that has been exhumed by erosion from several kilometers depth, and found nanometric materials within the fault – as predicted by their model.

Here’s a link to and a citation for the paper,

Phase transformation and nanometric flow cause extreme weakening during fault slip by H. W. Green II, F. Shi, K. Bozhilov, G. Xia, & Z. Reches. Nature Geoscience (2015) doi:10.1038/ngeo2436 Published online 18 May 2015

This paper is behind a paywall.

A GEnIuS approach to oil spill remediation at 18th European Forum on Eco-innovation

In light of recent local events (an oil spill in Vancouver’s [Canada] English Bay, a popular local beach [more details in my April 16, 2015 post]), it seems appropriate to mention an environmentally friendly solution to mopping up oil spills (oil spill remediation). A May 21, 2015 news item on Azonano features a presentation on the topic at hand (Note: A link has been removed),

Directa Plus is at the 18th European Forum on Eco-innovation to present GEnIuS, the innovative project that has led to the creation of a graphene-based product able to remove hydrocarbons from polluted water and soil.

The Forum, entitled “Boosting competitiveness and innovation”, is being held by the European Commission on the 20th and 21st of May in Barcelona. The main purpose of this event is to present the latest developments in the eco-innovation field: an important moment where emerging and cutting-edge innovators will get in contact with promising new solutions from political, financial and technological points of view.

Directa Plus’s research has led to the creation of an ecological, innovative and highly effective oil adsorbent characterized by unique performance in oil adsorption together with an absence of toxicity and flammability, and the possibility of recovering the oil.

The creation of this graphene-based oil-adsorbent product, commercialized as Grafysorber, was promoted by the GEnIuS project, and the product has already been approved by the Italian Ministry of the Environment for use in oil spill clean-up activities.

Giulio Cesareo, Directa Plus President and CEO, commented:

“Grafysorber embodies the nano-carbon paradox: with a nano-carbon material we are able to undo part of the damage caused by hydrocarbons, which are themselves derived from carbon.

“Moreover, our product, once exhausted after the depuration of water, ends its life cycle positively inside asphalt and bitumen, introducing new properties such as thermal conductivity and mechanical reinforcement. I believe that every company is obliged to work in a sustainable manner to guarantee a balanced use of resources and their reuse, where possible.”
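For a sense of what an adsorption capacity figure means in practice, here's a small sketch estimating the sorbent mass a spill would need. The capacity value is a placeholder assumption of mine, not a published Grafysorber specification,

```python
# Rough sorbent-mass estimate for an oil spill clean-up.
# capacity_g_per_g (grams of oil taken up per gram of sorbent) is a
# placeholder assumption, NOT a published Grafysorber specification.

def sorbent_needed_kg(spill_litres, oil_density_kg_per_l=0.85,
                      capacity_g_per_g=50.0):
    """Kilograms of sorbent needed to soak up a spill of the given size."""
    oil_mass_kg = spill_litres * oil_density_kg_per_l
    return oil_mass_kg / capacity_g_per_g

# Example: a 10,000-litre spill under the assumed capacity.
print(f"{sorbent_needed_kg(10_000):.0f} kg of sorbent")  # -> 170 kg
```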

I mentioned a Romanian project employing Directa Plus’s solution, Grafysorber, in a December 30, 2014 post. At the time, the product was called Graphene Plus and Grafysorber was one of its constituents.

You can find more information about Graphene Eco Innovative Sorbent (GEnIuS) here and about Directa Plus here. The company is located in Italy.

One final bit about oil spills and remediation: the Deepwater Horizon/Gulf/BP oil spill has spawned, amongst many others, a paper from the University of Georgia (US) noting that we don’t know much about the dispersants used in the clean-up. From a May 14, 2015 University of Georgia news release on EurekAlert,

New commentary in Nature Reviews Microbiology by Samantha Joye of the University of Georgia and her colleagues argues for further in-depth assessments of the impacts of dispersants on microorganisms to guide their use in response to future oil spills.

Chemical dispersants are widely used in emergency responses to oil spills in marine environments as a means of stimulating microbial degradation of oil. After the Deepwater Horizon spill in 2010, dispersants were applied to the sea surface and deep waters of the Gulf of Mexico, the latter of which was unprecedented. Dispersants were used as a first line of defense even though little is known about how they affect microbial communities or the biodegradation activities they are intended to spur.

The researchers document historical context for the use of dispersants, their approval by the Environmental Protection Agency and the uncertainty about whether they stimulate or in fact inhibit the microbial degradation of oil in marine ecosystems.

One challenge of testing the toxicity from the use of dispersants on the broader ecosystem is the complex microbial communities of the different habitats represented in a large marine environment, such as the Gulf of Mexico. Development of model microbial communities and type species that reflect the composition of surface water, deep water, deep-sea sediments, beach sediments and marsh sediments is needed to evaluate the toxicity effects of dispersants.

“The bottom line is that we do not truly understand the full range of impacts that dispersants have on microbial communities, and we must have this knowledge in hand before the next marine oil spill occurs to support the decision-making process by the response community,” Joye said.

I hope the Canadians who are overseeing our waterways are taking note.

The use of graphene scanners in art conservation

A May 20, 2015 news item on phys.org describes a new method of examining art work without damaging it,

Museum curators, art restorers, archaeologists and the broader public will soon be able to learn much more about paintings and other historic objects, thanks to an EU project which has become a pioneer in non-invasive art exploration techniques, based on a graphene scanner.

Researchers working on INSIDDE [INtegration of cost-effective Solutions for Imaging, Detection, and Digitisation of hidden Elements in paintings], which received a EUR 2.9 million investment from FP7 ICT Research Programme, have developed a graphene scanner that can explore under the surface of a painting, or through the dirt covering an ancient object unearthed in an archaeological dig, without touching it.

‘As well as showing sketches or previous paintings that have remained hidden beneath a particular artwork, the scanner, together with post-processing techniques, will allow us to identify and distinguish brushstrokes to understand the creative process,’ explained Javier Gutiérrez, of Spanish technology company Treelogic, which is leading the project.

A May 19, 2015 CORDIS press release, which originated the news item, provides more details about the graphene scanner’s capabilities,

The challenge in this field is to develop advanced technologies that avoid damaging the artwork under examination. Solvents, with their potential side effects, are progressively being replaced by the likes of lasers to remove dirt and varnish from paintings. Limestone-producing bacteria can be used to fill cracks in sculptures. INSIDDE is taking a step further in this direction by using terahertz, a frequency band lying between microwave and infrared in the electromagnetic spectrum.

Until graphene, considered to be one of the materials of the future, came along it was difficult to generate terahertz frequencies to acquire such detail. Graphene in this application acts as a frequency multiplier, allowing scientists to reveal previously hidden features such as brushstroke textures, pigments and defects, without harming the work.
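Frequency multiplication is a generic non-linear effect: drive a strongly non-linear element at frequency f and its output contains harmonics at multiples of f (a symmetric non-linearity such as graphene's favours odd multiples like 3f). Here's a toy numerical illustration of that principle, not a model of the INSIDDE device,

```python
import numpy as np

# Toy illustration of frequency multiplication: a symmetric (odd)
# non-linearity driven at f0 produces output at odd harmonics 3*f0, 5*f0...
# Generic sketch only; not a model of graphene or the INSIDDE scanner.

fs = 1000.0                       # samples per unit time
t = np.arange(0, 10, 1 / fs)      # time axis
f0 = 10.0                         # drive frequency (arbitrary units)
drive = np.sin(2 * np.pi * f0 * t)

response = np.tanh(3 * drive)     # saturating, symmetric non-linearity

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for k in (1, 2, 3, 5):            # note: the even harmonic (2*f0) stays ~0
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}*f0: relative amplitude {spectrum[idx] / spectrum.max():.3f}")
```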

Although X-ray and infrared reflectography are used elsewhere to carry out this type of study, they heat the object and cannot reach the intermediate layers between the gesso and the varnish in paintings, or other characteristic elements in ceramics. INSIDDE’s device, using terahertz frequency, works in these intermediate layers and does not heat the object.

In conjunction with a commercial scanner mapping the art’s upper layers, it can generate full 3D data from the object in a completely non-intrusive way and process this data to extract and interpret features invisible to the naked eye, in a way that has never been done before.
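Terahertz devices of this general type locate buried interfaces by time-of-flight: part of the pulse reflects at each layer boundary, and the echo delay Δt gives the depth as d = c·Δt/(2n), with n the layer's refractive index. The formula is standard terahertz reflectometry; the numbers below are illustrative assumptions, not INSIDDE measurements,

```python
# Time-of-flight depth estimate in terahertz reflectometry:
#   depth = c * delay / (2 * n)   (factor 2: the pulse goes down and back)
# Illustrative values only; not measurements from the INSIDDE project.

C = 299_792_458.0  # speed of light in vacuum (m/s)

def layer_depth_um(delay_ps, refractive_index):
    """Depth (micrometres) of an interface from its echo delay (picoseconds)."""
    return C * (delay_ps * 1e-12) / (2 * refractive_index) * 1e6

# Example: a 0.8 ps echo in a paint layer with an assumed n of 1.5.
print(f"~{layer_depth_um(0.8, 1.5):.0f} micrometres below the surface")
```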

INSIDDE is developing this technology to benefit the general public, too. The 2D and 3D digital models it is producing will be uploaded to the Europeana network and the project aims to make the results available through a smartphone and tablet app to be exploited by local and regional museums. The app is currently being trialled at one of the partners, the Asturias Fine Art Museum in Oviedo. It shows the different layers of the painting the visitor is looking at and provides additional information and audio.

The press release notes that the technology offers some new possibilities,

Although the scanner is still in its trial and calibration phase, the project participants have already unveiled some promising results. Marta Flórez, of the Asturias Fine Art Museum, explained: ‘Using the prototype, we have been able to distinguish clearly between different pigments, which in some cases will avoid having to puncture the painting in order to find out what materials the artist used.’

The prototype is also being validated with some recently unearthed 3rd-century pottery from the Stara Zagora regional history museum in Bulgaria. When the project ends in December 2015, one of the options the consortium is assessing is putting this cost-effective solution at the service of smaller local and regional museums without art restoration departments so that they too, like the bigger museums, can make important discoveries about their collections.

You can find out more about INSIDDE here.

Large(!)-scale graphene composite fabrication at the US Oak Ridge National Laboratory (ORNL)

When you’re talking about large-scale production of nanomaterials, it would be more accurate to say ‘relatively large when compared to the nanoscale’. A May 15, 2015 news item on ScienceDaily trumpets the news,

One of the barriers to using graphene at a commercial scale could be overcome using a method demonstrated by researchers at the Department of Energy’s Oak Ridge National Laboratory [ORNL].

Graphene, a material stronger and stiffer than carbon fiber, has enormous commercial potential but has been impractical to employ on a large scale, with researchers limited to using small flakes of the material.

Now, using chemical vapor deposition, a team led by ORNL’s Ivan Vlassiouk has fabricated polymer composites containing 2-inch-by-2-inch sheets of the one-atom thick hexagonally arranged carbon atoms. [emphasis mine]

Once you understand where these scientists are coming from in terms of the material size, it becomes easier to appreciate the accomplishment and its potential. From a May 14, 2015 ORNL news release (also on EurekAlert), which originated the news item,

The findings, reported in the journal Applied Materials & Interfaces, could help usher in a new era in flexible electronics and change the way this reinforcing material is viewed and ultimately used.

“Before our work, superb mechanical properties of graphene were shown at a micro scale [one millionth of a metre],” said Vlassiouk, a member of ORNL’s Energy and Transportation Science Division. “We have extended this to a larger scale, which considerably extends the potential applications and market for graphene.”

While most approaches for polymer nanocomposite construction employ tiny flakes of graphene or other carbon nanomaterials that are difficult to disperse in the polymer, Vlassiouk’s team used larger sheets of graphene. This eliminates the flake dispersion and agglomeration problems and allows the material to better conduct electricity with less actual graphene in the polymer.

“In our case, we were able to use chemical vapor deposition to make a nanocomposite laminate that is electrically conductive with graphene loading that is 50 times less compared to current state-of-the-art samples,” Vlassiouk said. This is a key to making the material competitive on the market.
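To see roughly why a continuous sheet needs so little graphene: dispersed flakes only conduct once they exceed a percolation threshold, which for ideal thin discs scales as flake thickness over diameter (real flake composites need considerably more because of agglomeration), whereas a continuous CVD sheet conducts at whatever tiny volume fraction the laminate gives it. A back-of-envelope comparison with assumed geometries, not values from the paper,

```python
# Order-of-magnitude sketch of graphene loading in a polymer composite.
# The ideal-disc percolation threshold scales as thickness/diameter; real
# flake composites sit well above this because flakes agglomerate.
# All numbers are assumptions for illustration, not values from the
# ORNL paper.

flake_thickness_nm = 1.0      # few-layer flake (assumed)
flake_diameter_nm = 1000.0    # ~1 micron across (assumed)
phi_flakes = flake_thickness_nm / flake_diameter_nm  # rough ideal lower bound

sheet_thickness_nm = 0.34     # one atomic layer of graphene
polymer_layer_nm = 50_000.0   # assumed 50-micron polymer layer per sheet
phi_sheet = sheet_thickness_nm / polymer_layer_nm

print(f"ideal flake percolation threshold:       ~{phi_flakes:.2%}")
print(f"one continuous sheet per laminate layer: ~{phi_sheet:.4%}")
```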

If Vlassiouk and his team can reduce the cost and demonstrate scalability, researchers envision graphene being used in aerospace (structural monitoring, flame-retardants, anti-icing, conductive), the automotive sector (catalysts, wear-resistant coatings), structural applications (self-cleaning coatings, temperature control materials), electronics (displays, printed electronics, thermal management), energy (photovoltaics, filtration, energy storage) and manufacturing (catalysts, barrier coatings, filtration).

Here’s a link to and a citation for the paper,

Strong and Electrically Conductive Graphene-Based Composite Fibers and Laminates by Ivan Vlassiouk, Georgios Polizos, Ryan Cooper, Ilia Ivanov, Jong Kahk Keum, Felix Paulauskas, Panos Datskos, and Sergei Smirnov. ACS Appl. Mater. Interfaces, Article ASAP DOI: 10.1021/acsami.5b01367 Publication Date (Web): April 28, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Cosmetics giant, L’Oréal, to 3D print skin

L’Oréal, according to a May 19, 2015 BBC (British Broadcasting Corporation) online news item, has partnered with Organovo, a 3D bioprinting startup, to begin producing skin,

French cosmetics firm L’Oreal is teaming up with bio-engineering start-up Organovo to 3D-print human skin.

It said the printed skin would be used in product tests.

Organovo has already made headlines with claims that it can 3D-print a human liver but this is its first tie-up with the cosmetics industry.

Experts said the science might be legitimate but questioned why a beauty firm would want to print skin. [emphasis mine]

L’Oreal currently grows skin samples from tissues donated by plastic surgery patients. It produces more than 100,000 skin samples, each 0.5 sq cm, per year and grows nine varieties across all ages and ethnicities.

Its statement explaining the advantage of printing skin offered little detail: “Our partnership will not only bring about new advanced in vitro methods for evaluating product safety and performance, but the potential for where this new field of technology and research can take us is boundless.”

The beauty and cosmetics industry has a major interest in technology, especially anything to do with the skin. I’m curious as to what kind of an expert wouldn’t realize that cosmetics companies test products on skin and might like to have a ready supply. Still, I have to admit to surprise when I first started researching nanotechnology in 2006: L’Oréal was at one point the sixth largest nanotechnology patent holder in the US (see my Nanotech Mysteries Wiki page: Marketers put the buy in nano [scroll down to the Penetration subhead]). In 2008, L’Oréal company representatives were set for a discussion on their nanotechnology efforts and the precautionary principle, to be hosted by the Wilson Center’s Project on Emerging Nanotechnologies (PEN). The company cancelled at a rather interesting time, as I noted in my June 19, 2008 posting (scroll down about 40% of the way to the mention of Dr. Andrew Maynard).

Back to 3D printing technology and cosmetics giants, a May 5, 2015 Organovo/L’Oréal press release provides more detail about the deal,

L’Oreal USA, the largest subsidiary of the world’s leading beauty company, has announced a partnership with 3-D bioprinting company Organovo Holdings, Inc. (NYSE MKT: ONVO) (“Organovo”).  Developed between L’Oreal’s U.S.-based global Technology Incubator and Organovo, the collaboration will leverage Organovo’s proprietary NovoGen Bioprinting Platform and L’Oreal’s expertise in skin engineering to develop 3-D printed skin tissue for product evaluation and other areas of advanced research.

This partnership marks the first-ever application of Organovo’s groundbreaking technology within the beauty industry.

“We developed our technology incubator to uncover disruptive innovations across industries that have the potential to transform the beauty business,” said Guive Balooch, Global Vice President of L’Oreal’s Technology Incubator.  “Organovo has broken new ground with 3-D bioprinting, an area that complements L’Oreal’s pioneering work in the research and application of reconstructed skin for the past 30 years. Our partnership will not only bring about new advanced in vitro methods for evaluating product safety and performance, but the potential for where this new field of technology and research can take us is boundless.”

Organovo’s 3D bioprinting enables the reproducible, automated creation of living human tissues that mimic the form and function of native tissues in the body.

“We are excited to be partnering with L’Oreal, whose leadership in the beauty industry is rooted in scientific innovation and a deep commitment to research and development,” said Keith Murphy, Chairman and Chief Executive Officer at Organovo. “This partnership is a great next step to expand the applications of Organovo’s 3-D bioprinting technology and to create value for both L’Oreal and Organovo by building new breakthroughs in skin modeling.”

I don’t have much information about Organovo here, certainly nothing about the supposed liver (how did I miss that?), but there is a Dec. 26, 2012 posting about its deal with software giant Autodesk.

Canadian scientists in a national protest on May 19, 2015 and some thoughts on a more nuanced discussion about ‘science muzzles’

For anyone unfamiliar with Canada’s science muzzle, government scientists are not allowed to speak directly to the media; all requests must be handled by the communications department of the relevant ministry. For one of the odder consequences of that policy, there’s my Sept. 16, 2010 posting about a scientist who wasn’t allowed to talk to the media about his research on a 13,000-year-old flood that took place in the Canadian North. Adding insult to injury, his international colleagues were giving out all kinds of interviews.

Here’s a more recent incident (h/t Speaking Up For Canadian Science, May 20, 2015) recounted in a May 19, 2015 news item by Nicole Mortillaro for CTV (Canadian television) news online,

“Unlike Canadian scientists, I don’t have to ask permission to talk to you.”

That was one of the first things National Oceanic and Atmospheric Administration (NOAA) scientist Pieter Tans said when I called to reach him for comment about rising carbon dioxide levels reaching historic levels.

The topic itself was controversial: climate change is a hot-button topic for many. But getting in touch with NOAA was easy. In total, there were five email exchanges, all providing information about the topic and the arrangement of the interview.

Compare that to trying to get a response from a Canadian federal department.

While I’ve had many frustrating dealings with various federal agencies, my most recent experience came as I was working on a story about ways Canadians could protect themselves as severe weather season approached. I wanted to mention the new federal national emergency warning system, Alert Ready. I reached out to Environment Canada for more information.

You’d think the federal government would want to let Canadians know about a new national emergency warning system and they do, in their fashion. For the whole story, there’s Mortillaro’s piece (which has an embedded video and more) but for the fast version, Mortillaro contacted the communications people a day before her Friday deadline asking for a spokesperson. The communications team missed the deadline although they did find a spokesperson who would be available on the Monday. Strangely or not, he proved to be hesitant to talk about the new system.

Getting back to the science muzzle protest of 2015 and the muzzle itself, there’s a May 17, 2015 article by Ivan Semeniuk for the Globe and Mail providing more detail about the muzzle and the then upcoming protest organized by the Professional Institute of the Public Service of Canada (PIPSC), currently in contract negotiations with the federal government. Echoing what I said in my Dec. 4, 2014 posting about the contract negotiations, the union is bargaining for the right to present science information, which is unprecedented in Canada (and, I suspect, internationally). Back to Semeniuk’s article,

With contract negotiations set to resume this week, there will also be a series of demonstrations for the Ottawa area on Tuesday to focus attention on the issue.

If successful, the effort could mark a precedent-setting turn in what the government’s critics portray as a struggle between intellectual independence and political prerogative.

“Our science members said to us: What’s more important than anything else is our ability to do our jobs as professionals,” said Peter Bleyer, an adviser with the Professional Institute of the Public Service of Canada, whose membership includes some 15,000 scientists and engineers.

Government scientists have always been vulnerable to those who hold the reins of power, but tensions have grown under the Conservatives. After the Tories enacted a wave of research program and facility cancellations in 2012, stories began to emerge of researchers who were blocked from responding to media requests about their work.

The onerous communications protocols apply even for stories about scientific advancements that are likely to reflect positively on the federal government. Last month [April 2015], after it was announced that Canada would become a partner in the Thirty Meter Telescope, The Globe and Mail had to appeal to the Prime Minister’s Office to facilitate an interview with the National Research Council astronomer leading the development of the telescope’s sophisticated adaptive-optics system.

Federal Information Commissioner Suzanne Legault is currently conducting an investigation into complaints that scientists have been muzzled by the Conservative government.

As Semeniuk notes at the end of his article in a quote from the US-based Union of Concerned Scientists’ representative, the problem is not new and not unique to Canada. For a ‘not unique’ example, the UK government seems to be interested in taking a similar approach to ‘muzzling’ scientists, according to an April 1, 2015 post by Glyn Moody for Techdirt (Note: Links have been removed),

Techdirt has been following for a while Canada’s moves to stop scientists from speaking out about areas where the facts of the situation don’t sit well with the Canadian government’s dogma-based policies. Sadly, it looks like the UK is taking the same route. It concerns a new code for the country’s civil servants, which will also apply to thousands of publicly-funded scientists. As the Guardian reports:

Under the new code, scientists and engineers employed at government expense must get ministerial approval before they can talk to the media about any of their research, whether it involves GM crops, flu vaccines, the impact of pesticides on bees, or the famously obscure Higgs boson.

The fear — quite naturally — is that ministers could take days before replying to requests, by which time news outlets will probably have lost interest. As a result of this change, science organizations have sent a letter to the UK government, expressing their “deep concern” about the code. …

As for ‘not new’, there’s always a tension between employer and employee about what constitutes free speech. Does an employee get fired for making gross, sexist comments in their free time at a soccer game? The answer in Ontario, Canada is yes, according to a May 14, 2015 article by Samantha Leal for Marie Claire magazine. Presumably there will be a lawsuit and we will find out if the firing is legally acceptable. Or, more cynically, this may prove to be a public relations ploy designed to spin the story in the employer’s favour while the employee takes some time off and returns unobtrusively at a later date.

I have a couple of final comments about free speech and employers’ and employees’ rights and responsibilities. First, up until the muzzles were applied, the Canadian government and its scientists seemed to have had a kind of unspoken agreement as to what constituted fair discussion of scientific research in the media. I vaguely recall a few kerfuffles over the years but nothing major. (If someone can recall an incident where a scientist working for the Canadian government seriously embarrassed it, please let me know in the comments.) So, this relatively new enthusiasm for choking off media coverage of Canadian science research seems misplaced at best. Unfortunately, it has raised the standard tensions about what employees can and can’t say to new heights. Attempting to entrench the right to share science research in a bureaucratic process (a union contract) seems weirdly similar to the Harper government’s approach; like it, the union’s proposition adds a bureaucratic layer.

As for my second thought, I’m wondering how many people who cheered that soccer fan’s firing for making comments (albeit sexist comments) in his free time are protesting for free speech for Canadian government scientists.

It comes down to matters of principle. Which ones do we want to follow and when do we apply them? Do principles apply only for those people and ideas we find acceptable?

I just wish there was a little more nuance brought to the ‘science muzzle in Canada’ discussion so we might veer away from heightened adversarial relationships between the government and its scientists.

McGill University researchers put the squeeze on the Tomonaga-Luttinger theory in quantum mechanics

McGill University (Montréal, Québec, Canada) researchers testing the Tomonaga-Luttinger theory had international help according to a May 15, 2015 news item on ScienceDaily,

We all know intuitively that normal liquids flow more quickly as the channel containing them tightens. Think of a river flowing through narrow rapids.

But what if a pipe were so amazingly tiny that only a few atoms of superfluid helium could squeeze through its opening at once? According to a longstanding quantum-mechanics model, the superfluid helium would behave differently from a normal liquid: far from speeding up, it would actually slow down.

For more than 70 years, scientists have been studying the flow of helium through ever smaller pipes. But only recently has nanotechnology made it possible to reach the scale required to test the theoretical model, known as the Tomonaga-Luttinger theory (after the scientists who developed it).

Now, a team of McGill University researchers, with collaborators at the University of Vermont and at Leipzig University in Germany, has succeeded in conducting experiments with the smallest channel yet – less than 30 atoms wide. In results published online today in Science Advances, the researchers report that the flow of superfluid helium through this miniature faucet does, indeed, appear to slow down.
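The everyday intuition in that first quoted paragraph is just mass conservation: at a fixed volumetric flow rate Q, an incompressible liquid's mean speed through a round channel of radius R is v = Q/(πR²), so halving the radius quadruples the speed. Here's a quick sketch of the classical expectation that the superfluid result defies,

```python
import math

# Classical expectation from the continuity equation: at fixed volumetric
# flow rate Q, mean speed v = Q / (pi R^2), so narrower channels mean
# faster flow. Illustrative numbers.

def mean_speed(Q_m3_per_s, radius_m):
    """Mean speed of an incompressible liquid through a round channel."""
    return Q_m3_per_s / (math.pi * radius_m**2)

Q = 1e-6  # 1 millilitre per second
for R in (1e-3, 0.5e-3, 0.25e-3):  # channel radius in metres
    print(f"R = {R * 1e3:.2f} mm -> v = {mean_speed(Q, R):.2f} m/s")
```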

A May 15, 2015 McGill University news release (also on EurekAlert), which originated the news item, expands on the theme and notes this is one step on the road to proving the theory,

“Our results suggest that a quantum faucet does show a fundamentally different behaviour,” says McGill physics professor Guillaume Gervais, who led the project. “We don’t have the smoking gun yet. But we think this is a great step toward proving experimentally the Tomonaga-Luttinger theory in a real liquid.”

The zone where physics changes

Insights from the research could someday contribute to novel technologies, such as nano-sensors with applications in GPS systems. But for now, Gervais says, the results are significant simply because “we’re pushing the limit of understanding things on the nanoscale. We’re approaching the grey zone where all physics changes.”

Prof. Adrian Del Maestro from the University of Vermont has been employing high-performance computer simulations to understand just how small the faucet has to be before this new physics emerges. “The ability to study a quantum liquid at such diminutive length scales in the laboratory is extremely exciting as it allows us to extend our fundamental understanding of how atoms cooperate to form the superfluid state of matter,” he says. “The superfluid slowdown we observe signals that this cooperation is starting to break down as the width of the pipe narrows to the nanoscale” and edges closer to the exotic one-dimensional limit envisioned in the Tomonaga-Luttinger theory.

Building what is probably the world’s smallest faucet has been no simple task. Gervais hatched the idea during a five-minute conversation over coffee with a world-leading theoretical physicist. That was eight years ago. But getting the nano-plumbing to work took “at least 100 trials – maybe 200,” says Gervais, who is a fellow of the Canadian Institute for Advanced Research.

A beam of electrons as drill bit

Using a beam of electrons as a kind of drill bit, the team made holes as small as seven nanometers wide in a piece of silicon nitride, a tough material used in applications such as automotive diesel engines and high-performance ball bearings. By cooling the apparatus to very low temperatures, placing superfluid helium on one side of the pore and applying a vacuum to the other, the researchers were able to observe the flow of the superfluid through the channel. Varying the size of the channel, they found that the maximum speed of the flow slowed as the radius of the pore decreased.

The experiments take advantage of a unique characteristic of superfluids. Unlike ordinary liquids – water or maple syrup, for example – superfluids can flow without any viscosity. As a result, they can course through extremely narrow channels; and once in motion, they don’t need any pressure to keep going. Helium is the only element in nature known to become a superfluid; it does so when cooled to an extremely low temperature.
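For contrast with the observed slowdown, the classical-vortex picture actually predicts the opposite trend: Feynman's textbook estimate of the critical velocity in a channel of radius R is v_c ≈ (ħ/(m·R)) ln(R/a), which grows as the channel narrows. A sketch of that conventional expectation (the vortex core size a is an assumed ~0.1 nm; none of these numbers come from the paper),

```python
import math

# Feynman's classical-vortex estimate of the superfluid critical velocity
# in a channel of radius R:  v_c ~ (hbar / (m_He * R)) * ln(R / a).
# It predicts FASTER critical flow in narrower channels; the McGill
# experiment sees the opposite trend at the nanoscale, which is the hint
# of Tomonaga-Luttinger physics. Illustrative values; a is assumed.

hbar = 1.054571817e-34  # reduced Planck constant (J*s)
m_he = 6.6464731e-27    # mass of a helium-4 atom (kg)
a = 1e-10               # vortex core size (m), assumed ~0.1 nm

def feynman_vc(radius_m):
    """Classical-vortex critical velocity (m/s) for a channel of radius R."""
    return hbar / (m_he * radius_m) * math.log(radius_m / a)

for R_nm in (100.0, 10.0, 3.5):
    R = R_nm * 1e-9
    print(f"R = {R_nm:5.1f} nm -> v_c ~ {feynman_vc(R):5.1f} m/s")
```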

An inadvertent breakthrough

For years, however, the researchers were frustrated by a technical glitch: the tiny pore in the silicon nitride material kept getting clogged by contaminants. Then one day, while Gervais was away at a conference abroad, a new student in his lab inadvertently deviated from the team’s operating procedure and left a valve open in the apparatus. “It turned out that this open valve kept the hole open,” Gervais says. “It was the key to getting the experiment to work. Scientific breakthroughs don’t always happen by design!”

Prof. Bernd Rosenow, a quantum physicist at Leipzig University’s Institute for Theoretical Physics, also contributed to the study.

Here’s a link to and a citation for the paper,

Critical flow and dissipation in a quasi–one-dimensional superfluid by Pierre-François Duc, Michel Savard, Matei Petrescu, Bernd Rosenow, Adrian Del Maestro, Guillaume Gervais. Science Advances 15 May 2015: Vol. 1 no. 4 e1400222 DOI: 10.1126/sciadv.1400222

This is an open access paper.