
Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release explains how it was produced. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
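(Stepping outside the news release for a moment: if you want a concrete sense of what "comparing actual outputs to expected ones and correcting the predictive error" looks like in practice, here is a minimal training-loop sketch I've put together for illustration. It is a toy two-layer network written in plain NumPy on made-up data, not code from Deltorn's paper or from any of the art-generating systems mentioned above.)

```python
import numpy as np

# Toy data: four inputs, one output (the XOR pattern), purely illustrative
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # first layer: raw inputs -> 8 hidden features
W2 = rng.normal(scale=0.5, size=(8, 1))   # second, "deeper" layer: features -> prediction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer produces a more refined representation of the input
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare actual outputs to expected ones (the "predictive error")
    error = output - y

    # Backward pass: correct the error by nudging the weights (repetition + optimization)
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(np.round(output, 2))  # predictions approach [0, 1, 1, 0] as training proceeds
```

Scaled up by many orders of magnitude (more layers, far more data, and architectures suited to images or sound), this same compare-and-correct loop is what produces the "deep" abstraction the release describes.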

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claims to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is, and will be, part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics will be held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others listed on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 BioMed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Environmentally sustainable electromobility

Researchers at the Norwegian University of Science and Technology pose an interesting question in a Dec. 8, 2016 news item on Nanowerk,

Does it really help to drive an electric car if the electricity you use to charge the batteries comes from a coal mine in Germany, or if the batteries were manufactured in China using coal?

Researchers at the Norwegian University of Science and Technology’s Industrial Ecology Programme have looked at all of the environmental costs of electric vehicles to determine the cradle-to-grave environmental footprint of building and operating these vehicles.

Increasingly, researchers are examining not just immediate environmental impacts but the impact a product has throughout its life cycle as this Dec. 8, 2016 Norwegian University of Science and Technology press release on EurekAlert notes,

In the 6 December [2016] issue of Nature Nanotechnology, the researchers report on a model that can help guide developers as they consider new nanomaterials for batteries or fuel cells. The goal is to create the most environmentally sustainable vehicle fleet possible, which is no small challenge given that there are already an estimated 1 billion cars and light trucks on the world’s roads, a number that is expected to double by 2035.

With this in mind, the researchers created an environmental life-cycle screening framework that looked at the environmental and other impacts of extraction, refining, synthesis, performance, durability and recyclablility of materials.

This allowed the researchers to evaluate the most promising nanomaterials for lithium-ion batteries (LIB) and proton exchange membrane hydrogen fuel cells (PEMFC) as power sources for electric vehicles. “Our analysis of the current situation clearly outlines the challenge,” the researchers wrote. “The materials with the best potential environmental profiles during the material extraction and production phase…. often present environmental disadvantages during their use phase… and vice versa.”

The hope is that by identifying all the environmental costs of different materials used to build electric cars, designers and engineers can “make the right design trade-offs that optimize LIB and PEMFC nanomaterials for EV usage towards mitigating climate change,” the authors wrote.
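(An aside from me: to make the idea of a life-cycle "screening" a little more concrete, here is a toy sketch of the kind of tally such a framework implies, where impacts from each life-cycle phase are summed per functional unit and compared across candidate materials. The material names, numbers and equal weighting below are invented for illustration; the actual framework in the Nature Nanotechnology paper uses far more detailed inventories and impact categories.)

```python
# Hypothetical screening: compare two imaginary battery materials across the
# production and use phases. All names and figures are made up for illustration.
candidates = {
    "nanomaterial_A": {"production_kgCO2e": 95.0, "use_kgCO2e": 40.0},
    "nanomaterial_B": {"production_kgCO2e": 55.0, "use_kgCO2e": 75.0},
}

def life_cycle_score(impacts, weights=None):
    """Aggregate impact per functional unit (say, 1 kWh of storage capacity)."""
    if weights is None:
        weights = {phase: 1.0 for phase in impacts}  # equal weighting, purely illustrative
    return sum(weights[phase] * value for phase, value in impacts.items())

for name, impacts in candidates.items():
    print(name, "cradle-to-grave score:", life_cycle_score(impacts))
# Material A looks better in the use phase but worse in production, and B the
# reverse -- exactly the kind of trade-off the researchers describe.
```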

They encouraged material scientists and those who conduct life-cycle assessments to work together so that electric cars can be a key contributor to mitigating the effects of transportation on climate change.

Here’s a link to and a citation for the paper,

Nanotechnology for environmentally sustainable electromobility by Linda Ager-Wick Ellingsen, Christine Roxanne Hung, Guillaume Majeau-Bettez, Bhawna Singh, Zhongwei Chen, M. Stanley Whittingham, & Anders Hammer Strømman. Nature Nanotechnology 11, 1039–1051 (2016)  doi:10.1038/nnano.2016.237 Published online 06 December 2016 Corrected online 14 December 2016

This paper is behind a paywall.

Natural nanoparticles and perfluorinated compounds in soil

The claim in a Sept. 9, 2015 news item on Nanowerk is that ‘natural’ nanoparticles are being used to remove perfluorinated compounds (PFC) from soil,

Perfluorinated compounds (PFCs) are a new type of pollutant found in contaminated soils from industrial sites, airports and other sites worldwide.

In Norway, the Environment Agency has published a plan to eliminate PFOS [perfluorooctanesulfonic acid or perfluorooctane sulfonate] from the environment by 2020. In other countries, such as China and the United States, the levels are far higher, and several studies show accumulation of PFOS in fish and animals; however, no concrete measures have been taken.

The Norwegian company, Fjordforsk AS, which specializes in nanosciences and environmental methods, has developed a method to remove PFOS from soil by binding them to natural minerals. This method can be used to extract PFOS from contaminated soil and prevent leakage of PFOS to the groundwater.

Electron microscopy images show that the minerals have the ability to bind PFOS on the surface of the natural nanoparticles. [emphasis mine] The proprietary method does not contaminate the treated grounds with chemicals or other parts from remediation process and uses only natural components.

Electron microscopy images and more detail can be found in the Nanowerk news item.

I can’t find the press release that originated the news item, but there is a little additional information about Fjordforsk’s remediation efforts on the company’s “Purification of perfluorinated compounds from soil samples” project page,

Project duration: 2014 –

Project leader: Manzetti S.

Collaborators: Prof Lutz Ahrens. Swedish Agricultural University. Prof David van der Spoel, Uppsala University.

Project description:

Perfluorinated compounds (PFCs) are emerging pollutants used in flame retardants on a large scale at airports and other sites of heavy industrial activity. Perfluorinated compounds are toxic and represent an ultra-persistent class of chemicals which can accumulate in animals and humans and have been found to remain in the body for over 5 years after uptake. Perfluorinated compounds can also affect the nervous system and have recently been associated with high-priority pollutants to be discontinued and removed from the environment. Using non-toxic methods, this project develops an approach to sediment perfluorinated compounds from contaminated soil samples using nanoparticles, in order to remove the ecotoxic and ground-water contaminating potential of PFCs from afflicted sites and environments.

The only mineral that I know is used for soil remediation is nano zero-valent iron (nZVI). A very fast search for more information yielded a 2010 EMPA [Swiss Federal Laboratories for Materials Science and Technology] report titled “Nano zero valent iron – THE solution for water and soil remediation? ” (32 pp. pdf) published by ObservatoryNANO.

As for the claim that the company is using ‘natural’ nanoparticles for their remediation efforts, it’s not clear what they mean by that. I suspect they’re using the term ‘natural’ to mean that engineered nanoparticles are being derived from a naturally occurring material, e.g. iron.

Risk assessments not the only path to nanotechnology regulation

Nanowerk has republished an essay about nanotechnology regulation from Australia’s The Conversation in an Aug. 25, 2015 news item (Note: A link has been removed),

When it comes to nanotechnology, Australians have shown strong support for regulation and safety testing.

One common way of deciding whether and how nanomaterials should be regulated is to conduct a risk assessment. This involves calculating the risk a substance or activity poses based on the associated hazards or dangers and the level of exposure to people or the environment.

However, our recent review (“Risk Analysis of Nanomaterials: Exposing Nanotechnology’s Naked Emperor”) found some serious shortcomings of the risk assessment process for determining the safety of nanomaterials.

We have argued that these shortcomings are so significant that risk assessment is effectively a naked emperor [reference to a children’s story “The Emperor’s New Clothes“].

The original Aug. 24, 2015 article, written by Fern Wickson (Scientist/Program Coordinator at GenØk – Centre for Biosafety in Norway) and Georgia Miller (PhD candidate at UNSW [University of New South Wales], Australia), points out an oft-ignored issue with regard to nanotechnology regulation,

Risk assessment has been the dominant decision-aiding tool used by regulators of new technologies for decades, despite it excluding key questions that the community cares about. [emphasis mine] For example: do we need this technology; what are the alternatives; how will it affect social relations, and; who should be involved in decision making?

Wickson and Miller also note more frequently discussed issues,

A fundamental problem is a lack of nano-specific regulation. Most sector-based regulation does not include a “trigger” for nanomaterials to face specific risk assessment. Where a substance has been approved for use in its macro form, it requires no new assessment.

Even if such a trigger were present, there is also currently no cross-sectoral or international agreement on the definition of what constitutes a nanomaterial.

Another barrier is the lack of measurement capability and validated methods for safety testing. We still do not have the means to conduct routine identification of nanomaterials in the complex “matrix” of finished products or the environment.

This makes supply chain tracking and safety testing under real-world conditions very difficult. Despite ongoing investment in safety research, the lack of validated test methods and different methods yielding diverse results allows scientific uncertainty to persist.

With regard to the first problem, the assumption that if a material is safe at the macroscale, then the same is true at the nanoscale informs regulation in Canada and, as far as I’m aware, in every other jurisdiction that has any type of nanomaterial regulation. I’ve had mixed feelings about this. On the one hand, we haven’t seen any serious problems associated with the use of nanomaterials, but on the other hand, such problems can be slow to emerge.

The second issue mentioned, the lack of a consistent international definition, seems to be a relatively common problem in a lot of areas. As far as I’m aware, there aren’t that many international agreements on safety measures; nuclear weapons and endangered animals and plants (CITES) are two of the few that come to mind.

The lack of protocols for safety testing of nanomaterials mentioned in the last paragraph of the excerpt is of rising concern. For example, there’s my July 7, 2015 posting featuring two efforts: Nanotechnology research protocols for Environment, Health and Safety Studies in US and a nanomedicine characterization laboratory in the European Union. Despite this and other efforts, I do think more can and should be done to standardize tests and protocols (without killing new types of research and results which don’t fit the models).

The authors do seem to be presenting a circular argument with this (from their Aug. 24, 2015 article; Note: A link has been removed),

Indeed, scientific uncertainty about nanomaterials’ risk profiles is a key barrier to their reliable assessment. A review funded by the European Commission concluded that:

[…] there is still insufficient data available to conduct the in depth risk assessments required to inform the regulatory decision making process on the safety of NMs [nanomaterials].

Reliable assessment of any chemical or drug is a major challenge. We do have some good risk profiles, but how many times have pharmaceutical companies developed a drug that passed successfully through human clinical trials only to present a serious risk when released to the general population? Assessing risk is a very complex problem, even with risk profiles and extensive testing.

Unmentioned throughout the article are naturally occurring nanoparticles (nanomaterials) and those created inadvertently through some manufacturing or other process. In fact, we have been ingesting nanomaterials throughout time. That said, I do agree we need to carefully consider the impact that engineered nanomaterials could have on us and the environment as ever more are being added.

To that end, the authors make some suggestions (Note: Links have been removed),

There are well-developed alternate decision-aiding tools available. One is multicriteria mapping, which seeks to evaluate various perspectives on an issue. Another is problem formulation and options assessment, which expands science-based risk assessment to engage a broader range of individuals and perspectives.

There is also pedigree assessment, which explores the framing and choices taking place at each step of an assessment process so as to better understand the ambiguity of scientific inputs into political processes.

Another, though less well developed, approach popular in Europe involves a shift from risk to innovation governance, with emphasis on developing “responsible research and innovation”.

I have some hesitation about recommending this read due to Georgia Miller’s involvement and the fact that I don’t have the time to check all the references. Miller was a spokesperson for Friends of the Earth (FoE) Australia, a group which led a substantive campaign against ‘nanosunscreens’. Here’s a July 20, 2010 posting where I featured some cherrypicking/misrepresentation of data by FoE in the persons of Georgia Miller and Ian Illuminato.

My Feb. 9, 2012 posting highlights the unintended consequences (avoidance of all sunscreens by some participants in a survey) of the FoE’s campaign in Australia (Note [1]: The percentage of people likely to avoid all sunscreens due to their concerns with nanoparticles in their sunscreens was originally reported to be 17%; Note [2]: Australia has the highest incidence of skin cancer in the world),

Feb.21.12 correction: According to the information in the Feb. 20, 2012 posting on 2020 Science, the percentage of Australians likely to avoid using sunscreens is 13%,

This has just landed in my email in box from Craig Cormick at the Department of Industry, Innovation, Science, Research and Tertiary Education in Australia, and I thought I would pass it on given the string of posts on nanoparticles in sunscreens on 2020 Science over the past few years:

“An online poll of 1,000 people, conducted in January this year, shows that one in three Australians had heard or read stories about the risks of using sunscreens with nanoparticles in them,” Dr Cormick said.

“Thirteen percent of this group were concerned or confused enough that they would be less likely to use any sunscreen, whether or not it contained nanoparticles, putting themselves at increased risk of developing potentially deadly skin cancers.

“The study also found that while one in five respondents stated they would go out of their way to avoid using sunscreens with nanoparticles in them, over three in five would need to know more information before deciding.”

This article with Fern Wickson (with whom I don’t always agree but who, as far as I know, hasn’t played any games with research) helps somewhat, but it’s going to take more than this before I feel comfortable recommending Ms. Miller’s work for further reading.

Carbon capture with ‘diamonds from the sky’

Before launching into the latest on a new technique for carbon capture, it might be useful to provide some context. Arthur Neslen’s March 23, 2015 opinion piece outlines the issues and notes that one Norwegian Prime Minister resigned when coalition government partners attempted to build gas power plants without carbon capture and storage (CCS) facilities (Note: A link has been removed),

At least 10 European power plants were supposed to begin piping their carbon emissions into underground tombs this year, rather than letting them twirl into the sky. None has done so.

Missed deadlines, squandered opportunities, spiralling costs and green protests have plagued the development of carbon capture and storage (CCS) technology since Statoil proposed the concept more than two decades ago.

But in the face of desperate global warming projections the CCS dream still unites Canadian tar sands rollers with the UN’s Intergovernmental Panel on Climate Change (IPCC), and Shell with some environmentalists.

With 2bn people in the developing world expected to hook up to the world’s dirty energy system by 2050, CCS holds out the tantalising prospect of fossil-led growth that does not fry the planet.


“With CCS in the mix, we can decarbonise in a cost-effective manner and still continue to produce, to some extent, our fossil fuels,” Tim Bertels, Shell’s global CCS portfolio manager, told the Guardian. “You don’t need to divest in fossil fuels, you need to decarbonise them.”

The technology has been gifted “a very significant fraction” of the billions of dollars earmarked by Shell for clean energy research, he added. But the firm is also a vocal supporter of public funding for CCS from carbon markets, as are almost all players in the industry.

Enthusiasm for this plan is not universal (from Neslen’s opinion piece),

Many environmentalists see the idea as a non-starter because it locks high-emitting power plants into future energy systems, and obstructs funding for the cheaper renewables revolution already underway. “CCS is completely irrelevant,” said Jeremy Rifkin, a noted author and climate adviser to several governments. “I don’t even think about it. It’s not going to happen. It’s not commercially available and it won’t be commercially viable.”

I recommend reading Neslen’s piece for anyone who’s not already well versed on the issues. He uses Norway as a case study and sums up the overall CCS political situation this way,

In many ways, the debate over carbon capture and storage is a struggle between two competing visions of the societal transformation needed to avert climate disaster. One vision represents the enlightened self-interest of a contributor to the problem. The other cannot succeed without eliminating its highly entrenched opponent. The battle is keenly fought by technological optimists on both sides. But if Norway’s fractious CCS experience is any indicator, it will be decided on the ground by the grimmest of realities.

On that note of urgency, here’s some research on carbon dioxide (CO2) or, more specifically, carbon capture and utilization technology, from an Aug. 19, 2015 news item on Nanowerk,

Finding a technology to shift carbon dioxide (CO2), the most abundant anthropogenic greenhouse gas, from a climate change problem to a valuable commodity has long been a dream of many scientists and government officials. Now, a team of chemists says they have developed a technology to economically convert atmospheric CO2 directly into highly valued carbon nanofibers for industrial and consumer products.

An Aug. 19, 2015 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, expands on the theme,

The team will present brand-new research on this new CO2 capture and utilization technology at the 250th National Meeting & Exposition of the American Chemical Society (ACS). ACS is the world’s largest scientific society. The national meeting, which takes place here through Thursday, features more than 9,000 presentations on a wide range of science topics.

“We have found a way to use atmospheric CO2 to produce high-yield carbon nanofibers,” says Stuart Licht, Ph.D., who leads a research team at George Washington University. “Such nanofibers are used to make strong carbon composites, such as those used in the Boeing Dreamliner, as well as in high-end sports equipment, wind turbine blades and a host of other products.”

Previously, the researchers had made fertilizer and cement without emitting CO2, which they reported. Now, the team, which includes postdoctoral fellow Jiawen Ren, Ph.D., and graduate student Jessica Stuart, says their research could shift CO2 from a global-warming problem to a feedstock for the manufacture of in-demand carbon nanofibers.

Licht calls his approach “diamonds from the sky.” That refers to carbon being the material that diamonds are made of, and also hints at the high value of the products, such as the carbon nanofibers that can be made from atmospheric carbon and oxygen.

Because of its efficiency, this low-energy process can be run using only a few volts of electricity, sunlight and a whole lot of carbon dioxide. At its root, the system uses electrolytic syntheses to make the nanofibers. CO2 is broken down in a high-temperature electrolytic bath of molten carbonates at 1,380 degrees F (750 degrees C). Atmospheric air is added to an electrolytic cell. Once there, the CO2 dissolves when subjected to the heat and direct current through electrodes of nickel and steel. The carbon nanofibers build up on the steel electrode, where they can be removed, Licht says.

To power the syntheses, heat and electricity are produced through a hybrid and extremely efficient concentrating solar-energy system. The system focuses the sun’s rays on a photovoltaic solar cell to generate electricity and on a second system to generate heat and thermal energy, which raises the temperature of the electrolytic cell.

Licht estimates electrical energy costs of this “solar thermal electrochemical process” to be around $1,000 per ton of carbon nanofiber product, which means the cost of running the system is hundreds of times less than the value of product output.

“We calculate that with a physical area less than 10 percent the size of the Sahara Desert, our process could remove enough CO2 to decrease atmospheric levels to those of the pre-industrial revolution within 10 years,” he says. [emphasis mine]

At this time, the system is experimental, and Licht’s biggest challenge will be to ramp up the process and gain experience to make consistently sized nanofibers. “We are scaling up quickly,” he adds, “and soon should be in range of making tens of grams of nanofibers an hour.”

Licht explains that one advance the group has recently achieved is the ability to synthesize carbon fibers using even less energy than when the process was initially developed. “Carbon nanofiber growth can occur at less than 1 volt at 750 degrees C, which for example is much less than the 3-5 volts used in the 1,000 degree C industrial formation of aluminum,” he says.
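It is worth sanity-checking the “$1,000 per ton” and “less than 1 volt” figures against basic electrochemistry. Reducing CO2 to solid carbon takes four electrons per carbon atom, so Faraday’s law sets the minimum charge, and therefore the minimum electrical energy, per ton of fiber. The back-of-envelope sketch below is mine, assuming 100 percent current efficiency and an illustrative electricity price of US$0.10 per kilowatt-hour; neither assumption comes from Licht’s work.

```python
# Back-of-envelope: electrical energy to reduce one metric ton of CO2-derived carbon
FARADAY = 96485.0          # coulombs per mole of electrons
ELECTRONS_PER_C = 4        # net reaction in molten carbonate: CO2 + 4 e- -> C + 2 O^2-
MOLAR_MASS_C = 12.011      # g/mol

moles_carbon = 1_000_000 / MOLAR_MASS_C            # moles of C in one metric ton
charge = moles_carbon * ELECTRONS_PER_C * FARADAY  # coulombs, at 100% current efficiency

cell_voltage = 1.0                                 # V, per Licht's "less than 1 volt" figure
energy_kwh = charge * cell_voltage / 3.6e6         # joules -> kilowatt-hours

price_per_kwh = 0.10                               # US$, illustrative assumption
print(f"{energy_kwh:,.0f} kWh/ton  ->  ~${energy_kwh * price_per_kwh:,.0f} per ton")
# Roughly 8,900 kWh and ~$900 per ton -- in the same ballpark as the quoted estimate.
```

That lands within shouting distance of the quoted estimate, which at least makes the claim internally consistent; it says nothing about capital costs, heat input or real-world current efficiency.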

A low-energy approach that cleans up the air by converting greenhouse gases into useful materials, and does it quickly, is incredibly exciting. Of course, there are a few questions to be asked. Are the research outcomes reproducible by other teams? Licht notes the team is scaling the technology up, but how soon can we scale up to industrial strength?

Replacing metal with nanocellulose paper

The quest to find uses for nanocellulose materials has taken a step forward with some work coming from the University of Maryland (US). From a July 24, 2015 news item on Nanowerk,

Researchers at the University of Maryland recently discovered that paper made of cellulose fibers is tougher and stronger the smaller the fibers get … . For a long time, engineers have sought a material that is both strong (resistant to non-recoverable deformation) and tough (tolerant of damage).

“Strength and toughness are often exclusive to each other,” said Teng Li, associate professor of mechanical engineering at UMD. “For example, a stronger material tends to be brittle, like cast iron or diamond.”

A July 23, 2015 University of Maryland news release, which originated the news item, provides details about the thinking which buttresses this research along with some details about the research itself,

The UMD team pursued the development of a strong and tough material by exploring the mechanical properties of cellulose, the most abundant renewable bio-resource on Earth. Researchers made papers with several sizes of cellulose fibers – all too small for the eye to see – ranging in size from about 30 micrometers to 10 nanometers. The paper made of 10-nanometer-thick fibers was 40 times tougher and 130 times stronger than regular notebook paper, which is made of cellulose fibers a thousand times larger.

“These findings could lead to a new class of high performance engineering materials that are both strong and tough, a Holy Grail in materials design,” said Li.

High performance yet lightweight cellulose-based materials might one day replace conventional structural materials (i.e. metals) in applications where weight is important. This could lead, for example, to more energy efficient and “green” vehicles. In addition, team members say, transparent cellulose nanopaper may become feasible as a functional substrate in flexible electronics, resulting in paper electronics, printable solar cells and flexible displays that could radically change many aspects of daily life.

Cellulose fibers can easily form many hydrogen bonds. Once broken, the hydrogen bonds can reform on their own—giving the material a ‘self-healing’ quality. The UMD team discovered that the smaller the cellulose fibers, the more hydrogen bonds per unit area. This means paper made of very small fibers can both hold together better and re-form more quickly, which is the key to cellulose nanopaper being both strong and tough.
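(A purely geometric aside from me, not the scaling law derived in the PNAS paper: if you treat a fiber as a long cylinder, its surface-to-volume ratio is 4/d, so shrinking the diameter from roughly 30 micrometers to 10 nanometers multiplies the bondable surface per unit of material by about 3,000, which helps explain why so many more hydrogen bonds are available in the nanopaper.)

```python
def surface_to_volume(diameter_m):
    """For a long cylinder: lateral area (pi*d*L) / volume (pi*d*d*L/4) = 4/d."""
    return 4.0 / diameter_m

coarse = surface_to_volume(30e-6)   # ~30 micrometer fibers (ordinary notebook paper)
nano = surface_to_volume(10e-9)     # ~10 nanometer fibrils (the UMD nanopaper)

print(f"coarse: {coarse:.2e} per m, nano: {nano:.2e} per m, ratio: {nano / coarse:,.0f}x")
# About 3,000 times more surface per unit volume -- many more sites where
# hydrogen bonds between fibrils can form and, once broken, re-form.
```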

“It is helpful to know why cellulose nanopaper is both strong and tough, especially when the underlying reason is also applicable to many other materials,” said Liangbing Hu, assistant professor of materials science at UMD.

To confirm, the researchers tried a similar experiment using carbon nanotubes that were similar in size to the cellulose fibers. The carbon nanotubes had much weaker bonds holding them together, so under tension they did not hold together as well. Paper made of carbon nanotubes is weak, though individually nanotubes are arguably the strongest material ever made.

One possible future direction for the research is the improvement of the mechanical performance of carbon nanotube paper.

“Paper made of a network of carbon nanotubes is much weaker than expected,” said Li. “Indeed, it has been a grand challenge to translate the superb properties of carbon nanotubes at nanoscale to macroscale. Our research findings shed light on a viable approach to addressing this challenge and achieving carbon nanotube paper that is both strong and tough.”

Here’s a link to and a citation for the paper,

Anomalous scaling law of strength and toughness of cellulose nanopaper by Hongli Zhu, Shuze Zhu, Zheng Jia, Sepideh Parvinian, Yuanyuan Li, Oeyvind Vaaland, Liangbing Hu, and Teng Li. PNAS (Proceedings of the National Academy of Sciences) July 21, 2015 vol. 112 no. 29 doi: 10.1073/pnas.1502870112

This paper is behind a paywall.

There is a lot of research on applications for nanocellulose, everywhere it seems, except Canada, which at one time was a leader in the business of producing cellulose nanocrystals (CNC).

Here’s a sampling of some of my most recent posts on nanocellulose,

Nanocellulose as a biosensor (July 28, 2015)

Microscopy, Paper and Fibre Research Institute (Norway), and nanocellulose (July 8, 2015)

Nanocellulose markets report released (June 5, 2015; US market research)

New US platform for nanocellulose and occupational health and safety research (June 1, 2015; Note: As you find new applications, you need to concern yourself with occupational health and safety.)

‘Green’, flexible electronics with nanocellulose materials (May 26, 2015; research from China)

Treating municipal wastewater and dirty industry byproducts with nanocellulose-based filters (Dec. 23, 2014; research from Sweden)

Nanocellulose and an intensity of structural colour (June 16, 2014; research about replacing toxic pigments with structural colour from the UK)

I ask again, where are the Canadians? If anybody has an answer, please let me know.

Microscopy, Paper and Fibre Research Institute (Norway), and nanocellulose

In keeping with a longstanding interest here in nanocellulose (aka cellulose nanomaterials), the Norwegian Paper and Fibre Research Institute’s (PFI) 2015 announcement (the original notice is undated) about new ion milling equipment and a new scanning electron microscope suitable for research into cellulose at the nanoscale caught my eye,

In order to advance the microscopy capabilities of cellulose-based materials and thanks to a grant from the Norwegian Pulp and Paper Research Institute foundation, PFI has invested in a modern ion milling equipment and a new Scanning Electron Microscope (SEM).

Unusually, the entire news release is being stored at Nanowerk as a July 3, 2015 news item (Note: Links have been removed),

“There are several microscopy techniques that can be used for characterizing cellulose materials, but the scanning electron microscope is one of the most preferable ones as the microscope is easy to use, versatile and provides a multi-scale assessment”, explains Gary Chinga-Carrasco, lead scientist at the PFI Biocomposite area.

“However, good microscopy depends to a large extent on an adequate and optimized preparation of the samples”, adds Per Olav Johnsen, senior engineer and microscopy expert at PFI.

“We are always trying to be in front in the development of new characterization methods, facilitating research and giving support to our industrial partners”, says Chinga-Carrasco, who has been active in developing new methods for characterization of paper, biocomposites and nanocellulose and cannot hide his enthusiasm when he talks about PFI’s new equipment. “In the first period after the installation it is important to work with the equipment with several material samples and techniques to really become confident with its use and reveal its potential”.

The team at PFI is now offering new methods for assessing cellulose materials in great detail. They point out that they have various activities and projects where they already see a big potential with the new equipment.

Examples for these efforts are the assessment of porous nanocellulose structures for biomedical applications (for instance in the NanoHeal program) and the assessment of surface modified wood fibres for use in biocomposites (for instance in the FiberComp project).

Also unusual is the lack of detail about the microscope’s and ion milling machine’s technical specifications and capabilities.

The NanoHeal program was last mentioned here in an April 14, 2014 post and first mentioned here in an Aug. 23, 2012 posting.

Final comment: I wonder if Nanowerk is embarking on a new initiative where the company agrees to store news releases for agencies such as PFI and others who would prefer not to archive their own materials. Just a thought.

Nanotechnology research protocols for Environment, Health and Safety Studies in US and a nanomedicine characterization laboratory in the European Union

I have two items relating to nanotechnology and the development of protocols. The first item concerns the launch of a new web portal by the US National Institute of Standards and Technology.

US National Institute of Standards and Technology (NIST)

From a July 1, 2015 news item on Azonano,

As engineered nanomaterials increasingly find their way into commercial products, researchers who study the potential environmental or health impacts of those materials face a growing challenge to accurately measure and characterize them. These challenges affect measurements of basic chemical and physical properties as well as toxicology assessments.

To help nano-EHS (Environment, Health and Safety) researchers navigate the often complex measurement issues, the National Institute of Standards and Technology (NIST) has launched a new website devoted to NIST-developed (or co-developed) and validated laboratory protocols for nano-EHS studies.

A July 1, 2015 NIST news release on EurekAlert, which originated the news item, offers more details about the information available through the web portal,

In common lab parlance, a “protocol” is a specific step-by-step procedure used to carry out a measurement or related activity, including all the chemicals and equipment required. Any peer-reviewed journal article reporting an experimental result has a “methods” section where the authors document their measurement protocol, but those descriptions are necessarily brief and condensed, and may lack validation of any sort. By comparison, on NIST’s new Protocols for Nano-EHS website the protocols are extraordinarily detailed. For ease of citation, they’re published individually–each with its own unique digital object identifier (DOI).

The protocols detail not only what you should do, but why and what could go wrong. The specificity is important, according to program director Debra Kaiser, because of the inherent difficulty of making reliable measurements of such small materials. “Often, if you do something seemingly trivial–use a different size pipette, for example–you get a different result. Our goal is to help people get data they can reproduce, data they can trust.”

A typical caution, for example, notes that if you’re using an instrument that measures the size of nanoparticles in a solution by how they scatter light, it’s important also to measure the transmission spectrum of the particles if they’re colored, because if they happen to absorb light strongly at the same frequency as your instrument, the result may be biased.

“These measurements are difficult because of the small size involved,” explains Kaiser. “Very few new instruments have been developed for this. People are adapting existing instruments and methods for the job, but often those instruments are being operated close to their limits and the methods were developed for chemicals or bulk materials and not for nanomaterials.”

“For example, NIST offers a reference material for measuring the size of gold nanoparticles in solution, and we report six different sizes depending on the instrument you use. We do it that way because different instruments sense different aspects of a nanoparticle’s dimensions. An electron microscope is telling you something different than a dynamic light scattering instrument, and the researcher needs to understand that.”
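(A small illustration from me of why the ‘size’ depends on the instrument; this is not one of the NIST protocols. Dynamic light scattering never images the particle: it measures how fast particles diffuse and converts that, via the Stokes–Einstein relation, into a hydrodynamic diameter, so any stabilizer coating or bound solvent layer inflates the result relative to the bare core an electron microscope would report. The diffusion coefficient below is an assumed value chosen purely for illustration.)

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def hydrodynamic_diameter(diffusion_coeff_m2_s, temp_k=298.15, viscosity_pa_s=8.9e-4):
    """Stokes-Einstein: d_h = k_B * T / (3 * pi * eta * D). Defaults: water at 25 C."""
    return K_B * temp_k / (3 * math.pi * viscosity_pa_s * diffusion_coeff_m2_s)

# Hypothetical DLS measurement of a nominally 30 nm gold reference particle
measured_D = 1.4e-11   # m^2/s, assumed value for illustration
d_h = hydrodynamic_diameter(measured_D)
print(f"hydrodynamic diameter ~ {d_h * 1e9:.1f} nm")
# ~35 nm here: larger than the ~30 nm core a TEM image would report, because
# DLS includes the solvation/stabilizer shell in its notion of "size".
```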

The nano-EHS protocols offered by the NIST site, Kaiser says, could form the basis for consensus-based, formal test methods such as those published by ASTM and ISO.

NIST’s nano-EHS protocol site currently lists 12 different protocols in three categories: sample preparation, physico-chemical measurements and toxicological measurements. More protocols will be added as they are validated and documented. Suggestions for additional protocols are welcome at nanoprotocols@nist.gov.

The next item concerns European nanomedicine.

CEA-LETI and Europe’s first nanomedicine characterization laboratory

A July 1, 2015 news item on Nanotechnology Now describes the partnership which has led to launch of the new laboratory,

CEA-Leti today announced the launch of the European Nano-Characterisation Laboratory (EU-NCL) funded by the European Union’s Horizon 2020 research and innovation programme[1]. Its main objective is to reach a level of international excellence in nanomedicine characterisation for medical indications like cancer, diabetes, inflammatory diseases or infections, and make it accessible to all organisations developing candidate nanomedicines prior to their submission to regulatory agencies to get the approval for clinical trials and, later, marketing authorization.

“As reported in the ETPN White Paper[2], there is a lack of infrastructure to support nanotechnology-based innovation in healthcare,” said Patrick Boisseau, head of business development in nanomedicine at CEA-Leti and chairman of the European Technology Platform Nanomedicine (ETPN). “Nanocharacterisation is the first bottleneck encountered by companies developing nanotherapeutics. The EU-NCL project is of most importance for the nanomedicine community, as it will contribute to the competitiveness of nanomedicine products and tools and facilitate regulation in Europe.”

EU-NCL is partnered with the sole international reference facility, the Nanotechnology Characterization Lab of the National Cancer Institute in the U.S. (US-NCL)[3], to get faster international harmonization of analytical protocols.

“We are excited to be part of this cooperative arrangement between Europe and the U.S.,” said Scott E. McNeil, director of U.S. NCL. “We hope this collaboration will help standardize regulatory requirements for clinical evaluation and marketing of nanomedicines internationally. This venture holds great promise for using nanotechnologies to overcome cancer and other major diseases around the world.”

A July 2, 2015 EMPA (Swiss Federal Laboratories for Materials Science and Technology) news release on EurekAlert provides more detail about the laboratory and the partnerships,

The «European Nanomedicine Characterization Laboratory» (EU-NCL), which was launched on 1 June 2015, has a clear-cut goal: to help bring more nanomedicine candidates into the clinic and on the market, for the benefit of patients and the European pharmaceutical industry. To achieve this, EU-NCL is partnered with the sole international reference facility, the «Nanotechnology Characterization Laboratory» (US-NCL) of the US-National Cancer Institute, to get faster international harmonization of analytical protocols. EU-NCL is also closely connected to national medicine agencies and the European Medicines Agency to continuously adapt its analytical services to requests of regulators. EU-NCL is designed, organized and operated according to the highest EU regulatory and quality standards. «We are excited to be part of this cooperative project between Europe and the U.S.,» says Scott E. McNeil, director of US-NCL. «We hope this collaboration will help standardize regulatory requirements for clinical evaluation and marketing of nanomedicines internationally. This venture holds great promise for using nanotechnologies to overcome cancer and other major diseases around the world.»

Nine partners from eight countries

EU-NCL, which is funded by the EU for a four-year period with nearly 5 million Euros, brings together nine partners from eight countries: CEA-Tech in Leti and Liten, France, the coordinator of the project; the Joint Research Centre of the European Commission in Ispra, Italy; European Research Services GmbH in Münster Germany; Leidos Biomedical Research, Inc. in Frederick, USA; Trinity College in Dublin, Ireland; SINTEF in Oslo, Norway; the University of Liverpool in the UK; Empa, the Swiss Federal Laboratories for Materials Science and Technology in St. Gallen, Switzerland; Westfälische Wilhelms-Universität (WWU) and Gesellschaft für Bioanalytik, both in Münster, Germany. Together, the partnering institutions will provide a trans-disciplinary testing infrastructure covering a comprehensive set of preclinical characterization assays (physical, chemical, in vitro and in vivo biological testing), which will allow researchers to fully comprehend the biodistribution, metabolism, pharmacokinetics, safety profiles and immunological effects of their medicinal nano-products. The project will also foster the use and deployment of standard operating procedures (SOPs), benchmark materials and quality management for the preclinical characterization of medicinal nano-products. Yet another objective is to promote intersectoral and interdisciplinary communication among key drivers of innovation, especially between developers and regulatory agencies.

The goal: to bring safe and efficient nano-therapeutics faster to the patient

Within EU-NCL, six analytical facilities will offer transnational access to their existing analytical services for public and private developers, and will also develop new or improved analytical assays to keep EU-NCL at the cutting edge of nanomedicine characterization. A complementary set of networking activities will enable EU-NCL to deliver to European academic or industrial scientists the high-quality analytical services they require for accelerating the industrial development of their candidate nanomedicines. The Empa team of Peter Wick at the «Particles-Biology Interactions» lab will be in charge of the quality management of all analytical methods, a key task to guarantee the best possible reproducibility and comparability of the data between the various analytical labs within the consortium. «EU-NCL supports our research activities in developing innovative and safe nanomaterials for healthcare within an international network, which will actively shape future standards in nanomedicine and strengthen Empa as an enabler to facilitate the transfer of novel nanomedicines from bench to bedside», says Wick.

You can find more information about the laboratory on the Horizon 2020 (a European Union science funding programme) project page for the EU-NCL laboratory. For anyone curious about CEA-Leti, it’s a double-layered organization. CEA is France’s Commission on Atomic Energy and Alternative Energy (Commissariat à l’énergie atomique et aux énergies alternatives); you can go here to their French language site (there is an English language clickable option on the page). Leti is one of the CEA’s institutes and is known as either Leti or CEA-Leti. I have no idea what Leti stands for. Here’s the Leti website (this is the English language version).

LiquiGlide, a nanotechnology-enabled coating for food packaging and oil and gas pipelines

Getting condiments out of their bottles should be a lot easier in several European countries in the near future. A June 30, 2015 news item on Nanowerk describes the technology and the business deal (Note: A link has been removed),

The days of wasting condiments — and other products — that stick stubbornly to the sides of their bottles may be gone, thanks to MIT [Massachusetts Institute of Technology] spinout LiquiGlide, which has licensed its nonstick coating to a major consumer-goods company.

Developed in 2009 by MIT’s Kripa Varanasi and David Smith, LiquiGlide is a liquid-impregnated coating that acts as a slippery barrier between a surface and a viscous liquid. Applied inside a condiment bottle, for instance, the coating clings permanently to its sides, while allowing the condiment to glide off completely, with no residue.

In 2012, amidst a flurry of media attention following LiquiGlide’s entry in MIT’s $100K Entrepreneurship Competition, Smith and Varanasi founded the startup — with help from the Institute — to commercialize the coating.

Today [June 30, 2015], Norwegian consumer-goods producer Orkla has signed a licensing agreement to use LiquiGlide’s coating for mayonnaise products sold in Germany, Scandinavia, and several other European nations. This comes on the heels of another licensing deal, with Elmer’s [Elmer’s Glue & Adhesives], announced in March [2015].

A June 30, 2015 MIT news release, which originated the news item, provides more details about the researcher/entrepreneurs’ plans,

But this is only the beginning, says Varanasi, an associate professor of mechanical engineering who is now on LiquiGlide’s board of directors and chief science advisor. The startup, which just entered the consumer-goods market, is courting deals with numerous producers of foods, beauty supplies, and household products. “Our coatings can work with a whole range of products, because we can tailor each coating to meet the specific requirements of each application,” Varanasi says.

Apart from providing savings and convenience, LiquiGlide aims to reduce the surprising amount of wasted products — especially food — that stick to container sides and get tossed. For instance, in 2009 Consumer Reports found that up to 15 percent of bottled condiments are ultimately thrown away. Keeping bottles clean, Varanasi adds, could also drastically cut the use of water and energy, as well as the costs associated with rinsing bottles before recycling. “It has huge potential in terms of critical sustainability,” he says.

Varanasi says LiquiGlide aims next to tackle buildup in oil and gas pipelines, which can cause corrosion and clogs that reduce flow. [emphasis mine] Future uses, he adds, could include coatings for medical devices such as catheters, deicing roofs and airplane wings, and improving manufacturing and process efficiency. “Interfaces are ubiquitous,” he says. “We want to be everywhere.”

The news release goes on to describe the research process in more detail and offers a plug for MIT’s innovation efforts,

LiquiGlide was originally developed while Smith worked on his graduate research in Varanasi’s research group. Smith and Varanasi were interested in preventing ice buildup on airplane surfaces and methane hydrate buildup in oil and gas pipelines.

Some initial work was on superhydrophobic surfaces, which trap pockets of air and naturally repel water. But both researchers found that these surfaces don’t, in fact, shed every bit of liquid. During phase transitions — when vapor turns to liquid, for instance — water droplets condense within microscopic gaps on surfaces and steadily accumulate, and the surface loses its anti-icing properties. “Something that is nonwetting to macroscopic drops does not remain nonwetting for microscopic drops,” Varanasi says.

Inspired by the work of researcher David Quéré, of ESPCI in Paris, on slippery “hemisolid-hemiliquid” surfaces, Varanasi and Smith invented permanently wet “liquid-impregnated surfaces” — coatings that don’t have such microscopic gaps. The coatings consist of textured solid material that traps a liquid lubricant through capillary and intermolecular forces. The coating wicks through the textured solid surface, clinging permanently under the product, allowing the product to slide off the surface easily; other materials can’t enter the gaps or displace the coating. “One can say that it’s a self-lubricating surface,” Varanasi says.
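
The news release doesn’t spell out the underlying math, but the standard hemiwicking criterion from the wetting literature (the sort of relation used in work like Quéré’s, mentioned above) gives a feel for when a texture will wick and hold a lubricant. In this simplified form, \(r \ge 1\) is the roughness (true surface area divided by projected area), \(\phi_s\) is the fraction of the textured top surface left exposed, and \(\theta\) is the lubricant’s equilibrium contact angle on the flat solid:

\[
\cos\theta_c = \frac{1-\phi_s}{r-\phi_s}, \qquad \text{the lubricant impregnates the texture when } \theta < \theta_c .
\]

On a flat surface (\(r = 1\)) the critical angle drops to zero, so only a perfectly wetting liquid would stay put; the rougher the texture, the wider the range of lubricants it can trap.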

Mixing and matching the materials, however, is a complicated process, Varanasi says. Liquid components of the coating, for instance, must be compatible with the chemical and physical properties of the sticky product, and generally immiscible. The solid material must form a textured structure while adhering to the container. And the coating can’t spoil the contents: Foodstuffs, for instance, require safe, edible materials, such as plants and insoluble fibers.

To help choose ingredients, Smith and Varanasi developed the basic scientific principles and algorithms that calculate how the liquid and solid coating materials, the product, and the geometry of the surface structures will all interact, in order to find the optimal “recipe.”
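
The article doesn’t describe the algorithm itself, so here is only a minimal sketch of what that kind of screening could look like; the classes, fields and checks are my own illustrative assumptions, reusing the hemiwicking criterion above, and are not LiquiGlide’s actual model or data.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: these classes, fields and checks are assumptions
# for demonstration, not LiquiGlide's actual model or data.

@dataclass
class Texture:
    roughness: float       # r = true surface area / projected area, r >= 1
    solid_fraction: float  # phi_s = fraction of the top surface that is solid

@dataclass
class Lubricant:
    name: str
    contact_angle_deg: float    # equilibrium contact angle on the flat solid
    food_safe: bool
    miscible_with_product: bool

def impregnates(texture: Texture, lubricant: Lubricant) -> bool:
    """Hemiwicking check: the lubricant wicks into the texture when its
    contact angle is below theta_c, where cos(theta_c) = (1 - phi_s) / (r - phi_s)."""
    cos_theta_c = (1.0 - texture.solid_fraction) / (texture.roughness - texture.solid_fraction)
    cos_theta_c = max(-1.0, min(1.0, cos_theta_c))  # guard against rounding outside [-1, 1]
    theta_c = math.degrees(math.acos(cos_theta_c))
    return lubricant.contact_angle_deg < theta_c

def viable_recipe(texture: Texture, lubricant: Lubricant, food_product: bool) -> bool:
    """Screen one texture/lubricant pair against the constraints the article lists:
    the texture must trap the lubricant, the lubricant must not mix with the
    product, and a food product needs food-safe ingredients."""
    if not impregnates(texture, lubricant):
        return False
    if lubricant.miscible_with_product:
        return False
    if food_product and not lubricant.food_safe:
        return False
    return True
```

A real formulation tool would of course fold in many more material properties, but the pass/fail structure is the point here.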

Today, LiquiGlide develops coatings for clients and licenses the recipes to them. Included are instructions that detail the materials, equipment, and process required to create and apply the coating for their specific needs. “The state of the coating we end up with depends entirely on the properties of the product you want to slide over the surface,” says Smith, now LiquiGlide’s CEO.

Having researched materials for hundreds of different viscous liquids over the years — from peanut butter to crude oil to blood — LiquiGlide also has a database of optimal ingredients for its algorithms to pull from when customizing recipes. “Given any new product you want LiquiGlide for, we can zero in on a solution that meets all requirements necessary,” Varanasi says.
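
Continuing the sketch above, pulling candidates from such a database might look something like this; the entries and numbers are invented, and the code reuses the Texture, Lubricant and viable_recipe definitions from the previous block.

```python
# Hypothetical ingredient "database"; all values are made up for illustration.
candidates = [
    Lubricant("edible plant oil", contact_angle_deg=20.0, food_safe=True,  miscible_with_product=False),
    Lubricant("silicone oil",     contact_angle_deg=35.0, food_safe=False, miscible_with_product=False),
]
bottle_texture = Texture(roughness=2.5, solid_fraction=0.2)

usable = [lub.name for lub in candidates if viable_recipe(bottle_texture, lub, food_product=True)]
print(usable)  # -> ['edible plant oil'] with these made-up numbers
```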

MIT: A lab for entrepreneurs

For years, Smith and Varanasi toyed around with commercial applications for LiquiGlide. But in 2012, with help from MIT’s entrepreneurial ecosystem, LiquiGlide went from lab to market in a matter of months.

Initially the idea was to bring coatings to the oil and gas industry. But one day, in early 2012, Varanasi saw his wife struggling to pour honey from its container. “And I thought, ‘We have a solution for that,’” Varanasi says.

The focus then became consumer packaging. Smith and Varanasi took the idea through several entrepreneurship classes — such as 6.933 (Entrepreneurship in Engineering: The Founder’s Journey) — and MIT’s Venture Mentoring Service and Innovation Teams, where student teams research the commercial potential of MIT technologies.

“I did pretty much every last thing you could do,” Smith says. “Because we have such a brilliant network here at MIT, I thought I should take advantage of it.”

That May [2012], Smith, Varanasi, and several MIT students entered LiquiGlide in the MIT $100K Entrepreneurship Competition, earning the Audience Choice Award — and the national spotlight. A video of ketchup sliding out of a LiquiGlide-coated bottle went viral. Numerous media outlets picked up the story, while hundreds of companies reached out to Varanasi to buy the coating. “My phone didn’t stop ringing, my website crashed for a month,” Varanasi says. “It just went crazy.”

That summer [2012], Smith and Varanasi took their startup idea to MIT’s Global Founders’ Skills Accelerator program, which introduced them to a robust network of local investors and helped them build a solid business plan. Soon after, they raised money from family and friends, and won $100,000 at the MassChallenge Entrepreneurship Competition.

When LiquiGlide Inc. launched in August 2012, clients were already knocking down the door. The startup chose a select number of them to pay for the development and testing of the coating for their products. Within a year, LiquiGlide was cash-flow positive, and had grown from three to 18 employees in its current Cambridge headquarters.

Looking back, Varanasi attributes much of LiquiGlide’s success to MIT’s innovation-based ecosystem, which promotes rapid prototyping for the marketplace through experimentation and collaboration. This ecosystem includes the Deshpande Center for Technological Innovation, the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service, and the Technology Licensing Office, among other initiatives. “Having a lab where we could think about … translating the technology to real-world applications, and having this ability to meet people, and bounce ideas … that whole MIT ecosystem was key,” Varanasi says.

Here’s the latest LiquiGlide video,


Credits:

Video: Melanie Gonick/MIT
Additional footage courtesy of LiquiGlide™
Music sampled from “Candlepower” by Chris Zabriskie
https://freemusicarchive.org/music/Ch…
http://creativecommons.org/licenses/b…

I had thought the EU (European Union) put up more roadblocks than the US to marketing nanotechnology-enabled products used in food packaging. If anyone knows why a US company would market its products in Europe first, I would love to find out.