It seems past time for someone to have developed an app for nanomaterial risks. A Nov. 12, 2015 news item on Nanowerk makes the announcement (Note: A link has been removed),
The NanoRisk App is a guide to help the researcher in the risk assessment of nanomaterials. This evaluation is determined based on the physicochemical characteristics and the activities to be carried out by staff in research laboratories.
The NanoRisk App was developed at the University of Los Andes or Universidad de los Andes in Colombia (there also seems to be one in Chile). From the Nano Risk App homepage,
The NanoRisk App application was developed at the University of Los Andes by the Department of Chemical Engineering and the Department of Electrical and Electronic Engineering, Faculty of Engineering and implemented in cooperation with the Department of Occupational Health at the University of Los Andes. This application focuses on the use of manufactured nanomaterials.
Homero Fernando Pastrana Rendón, MD, MSc, PhD Candidate. Alba Graciela Ávila, Associate Professor, Department of Electrical and Electronic Engineering. Felipe Muñoz Giraldo, Associate Professor, Department of Chemical Engineering, University of Los Andes.
Acknowledgements to Diego Angulo and Diana Fernandez, from the Imagine group, for all the support in the development of this application.
About the App
The app is a guide to help the researcher in the risk assessment of nanomaterials. This evaluation is determined based on the physicochemical characteristics and the activities to be carried out by staff in research laboratories. This is based on nano risk management strategies from various institutions such as the National Institute for Occupational Safety and Health, U.S. (NIOSH), the New Energy and Industrial Technology Development Organization of Japan (NEDO), the European Commission (Nanosafe Program) and the work developed by the Lawrence Livermore National Laboratory (California, USA) in conjunction with the Safety Science Group at the University of Delft in the Netherlands.
The app estimates the risk at four levels (low, medium, high and very high) for the hazard of the nanomaterial and the probability of exposure to the material. It then recommends measures to contain the risk by applying engineering controls (controlled ventilation system, biosafety cabinet and glovebox).
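The logic described here resembles classic control banding: cross a hazard band with an exposure band and map the combined level to a containment measure. Here is a minimal sketch of that idea — the band-combination rule and the control mapping below are my own illustrative assumptions, not the NanoRisk App's actual rules:

```python
# Hypothetical control-banding sketch; the mapping below is an assumption,
# not the NanoRisk App's actual decision table.
LEVELS = ["low", "medium", "high", "very high"]

# Assumed engineering control for each combined risk level.
CONTROLS = {
    "low": "controlled ventilation system",
    "medium": "controlled ventilation system",
    "high": "biosafety cabinet",
    "very high": "glovebox",
}

def combined_risk(hazard, exposure):
    """Combine hazard and exposure bands by taking the worse of the two."""
    worst = max(LEVELS.index(hazard), LEVELS.index(exposure))
    return LEVELS[worst]

def recommend(hazard, exposure):
    """Return the combined risk band and the assumed control measure."""
    risk = combined_risk(hazard, exposure)
    return risk, CONTROLS[risk]

print(recommend("medium", "very high"))  # ('very high', 'glovebox')
```

A real tool would derive the hazard band from physicochemical characteristics (size, solubility, known toxicity) and the exposure band from the laboratory activity, but the worse-of-the-two combination is a common conservative choice in control-banding schemes.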
They have a copyright notice on the page, as well as, instructions on how to access the App and the information.
Because they are lawyers, I was intrigued by a Nov. 4, 2015 article on managing nanotechnology risks by Michael Lisak and James Mizgala of Sidley Austin LLP for Industry Week. I was also intrigued by the language (Note: A link has been removed),
The inclusion of nanotechnologies within manufacturing processes and products has increased exponentially over the past decade. Fortune recently noted that nanotechnology touches almost all Fortune 500 companies and that the industry’s $20 billion worldwide size is expected to double over the next decade. [emphasis mine]
Yet, potential safety issues have been raised and regulatory uncertainties persist. As such, proactive manufacturers seeking to protect their employees, consumers, the environment and their businesses – while continuing to develop, manufacture and market their products – may face difficult choices in how to best navigate this challenging and fluid landscape, while avoiding potential “nanotort,” [emphasis mine] whistleblower, consumer fraud and regulatory enforcement lawsuits. Doing so requires forward-thinking advice based upon detailed analyses of each manufacturer’s products and conduct in the context of rapidly evolving scientific, regulatory and legal developments.
I wonder how many terms lawyers are going to coin in addition to “nanotort”?
The lawyers focus largely on two types of nanoparticles: carbon nanotubes, with a special emphasis on multi-walled carbon nanotubes (MWCNTs), and nano titanium dioxide,
Despite this scientific uncertainty, international organizations, such as the International Agency for Research on Cancer [a World Health Organization agency], have already concluded that nano titanium dioxide in its powder form and multi-walled carbon nanotube-7 (“MWCNT-7”) [emphasis mine] are “possibly carcinogenic to humans.” As such, California’s Department of Public Health lists titanium dioxide and MWCNT-7 as “ingredients known or suspected to cause cancer, birth defects, or other reproductive toxicity as determined by the authoritative scientific bodies.” Considering that processed (i.e., non-powdered) titanium dioxide is found in products like toothpaste, shampoo, chewing gum and candies, it is not surprising that some have focused upon such statements.
There’s a lot of poison in the world. For example, apple seeds contain compounds that release cyanide and, for another, peanuts can carry carcinogenic aflatoxins; they can also kill you, as they are poison to people who are allergic to them.
On the occasion of Dunkin’ Donuts removing nano titanium dioxide as an ingredient in the powdered sugar used to coat its donuts, I wrote a March 13, 2015 posting quoting extensively from a 2020 Science blog posting about nano titanium dioxide and Dunkin’ Donuts by Dr. Andrew Maynard (then director of the University of Michigan Risk Science Center, now director of the Risk Innovation Lab at Arizona State University),
He describes some of the research on nano titanium dioxide (Note: Links have been removed),
… In 2004 the European Food Safety Authority carried out a comprehensive safety review of the material. After considering the available evidence on the same materials that are currently being used in products like Dunkin’ Donuts, the review panel concluded that there was no evidence for safety concerns.
Most research on titanium dioxide nanoparticles has been carried out on ones that are inhaled, not ones we eat. Yet nanoparticles in the gut are a very different proposition to those that are breathed in.
Studies into the impacts of ingested nanoparticles are still in their infancy, and more research is definitely needed. Early indications are that the gastrointestinal tract is pretty good at handling small quantities of these fine particles. This stands to reason given the naturally occurring nanoparticles we inadvertently eat every day, from charred foods and soil residue on veggies and salad, to more esoteric products such as clay-baked potatoes. There’s even evidence that nanoparticles occur naturally inside the gastrointestinal tract.
You can find Andrew’s entire discussion in his March 12, 2015 post on the 2020 Science blog. Andrew had written earlier in a July 12, 2014 posting about something he terms ‘nano donut math’ as reported by As You Sow, the activist group that made a Dunkin’ Donuts shareholder proposal which resulted in the company’s decision to stop using nano titanium dioxide in the powdered sugar found on their donuts. In any event, Andrew made this point,
In other words, if a Dunkin’ Donut Powdered Cake Donut contained 8.9 mg of TiO2 particles smaller than 10 nm, it would have to have been doused with over 1 million tons of sugar coating! (Note update at the end of this piece)
Clearly something’s wrong here – either Dunkin’ Donuts are not using food grade TiO2 but a nanopowder with particle so small they would be no use whatsoever in the sugar coating (as well as being incredibly expensive, and not FDA approved). Or there’s something rather wrong with the analysis!
If it’s the latter – and it’s hard to imagine any other plausible reason for the data – it looks like As You Sow ended up using rather dubious figures to back up their stakeholder resolution. I’d certainly be interested in more information on the procedures Analytical Sciences used and the checks and balances they had in place, especially as there are a number of things that can mess up a particle analysis like this.
Update July 14: My bad, I made a slight error in the size distribution calculation first time round. This has been corrected in the article above. Originally, I cited the estimated Mass Median Diameter (MMD) of the TiO2 particles as 167 nm, and the Geometric Standard Deviation (GSD) as 1.6. Correcting an error in the Excel spreadsheet used to calculate the distribution (these things happen!) led to a revised estimate of MMD = 168 nm and a GSD of 1.44. These may look like subtle differences, but when calculating the estimated particle mass below 10 nm, they make a massive difference. With the revised figures, you’d expect less than one trillionth of a percent of the mass of the TiO2 powder to be below 10 nm!! (the original estimate was a tenth of a millionth of a percent). In other words – pretty much nothing! The full analysis can be found here.
Update November 16 2014. Based on this post, As You Sow checked the data from Analytical Sciences LLC and revised the report accordingly. This is noted above.
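Maynard’s “nano donut math” can be checked with a short calculation. For a lognormal particle size distribution, the fraction of total particle mass below a cutoff diameter follows directly from the mass median diameter (MMD) and geometric standard deviation (GSD). This is a sketch using his revised figures (MMD = 168 nm, GSD = 1.44, cutoff 10 nm); the function name is mine:

```python
from math import erf, log, sqrt

def mass_fraction_below(d, mmd, gsd):
    """Fraction of total particle mass below diameter d (same units as mmd)
    for a lognormal distribution with mass median diameter mmd and
    geometric standard deviation gsd."""
    z = log(d / mmd) / log(gsd)
    return 0.5 * (1 + erf(z / sqrt(2)))

# Maynard's revised figures: MMD = 168 nm, GSD = 1.44
frac = mass_fraction_below(10, 168, 1.44)
print(f"{frac * 100:.2e} % of mass below 10 nm")
# prints a value below a trillionth (1e-12) of a percent
```

The result agrees with his conclusion: essentially none of the mass of food-grade TiO2 sits below 10 nm, which is why the 8.9 mg figure in the original report could not have been right.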
It would seem that if there are concerns over nano titanium dioxide in food, the biggest would not be the amounts ingested by consumers but inhalation by workers should they breathe in large quantities when they are handling the material.
As for the MWCNTs, they have long raised alarms, most especially the long MWCNTs and for people handling them during the course of their work day. Any MWCNTs found in sports equipment and other consumer products are bound in the material and don’t pose any danger of being inhaled into the lungs, unless they are released from their bound state (e.g., by fire).
After some searching for MWCNT-7, I found something which seems also to be known as Mitsui MWCNT-7 or Mitsui-7 (here’s one of my sources). As best I understand it, Mitsui is a company that produces an MWCNT which it has branded MWCNT-7 and which has been widely used in nanotoxicity testing. As best I can tell, MWCNT-7 and Mitsui-7 refer to the same material.
The lawyers (Lisak and Mizgala) note things have changed for manufacturers since the early days and they make some suggestions,
One thing is certain – gone are the days when “sophisticated” manufacturers incorporating nanotechnologies within their products can reasonably expect to shield themselves by pointing to scientific and regulatory uncertainties, especially given the amount of money they are spending on research and development, as well as sales and marketing efforts.
Accordingly, manufacturers should consider undertaking meaningful risk management analyses specific to their applicable products. …
First, manufacturers should fully understand the life-cycle of nanomaterials within their organization. For some, nanomaterials may be an explicit focus of innovation and production, making it easier to pinpoint where nanotechnology fits into their processes and products. For others, nanomaterials may exist either higher-up or in the back-end of their products’ supply chain. …
Second, manufacturers should understand and stay current with the scientific state-of-the-art as well as regulatory requirements and developments potentially applicable to their employees, consumers and the environment. An important consideration related to efforts to understand the state-of-the-art is whether or not manufacturers should themselves expend resources to advance “the science” in seeking to help find answers to some of the aforementioned uncertainties. …
The lawyers go on to suggest that manufacturers should consider proactively researching nanotoxicity so as to better defend themselves against any future legal suits.
Encouraging companies to be proactive with toxicity issues is in line with what seems to be an international (Europe and US) regulatory movement putting more onus on producers and manufacturers to take responsibility for safety testing. (This was communicated to me in a conversation I had with an official at the European Union Joint Research Centre, where he mentioned REACH regulations and the new emphasis in response to my mention of similar FDA [US Food and Drug Administration] regulations. We were at the 2014 9th World Congress on Alternatives to Animal Testing in Prague, Czech Republic.)
For anyone interested in the International Agency for Research on Cancer you can find it here.
If you’ve been looking for a practical guide to handling nanomaterials you may find that nanoToGo fills the bill. From an Oct. 23, 2015 posting by Lynn Bergeson for Nanotechnology Now,
In September 2015, “Nano to go!” was published. See http://nanovalid.eu/index.php/nanovalid-publications/306-nanotogo “Nano to go!” is “a practically oriented guidance on safe handling of nanomaterials and other innovative materials at the workplace.” The German Federal Institute for Occupational Safety and Health (BAuA) developed it within the NanoValid project.
From the nanoToGo webpage on the NanoValid project website (Note: Links have been removed),
Nano to go! contains a brochure, field studies, presentations and general documents to comprehensively support risk assessment and risk management. …
The brochure Safe handling of nanomaterials and other advanced materials at workplaces supports risk assessment and risk management when working with nanomaterials. It provides safety strategies and protection measures for handling nanomaterials bound in solid matrices, dissolved in liquids, or in soluble or insoluble powder form, and for handling nanofibres. Additional recommendations are given for storage and disposal of nanomaterials, for protection from fire and explosion, for training and instruction courses, and for occupational health.
The field studies comprise practical examples of expert assessment of safety and health at different workplaces. They contain detailed descriptions of several exposure measurements at pilot plants and laboratories. The reports describe methods, sampling strategies and devices, summarise and discuss results, and combine measurements and non-measurement methods.
Useful information, templates and examples, such as operating instructions, a sampling protocol, a dialogue guide and a short introduction to safety management and nanomaterials.
Ready to use presentations for university lecturers, supervisors and instruction courses, complemented with explanatory notes.
The ‘brochure’ is 56 pages; I would have called it a manual.
The EU FP7 [Framework Programme 7] large-scale integrating project NanoValid (contract: 263147) has been launched on the 1st of November 2011, as one of the “flagship” nanosafety projects. The project consists of 24 European partners from 14 different countries and 6 partners from Brazil, Canada, India and the US and will run from 2011 to 2015, with a total budget of more than 13 mio EUR (EC contribution 9.6 mio EUR). Main objective of NanoValid is to develop a set of reliable reference methods and materials for the fabrication, physicochemical (pc) characterization, hazard identification and exposure assessment of engineered nanomaterials (EN), including methods for dispersion control and labelling of ENs. Based on newly established reference methods, current approaches and strategies for risk and life cycle assessment will be improved, modified and further developed, and their feasibility assessed by means of practical case studies.
If you want to learn how something works, one strategy is to take it apart and put it back together again [also known as reverse engineering]. For 10 years, a global initiative called the Blue Brain Project–hosted at the Ecole Polytechnique Federale de Lausanne (EPFL)–has been attempting to do this digitally with a section of juvenile rat brain. The project presents a first draft of this reconstruction, which contains over 31,000 neurons, 55 layers of cells, and 207 different neuron subtypes, on October 8 in Cell.
Heroic efforts are currently being made to define all the different types of neurons in the brain, to measure their electrical firing properties, and to map out the circuits that connect them to one another. These painstaking efforts are giving us a glimpse into the building blocks and logic of brain wiring. However, getting a full, high-resolution picture of all the features and activity of the neurons within a brain region and the circuit-level behaviors of these neurons is a major challenge.
Henry Markram and colleagues have taken an engineering approach to this question by digitally reconstructing a slice of the neocortex, an area of the brain that has benefitted from extensive characterization. Using this wealth of data, they built a virtual brain slice representing the different neuron types present in this region and the key features controlling their firing and, most notably, modeling their connectivity, including nearly 40 million synapses and 2,000 connections between each brain cell type.
“The reconstruction required an enormous number of experiments,” says Markram, of the EPFL. “It paves the way for predicting the location, numbers, and even the amount of ion currents flowing through all 40 million synapses.”
Once the reconstruction was complete, the investigators used powerful supercomputers to simulate the behavior of neurons under different conditions. Remarkably, the researchers found that, by slightly adjusting just one parameter, the level of calcium ions, they could produce broader patterns of circuit-level activity that could not be predicted based on features of the individual neurons. For instance, slow synchronous waves of neuronal activity, which have been observed in the brain during sleep, were triggered in their simulations, suggesting that neural circuits may be able to switch into different “states” that could underlie important behaviors.
“An analogy would be a computer processor that can reconfigure to focus on certain tasks,” Markram says. “The experiments suggest the existence of a spectrum of states, so this raises new types of questions, such as ‘what if you’re stuck in the wrong state?'” For instance, Markram suggests that the findings may open up new avenues for explaining how initiating the fight-or-flight response through the adrenocorticotropic hormone yields tunnel vision and aggression.
The Blue Brain Project researchers plan to continue exploring the state-dependent computational theory while improving the model they’ve built. All of the results to date are now freely available to the scientific community at https://bbp.epfl.ch/nmc-portal.
Published by the renowned journal Cell, the paper is the result of a massive effort by 82 scientists and engineers at EPFL and at institutions in Israel, Spain, Hungary, USA, China, Sweden, and the UK. It represents the culmination of 20 years of biological experimentation that generated the core dataset, and 10 years of computational science work that developed the algorithms and built the software ecosystem required to digitally reconstruct and simulate the tissue.
The Hebrew University of Jerusalem’s Prof. Idan Segev, a senior author of the research paper, said: “With the Blue Brain Project, we are creating a digital reconstruction of the brain and using supercomputer simulations of its electrical behavior to reveal a variety of brain states. This allows us to examine brain phenomena within a purely digital environment and conduct experiments previously only possible using biological tissue. The insights we gather from these experiments will help us to understand normal and abnormal brain states, and in the future may have the potential to help us develop new avenues for treating brain disorders.”
Segev, a member of the Hebrew University’s Edmond and Lily Safra Center for Brain Sciences and director of the university’s Department of Neurobiology, sees the paper as building on the pioneering work of the Spanish anatomist Ramon y Cajal from more than 100 years ago: “Ramon y Cajal began drawing every type of neuron in the brain by hand. He even drew in arrows to describe how he thought the information was flowing from one neuron to the next. Today, we are doing what Cajal would be doing with the tools of the day: building a digital representation of the neurons and synapses, and simulating the flow of information between neurons on supercomputers. Furthermore, the digitization of the tissue is open to the community and allows the data and the models to be preserved and reused for future generations.”
While a long way from digitizing the whole brain, the study demonstrates that it is feasible to digitally reconstruct and simulate brain tissue, and most importantly, to reveal novel insights into the brain’s functioning. Simulating the emergent electrical behavior of this virtual tissue on supercomputers reproduced a range of previous observations made in experiments on the brain, validating its biological accuracy and providing new insights into the functioning of the neocortex. This is a first step and a significant contribution to Europe’s Human Brain Project, which Henry Markram founded, and where EPFL is the coordinating partner.
Cell has made a video abstract available (it can be found with the Hebrew University of Jerusalem press release)
Here’s a link to and a citation for the paper,
Reconstruction and Simulation of Neocortical Microcircuitry by Henry Markram, Eilif Muller, Srikanth Ramaswamy, Michael W. Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, Guy Antoine Atenekeng Kahou, Thomas K. Berger, Ahmet Bilgili, Nenad Buncic, Athanassia Chalimourda, Giuseppe Chindemi, Jean-Denis Courcol, Fabien Delalondre, Vincent Delattre, Shaul Druckmann, Raphael Dumusc, James Dynes, Stefan Eilemann, Eyal Gal, Michael Emiel Gevaert, Jean-Pierre Ghobril, Albert Gidon, Joe W. Graham, Anirudh Gupta, Valentin Haenel, Etay Hay, Thomas Heinis, Juan B. Hernando, Michael Hines, Lida Kanari, Daniel Keller, John Kenyon, Georges Khazen, Yihwa Kim, James G. King, Zoltan Kisvarday, Pramod Kumbhar, Sébastien Lasserre, Jean-Vincent Le Bé, Bruno R.C. Magalhães, Angel Merchán-Pérez, Julie Meystre, Benjamin Roy Morrice, Jeffrey Muller, Alberto Muñoz-Céspedes, Shruti Muralidhar, Keerthan Muthurasa, Daniel Nachbaur, Taylor H. Newton, Max Nolte, Aleksandr Ovcharenko, Juan Palacios, Luis Pastor, Rodrigo Perin, Rajnish Ranjan, Imad Riachi, José-Rodrigo Rodríguez, Juan Luis Riquelme, Christian Rössert, Konstantinos Sfyrakis, Ying Shi, Julian C. Shillcock, Gilad Silberberg, Ricardo Silva, Farhan Tauheed, Martin Telefont, Maria Toledo-Rodriguez, Thomas Tränkler, Werner Van Geit, Jafet Villafranca Díaz, Richard Walker, Yun Wang, Stefano M. Zaninetta, Javier DeFelipe, Sean L. Hill, Idan Segev, Felix Schürmann. Cell, Volume 163, Issue 2, p456–492, 8 October 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.09.029
This paper appears to be open access.
My most substantive description of the Blue Brain Project, previous to this, was in a Jan. 29, 2013 posting featuring the European Union’s (EU) Human Brain Project and involvement from countries that are not members.
* I edited a redundant lede (That’s a virtual slice of a rat brain.), moved the second sentence to the lede while adding this: *about this virtual brain slice* on Oct. 16, 2015 at 0955 hours PST.
The resistance is coming from the Nanotechnology Industries Association (NIA) and it concerns some upcoming legislation in the European Union. From a Sept. 24, 2015 news item on Nanowerk,
In October, the European Union Parliament is expected to vote on legislation that repeals Regulation No 258/97 and replaces it with the ‘Regulation on Novel Foods 2013/045(COD)’, which will fundamentally change how nanomaterials in food are regulated. The Nanotechnology Industries Association has issued the following statement on the upcoming vote:
After reviewing the draft European legislation updating the ‘Regulation on Novel Foods 2013/045(COD)’, it has become clear to the Nanotechnology Industries Association and within the nanotech supply chain that the proposed changes are unworkable. It is vague, unclear and contradicts firmly established nanomaterial regulations that have been effectively used by European institutions for years. Implementing it will create new, unnecessary challenges for SMEs, the drivers of economic growth, aiming to use nanotechnology to improve the daily lives of Europeans.
To include materials that are “composed of discrete functional parts….which have one or more dimensions of the order of 100 nm or less” fundamentally changes the accepted definition of engineered nanomaterials and risks countless products being caught in disproportional regulation. The term “discrete functional parts” adds further complexity as it has little scientific basis, opening it up to a wide range of interpretation when put into practice. This will drive innovators to avoid any products that could possibly be caught in this broad, unclear definition and, ultimately, consumers will miss out on the benefits.
Innovators that can overcome this uncertainty and continue to utilize cutting-edge nanomaterials, will then face a new requirement that their safety tests be ‘the most up-to-date’ – a vague term with no formal definition. This leaves companies subject to unpredictable changes in testing requirements with little notice, a burden no other industry faces.
Finally, if implemented, the text would require the European commission to change the definition of ‘engineered nanomaterial’ in the Food Information to Consumers Regulation. However, regulations cannot be updated overnight, which means companies will be faced with two parallel and competing definitions for an indeterminate period of time.
The Council claims that this regulation will ‘reduce administrative burdens,’ however, we believe it will achieve the opposite. Industry and innovators need regulation that is clear and grounded in science. It ensures they know when they are subject to nanomaterial regulations and are prepared to meet all the requirements. This approach provides them with predictability in regulations and assures the public that the nanomaterials used are safe. A conclusion all parties can welcome.
NIA urges Members of the European Parliament not to create uncertainty in a sector that is a leader in European innovation, and to engage in direct discussions with the European Commission, which is already working on a review of the European Commission Recommendation for a Definition of a Nanomaterial with the Joint Research Centre.
While it might be tempting to lambaste the NIA for resisting regulation of nanomaterials in foods, the organization does have a point. Personally, I would approach it by emphasizing that there has been a problem with the European Union definition for nanomaterials (issued in 2011) and that it is currently being addressed by the Joint Research Centre (JRC), which has issued a series of three reports, the latest in July 2015. (Note: There have been issues with the European Union definition since it was first issued and it would seem more logical to wait until that matter has been settled before changing regulations.) From a July 10, 2015 JRC press release,
The JRC has published science-based options to improve the clarity and the practical application of the EC recommendation on the definition of a nanomaterial. This is the last JRC report in a series of three, providing the scientific support to the Commission in its review of the definition used to identify materials for which special provisions might apply (e.g. for ingredient labelling or safety assessment). The Commission’s review process continues, assessing the options against policy issues.
As the definition should be broadly applicable in different regulatory sectors, the report suggests that the scope of the definition regarding the origin of nanomaterials should remain unchanged, addressing natural, incidental and manufactured nanomaterials. Furthermore, size as the sole defining property of a nanoparticle, as well as the range of 1 nm to 100 nm as definition of the nanoscale should be maintained.
On the other hand, several issues seem to deserve attention in terms of clarification of the definition and/or provision of additional implementation guidance. These include:
The terms “particle”, “particle size”, “external dimension” and “constituent particles”.
Consequences of the possibility of varying the current 50% threshold for the particle number fraction (if more than half of the particles have one or more external dimensions between 1 nm and 100 nm the material is a nanomaterial): variable thresholds may allow regulators to address specific concerns in certain application areas, but may also confuse customers and lead to an inconsistent classification of the same material based on the field of application.
Ambiguity on the role of the volume-specific surface area (VSSA): The potential use of VSSA should be clarified and ambiguities arising from the current wording should be eliminated.
The methods to prove that a material is not a nanomaterial: The definition makes it very difficult to prove that a material is not a nanomaterial. This issue could be resolved by adding an additional criterion.
The list of explicitly included materials (fullerenes, graphene flakes and single wall carbon nanotubes even with one or more external dimensions below 1 nm): This list does not include non-carbon based materials with a structure similar to carbon nanotubes. A modification (or removal) of the current list could avoid inconsistencies.
A clearer wording in the definition could prevent the misunderstanding that products containing nanoparticles become nanomaterials themselves.
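The 50% number-fraction criterion at the heart of the recommendation can be stated compactly: a material is a nanomaterial if more than half of its constituent particles, counted by number, have at least one external dimension between 1 nm and 100 nm. Here is a minimal sketch of that single criterion (it deliberately ignores the VSSA proxy, agglomerates/aggregates, and the explicitly listed materials; the function name and data layout are my own):

```python
def is_nanomaterial(particle_dims_nm, threshold=0.5):
    """Apply the EC recommendation's number-fraction criterion.

    particle_dims_nm: one entry per particle, each a tuple of that
    particle's measured external dimensions in nm.
    Returns True if more than `threshold` of the particles have at
    least one dimension in the 1-100 nm range.
    """
    in_range = sum(
        1 for dims in particle_dims_nm
        if any(1 <= d <= 100 for d in dims)
    )
    return in_range / len(particle_dims_nm) > threshold

# Three of these four particles have a dimension at the nanoscale,
# so the number fraction is 0.75 and the criterion is met:
particles = [(5, 5, 5), (80, 80, 200), (150, 150, 150), (50, 50, 50)]
print(is_nanomaterial(particles))  # True
```

The variable-threshold issue raised above is visible here: changing `threshold` per application area would flip the classification of borderline materials, which is exactly the inconsistency the report warns about.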
Many of the issues addressed in the report can be clarified by developing new or improved guidance. Also the need for specific guidance beyond clarification of the definition itself is identified. However, relying only on guidance documents for essential parts of the definition may lead to unintended differences in the implementation and decision making. Therefore, also possibilities to introduce more clarity in the definition itself are listed above and discussed in the report.
JRC will continue to support the review process of the definition and its implementation in EU legislation.
Here is an Oct. 18, 2011 posting where I featured some of the issues raised by the European Union definition.
First off, this post features an open access paper summarizing global regulation of nanotechnology in agriculture and food production. From a Sept. 11, 2015 news item on Nanowerk,
An overview of regulatory solutions worldwide on the use of nanotechnology in food and feed production shows a differing approach: only the EU and Switzerland have nano-specific provisions incorporated in existing legislation, whereas other countries count on non-legally binding guidance and standards for industry. Collaboration among countries across the globe is required to share information and ensure protection for people and the environment, according to the paper …
The paper “Regulatory aspects of nanotechnology in the agri/feed/food sector in EU and non-EU countries” reviews how potential risks or the safety of nanotechnology are managed in different countries around the world and recognises that this may have implication on the international market of nano-enabled agricultural and food products.
Nanotechnology offers substantial prospects for the development of innovative products and applications in many industrial sectors, including agricultural production, animal feed and treatment, food processing and food contact materials. While some applications are already marketed, many other nano-enabled products are currently under research and development, and may enter the market in the near future. Expected benefits of such products include increased efficacy of agrochemicals through nano-encapsulation, enhanced bioavailability of nutrients or more secure packaging material through microbial nanoparticles.
As with any other regulated product, applicants applying for market approval have to demonstrate the safe use of such new products without posing undue safety risks to the consumer and the environment. Some countries have been more active than others in examining the appropriateness of their regulatory frameworks for dealing with the safety of nanotechnologies. As a consequence, different approaches have been adopted in regulating nano-based products in the agri/feed/food sector.
The analysis shows that the EU and Switzerland are the only jurisdictions that have introduced binding nanomaterial definitions and/or specific provisions for some nanotechnology applications. An example would be the EU labelling requirements for food ingredients in the form of ‘engineered nanomaterials’. Other regions in the world regulate nanomaterials more implicitly, mainly by building on non-legally binding guidance and standards for industry.
The overview of existing legislation and guidance documents, published as an open access article in the journal Regulatory Toxicology and Pharmacology, is based on information gathered by the JRC, RIKILT-Wageningen and the European Food Safety Authority (EFSA) through literature research and a dedicated survey.
Here’s a link to and a citation for the paper,
Regulatory aspects of nanotechnology in the agri/feed/food sector in EU and non-EU countries by Valeria Amenta, Karin Aschberger, Maria Arena, Hans Bouwmeester, Filipa Botelho Moniz, Puck Brandhoff, Stefania Gottardo, Hans J.P. Marvin, Agnieszka Mech, Laia Quiros Pesudo, Hubert Rauscher, Reinhilde Schoonjans, Maria Vittoria Vettori, Stefan Weigel, Ruud J. Peters. Regulatory Toxicology and Pharmacology Volume 73, Issue 1, October 2015, Pages 463–476 doi:10.1016/j.yrtph.2015.06.016
This is the most inclusive overview I’ve seen yet. The authors cover Asian countries, South America, Africa, and the Middle East, as well as the usual suspects in Europe and North America.
Given I’m a Canadian blogger I feel obliged to include their summary of the Canadian situation (Note: Links have been removed),
The Canadian Food Inspection Agency (CFIA) and the Public Health Agency of Canada (PHAC), which have recently joined the Health Portfolio of Health Canada, are responsible for food regulation in Canada. No specific regulation for nanotechnology-based food products is available, but such products are regulated under the existing legislative and regulatory frameworks. In October 2011 Health Canada published a “Policy Statement on Health Canada’s Working Definition for Nanomaterials” (Health Canada, 2011); the document provides a (working) definition of NM which is focused, similarly to the US definition, on the nanoscale dimensions, or on the nanoscale properties/phenomena of the material (see Annex I). As regards general chemicals regulation in Canada, the New Substances (NS) program must ensure that new substances, including substances that are at the nano-scale (i.e. NMs), are assessed in order to determine their toxicological profile (Environment Canada, 2014). The approach applied involves a pre-manufacture and pre-import notification and assessment process. In 2014, the New Substances program published a guidance aimed at increasing clarity on which NMs are subject to assessment in Canada (Environment Canada, 2014).
Canadian and US regulatory agencies are working towards harmonising the regulatory approaches for NMs under the US-Canada Regulatory Cooperation Council (RCC) Nanotechnology Initiative. Canada and the US recently published a Joint Forward Plan where findings and lessons learnt from the RCC Nanotechnology Initiative are discussed (Canada–United States Regulatory Cooperation Council (RCC) 2014).
Judging from their summary of the Canadian situation, with which I am familiar, they’ve done a good job. Here are a few of the countries whose regulatory instruments have not been mentioned here before (Note: Links have been removed),
In Turkey, a national or regional policy for the responsible development of nanotechnology is under development (OECD, 2013b). Nanotechnology is considered a strategic technological field and at present 32 nanotechnology research centres are working in this field. Turkey participates as an observer in the EFSA Nano Network (Section 3.6) along with the other EU candidate countries, the Former Yugoslav Republic of Macedonia and Montenegro (EFSA, 2012). The Inventory and Control of Chemicals Regulation, which represents a scaled-down version of the REACH Regulation, entered into force in Turkey in 2008 (Bergeson et al. 2010). Moreover, the Ministry of Environment and Urban Planning published a Turkish version of the CLP Regulation (known as SEA in Turkish) to enter into force as of 1st June 2016 (Intertek).
The Russian legislation on food safety is based on regulatory documents such as the Sanitary Rules and Regulations (“SanPiN”), but also on national standards (known as “GOST”) and technical regulations (Office of Agricultural Affairs of the USDA, 2009). The Russian policy on nanotechnology in the industrial sector has been defined in some national programmes (e.g. Nanotechnology Industry Development Program) and a Russian Corporation of Nanotechnologies was established in 2007. As reported by FAO/WHO (FAO/WHO, 2013), 17 documents which deal with the risk assessment of NMs in the food sector were released within such federal programs. Safe reference levels for the impact of nanoparticles on the human body were developed and implemented in the sanitary regulation for the nanoforms of silver and titanium dioxide, and single-wall carbon nanotubes (FAO/WHO, 2013).
Other countries included in this overview are Brazil, India, Japan, China, Malaysia, Iran, Thailand, Taiwan, Australia, New Zealand, US, South Africa, South Korea, Switzerland, and the countries of the European Union.
Before launching into the latest on a new technique for carbon capture, it might be useful to provide some context. Arthur Neslen’s March 23, 2015 opinion piece outlines the issues and notes that one Norwegian Prime Minister resigned when coalition government partners attempted to build gas power plants without carbon capture and storage (CCS) facilities (Note: A link has been removed),
At least 10 European power plants were supposed to begin piping their carbon emissions into underground tombs this year, rather than letting them twirl into the sky. None has done so.
Missed deadlines, squandered opportunities, spiralling costs and green protests have plagued the development of carbon capture and storage (CCS) technology since Statoil proposed the concept more than two decades ago.
But in the face of desperate global warming projections the CCS dream still unites Canadian tar sands rollers with the UN’s Intergovernmental Panel on Climate Change (IPCC), and Shell with some environmentalists.
With 2bn people in the developing world expected to hook up to the world’s dirty energy system by 2050, CCS holds out the tantalising prospect of fossil-led growth that does not fry the planet.
“With CCS in the mix, we can decarbonise in a cost-effective manner and still continue to produce, to some extent, our fossil fuels,” Tim Bertels, Shell’s Global CCS portfolio manager, told the Guardian. “You don’t need to divest in fossil fuels, you need to decarbonise them.”
The technology has been gifted “a very significant fraction” of the billions of dollars earmarked by Shell for clean energy research, he added. But the firm is also a vocal supporter of public funding for CCS from carbon markets, as are almost all players in the industry.
Enthusiasm for this plan is not universal (from Neslen’s opinion piece),
Many environmentalists see the idea as a non-starter because it locks high emitting power plants into future energy systems, and obstructs funding for the cheaper renewables revolution already underway. “CCS is completely irrelevant,” said Jeremy Rifkin, a noted author and climate adviser to several governments. “I don’t even think about it. It’s not going to happen. It’s not commercially available and it won’t be commercially viable.”
I recommend reading Neslen’s piece for anyone who’s not already well versed on the issues. He uses Norway as a case study and sums up the overall CCS political situation this way,
In many ways, the debate over carbon capture and storage is a struggle between two competing visions of the societal transformation needed to avert climate disaster. One vision represents the enlightened self-interest of a contributor to the problem. The other cannot succeed without eliminating its highly entrenched opponent. The battle is keenly fought by technological optimists on both sides. But if Norway’s fractious CCS experience is any indicator, it will be decided on the ground by the grimmest of realities.
On that note of urgency, here’s some research on carbon dioxide (CO2) or, more specifically, carbon capture and utilization technology, from an Aug. 19, 2015 news item on Nanowerk,
Finding a technology to shift carbon dioxide (CO2), the most abundant anthropogenic greenhouse gas, from a climate change problem to a valuable commodity has long been a dream of many scientists and government officials. Now, a team of chemists says they have developed a technology to economically convert atmospheric CO2 directly into highly valued carbon nanofibers for industrial and consumer products.
The team will present brand-new research on this new CO2 capture and utilization technology at the 250th National Meeting & Exposition of the American Chemical Society (ACS). ACS is the world’s largest scientific society. The national meeting, which takes place here through Thursday, features more than 9,000 presentations on a wide range of science topics.
“We have found a way to use atmospheric CO2 to produce high-yield carbon nanofibers,” says Stuart Licht, Ph.D., who leads a research team at George Washington University. “Such nanofibers are used to make strong carbon composites, such as those used in the Boeing Dreamliner, as well as in high-end sports equipment, wind turbine blades and a host of other products.”
The researchers had previously reported making fertilizer and cement without emitting CO2. Now, the team, which includes postdoctoral fellow Jiawen Ren, Ph.D., and graduate student Jessica Stuart, says their research could shift CO2 from a global-warming problem to a feedstock for the manufacture of in-demand carbon nanofibers.
Licht calls his approach “diamonds from the sky.” That refers to carbon being the material that diamonds are made of, and also hints at the high value of the products, such as the carbon nanofibers that can be made from atmospheric carbon and oxygen.
Because of its efficiency, this low-energy process can be run using only a few volts of electricity, sunlight and a whole lot of carbon dioxide. At its root, the system uses electrolytic syntheses to make the nanofibers. CO2 is broken down in a high-temperature electrolytic bath of molten carbonates at 1,380 degrees F (750 degrees C). Atmospheric air is added to an electrolytic cell. Once there, the CO2 dissolves when subjected to the heat and direct current through electrodes of nickel and steel. The carbon nanofibers build up on the steel electrode, where they can be removed, Licht says.
To power the syntheses, heat and electricity are produced through a hybrid and extremely efficient concentrating solar-energy system. The system focuses the sun’s rays on a photovoltaic solar cell to generate electricity and on a second system to generate heat and thermal energy, which raises the temperature of the electrolytic cell.
Licht estimates electrical energy costs of this “solar thermal electrochemical process” to be around $1,000 per ton of carbon nanofiber product, which means the cost of running the system is hundreds of times less than the value of product output.
“We calculate that with a physical area less than 10 percent the size of the Sahara Desert, our process could remove enough CO2 to decrease atmospheric levels to those of the pre-industrial revolution within 10 years,” he says. [emphasis mine]
At this time, the system is experimental, and Licht’s biggest challenge will be to ramp up the process and gain experience to make consistently sized nanofibers. “We are scaling up quickly,” he adds, “and soon should be in range of making tens of grams of nanofibers an hour.”
Licht explains that one advance the group has recently achieved is the ability to synthesize carbon fibers using even less energy than when the process was initially developed. “Carbon nanofiber growth can occur at less than 1 volt at 750 degrees C, which for example is much less than the 3-5 volts used in the 1,000 degree C industrial formation of aluminum,” he says.
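Licht’s figures can be sanity-checked with a back-of-the-envelope Faraday’s-law calculation. This is only a sketch: the 4-electron reduction of CO2 to carbon, the 1.0 V cell voltage, and the $0.10/kWh electricity price are my assumptions, not numbers from the researchers.

```python
# Rough check of the ~$1,000-per-ton electrical-energy estimate for
# electrolytic carbon nanofiber growth. Assumptions (mine, not the
# researchers'): CO2 -> C is a 4-electron reduction, the cell runs at
# 1.0 V, and electricity costs $0.10 per kWh.
FARADAY = 96485            # coulombs per mole of electrons
ELECTRONS_PER_CARBON = 4   # C(+4) in CO2 reduced to C(0) in the fiber
VOLTS = 1.0                # "less than 1 volt at 750 degrees C"
MOLAR_MASS_C = 12.0        # grams of carbon per mole

# Electrical energy (joules) needed per gram of carbon deposited
joules_per_gram = ELECTRONS_PER_CARBON * FARADAY * VOLTS / MOLAR_MASS_C

# Scale to one metric ton (1e6 g) and convert joules to kWh (3.6e6 J/kWh)
kwh_per_ton = joules_per_gram * 1e6 / 3.6e6
cost_per_ton = kwh_per_ton * 0.10  # assumed $/kWh

print(f"~{kwh_per_ton:.0f} kWh, or about ${cost_per_ton:.0f}, per ton of carbon")
```

With those assumed numbers the electricity works out to roughly 8,900 kWh, or on the order of $900, per ton of carbon, which is consistent with the ~$1,000-per-ton estimate quoted above.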
A low-energy approach that cleans up the air by converting greenhouse gases into useful materials, and does it quickly, is incredibly exciting. Of course, there are a few questions to be asked. Are the research outcomes reproducible by other teams? Licht notes the team is scaling the technology up, but how soon can the process reach industrial strength?
For consumers searching for just the right sunblock this summer, the options can be overwhelming. But scientists are now turning to the natural sunscreen of algae — which is also found in fish slime — to make a novel kind of shield against the sun’s rays that could protect not only people, but also textiles and outdoor materials. …
Existing sunblock lotions typically work by either absorbing ultraviolet rays or physically blocking them. A variety of synthetic and natural compounds can accomplish this. But most commercial options have limited efficiency, pose risks to the environment and human health or are not stable. To address these shortcomings, Vincent Bulone, Susana C. M. Fernandes and colleagues looked to nature for inspiration.
The researchers used algae’s natural sunscreen molecules, which can also be found in reef fish mucus and microorganisms, and combined them with chitosan, a biopolymer from crustacean shells. Testing showed their materials were biocompatible, stood up well in heat and light, and absorbed both ultraviolet A and ultraviolet B radiation with high efficiency.
The authors acknowledge funding from the European Commission Marie Curie Intra-European Fellowship, the KTH Advanced Carbohydrate Materials Consortium (CarboMat), the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning (FORMAS) and the Basque Government Department of Education.
As engineered nanomaterials increasingly find their way into commercial products, researchers who study the potential environmental or health impacts of those materials face a growing challenge to accurately measure and characterize them. These challenges affect measurements of basic chemical and physical properties as well as toxicology assessments.
To help nano-EHS (Environment, Health and Safety) researchers navigate the often complex measurement issues, the National Institute of Standards and Technology (NIST) has launched a new website devoted to NIST-developed (or co-developed) and validated laboratory protocols for nano-EHS studies.
A July 1, 2015 NIST news release on EurekAlert, which originated the news item, offers more details about the information available through the web portal,
In common lab parlance, a “protocol” is a specific step-by-step procedure used to carry out a measurement or related activity, including all the chemicals and equipment required. Any peer-reviewed journal article reporting an experimental result has a “methods” section where the authors document their measurement protocol, but those descriptions are necessarily brief and condensed, and may lack validation of any sort. By comparison, on NIST’s new Protocols for Nano-EHS website the protocols are extraordinarily detailed. For ease of citation, they’re published individually–each with its own unique digital object identifier (DOI).
The protocols detail not only what you should do, but why and what could go wrong. The specificity is important, according to program director Debra Kaiser, because of the inherent difficulty of making reliable measurements of such small materials. “Often, if you do something seemingly trivial–use a different size pipette, for example–you get a different result. Our goal is to help people get data they can reproduce, data they can trust.”
A typical caution, for example, notes that if you’re using an instrument that measures the size of nanoparticles in a solution by how they scatter light, it’s important also to measure the transmission spectrum of the particles if they’re colored, because if they happen to absorb light strongly at the same frequency as your instrument, the result may be biased.
“These measurements are difficult because of the small size involved,” explains Kaiser. “Very few new instruments have been developed for this. People are adapting existing instruments and methods for the job, but often those instruments are being operated close to their limits and the methods were developed for chemicals or bulk materials and not for nanomaterials.”
“For example, NIST offers a reference material for measuring the size of gold nanoparticles in solution, and we report six different sizes depending on the instrument you use. We do it that way because different instruments sense different aspects of a nanoparticle’s dimensions. An electron microscope is telling you something different than a dynamic light scattering instrument, and the researcher needs to understand that.”
The nano-EHS protocols offered by the NIST site, Kaiser says, could form the basis for consensus-based, formal test methods such as those published by ASTM and ISO.
NIST’s nano-EHS protocol site currently lists 12 different protocols in three categories: sample preparation, physico-chemical measurements and toxicological measurements. More protocols will be added as they are validated and documented. Suggestions for additional protocols are welcome at email@example.com.
The next item concerns European nanomedicine.
CEA-LETI and Europe’s first nanomedicine characterization laboratory
A July 1, 2015 news item on Nanotechnology Now describes the partnership which has led to launch of the new laboratory,
CEA-Leti today announced the launch of the European Nano-Characterisation Laboratory (EU-NCL) funded by the European Union’s Horizon 2020 research and innovation programme. Its main objective is to reach a level of international excellence in nanomedicine characterisation for medical indications like cancer, diabetes, inflammatory diseases or infections, and make it accessible to all organisations developing candidate nanomedicines prior to their submission to regulatory agencies to get the approval for clinical trials and, later, marketing authorization.
“As reported in the ETPN White Paper, there is a lack of infrastructure to support nanotechnology-based innovation in healthcare,” said Patrick Boisseau, head of business development in nanomedicine at CEA-Leti and chairman of the European Technology Platform Nanomedicine (ETPN). “Nanocharacterisation is the first bottleneck encountered by companies developing nanotherapeutics. The EU-NCL project is of most importance for the nanomedicine community, as it will contribute to the competitiveness of nanomedicine products and tools and facilitate regulation in Europe.”
EU-NCL is partnered with the sole international reference facility, the Nanotechnology Characterization Lab of the National Cancer Institute in the U.S. (US-NCL), to get faster international harmonization of analytical protocols.
“We are excited to be part of this cooperative arrangement between Europe and the U.S.,” said Scott E. McNeil, director of U.S. NCL. “We hope this collaboration will help standardize regulatory requirements for clinical evaluation and marketing of nanomedicines internationally. This venture holds great promise for using nanotechnologies to overcome cancer and other major diseases around the world.”
The «European Nanomedicine Characterization Laboratory» (EU-NCL), which was launched on 1 June 2015, has a clear-cut goal: to help bring more nanomedicine candidates into the clinic and on the market, for the benefit of patients and the European pharmaceutical industry. EU-NCL is also closely connected to national medicine agencies and the European Medicines Agency to continuously adapt its analytical services to requests of regulators. EU-NCL is designed, organized and operated according to the highest EU regulatory and quality standards.
Nine partners from eight countries
EU-NCL, which is funded by the EU for a four-year period with nearly 5 million Euros, brings together nine partners from eight countries: CEA-Tech in Leti and Liten, France, the coordinator of the project; the Joint Research Centre of the European Commission in Ispra, Italy; European Research Services GmbH in Münster, Germany; Leidos Biomedical Research, Inc. in Frederick, USA; Trinity College in Dublin, Ireland; SINTEF in Oslo, Norway; the University of Liverpool in the UK; Empa, the Swiss Federal Laboratories for Materials Science and Technology in St. Gallen, Switzerland; Westfälische Wilhelms-Universität (WWU) and Gesellschaft für Bioanalytik, both in Münster, Germany. Together, the partnering institutions will provide a trans-disciplinary testing infrastructure covering a comprehensive set of preclinical characterization assays (physical, chemical, in vitro and in vivo biological testing), which will allow researchers to fully comprehend the biodistribution, metabolism, pharmacokinetics, safety profiles and immunological effects of their medicinal nano-products. The project will also foster the use and deployment of standard operating procedures (SOPs), benchmark materials and quality management for the preclinical characterization of medicinal nano-products. Yet another objective is to promote intersectoral and interdisciplinary communication among key drivers of innovation, especially between developers and regulatory agencies.
The goal: to bring safe and efficient nano-therapeutics faster to the patient
Within EU-NCL, six analytical facilities will offer transnational access to their existing analytical services for public and private developers, and will also develop new or improved analytical assays to keep EU-NCL at the cutting edge of nanomedicine characterization. A complementary set of networking activities will enable EU-NCL to deliver to European academic or industrial scientists the high-quality analytical services they require for accelerating the industrial development of their candidate nanomedicines. The Empa team of Peter Wick at the «Particles-Biology Interactions» lab will be in charge of the quality management of all analytical methods, a key task to guarantee the best possible reproducibility and comparability of the data between the various analytical labs within the consortium. «EU-NCL supports our research activities in developing innovative and safe nanomaterials for healthcare within an international network, which will actively shape future standards in nanomedicine and strengthen Empa as an enabler to facilitate the transfer of novel nanomedicines from bench to bedside», says Wick.
Getting condiments out of their bottles should be a lot easier in several European countries in the near future. A June 30, 2015 news item on Nanowerk describes the technology and the business deal (Note: A link has been removed),
The days of wasting condiments — and other products — that stick stubbornly to the sides of their bottles may be gone, thanks to MIT [Massachusetts Institute of Technology] spinout LiquiGlide, which has licensed its nonstick coating to a major consumer-goods company.
Developed in 2009 by MIT’s Kripa Varanasi and David Smith, LiquiGlide is a liquid-impregnated coating that acts as a slippery barrier between a surface and a viscous liquid. Applied inside a condiment bottle, for instance, the coating clings permanently to its sides, while allowing the condiment to glide off completely, with no residue.
In 2012, amidst a flurry of media attention following LiquiGlide’s entry in MIT’s $100K Entrepreneurship Competition, Smith and Varanasi founded the startup — with help from the Institute — to commercialize the coating.
Today [June 30, 2015], Norwegian consumer-goods producer Orkla has signed a licensing agreement to use LiquiGlide’s coating for mayonnaise products sold in Germany, Scandinavia, and several other European nations. This comes on the heels of another licensing deal, with Elmer’s [Elmer’s Glue & Adhesives], announced in March.
But this is only the beginning, says Varanasi, an associate professor of mechanical engineering who is now on LiquiGlide’s board of directors and chief science advisor. The startup, which just entered the consumer-goods market, is courting deals with numerous producers of foods, beauty supplies, and household products. “Our coatings can work with a whole range of products, because we can tailor each coating to meet the specific requirements of each application,” Varanasi says.
Apart from providing savings and convenience, LiquiGlide aims to reduce the surprising amount of wasted products — especially food — that stick to container sides and get tossed. For instance, in 2009 Consumer Reports found that up to 15 percent of bottled condiments are ultimately thrown away. Keeping bottles clean, Varanasi adds, could also drastically cut the use of water and energy, as well as the costs associated with rinsing bottles before recycling. “It has huge potential in terms of critical sustainability,” he says.
Varanasi says LiquiGlide aims next to tackle buildup in oil and gas pipelines, which can cause corrosion and clogs that reduce flow. [emphasis mine] Future uses, he adds, could include coatings for medical devices such as catheters, deicing roofs and airplane wings, and improving manufacturing and process efficiency. “Interfaces are ubiquitous,” he says. “We want to be everywhere.”
The news release goes on to describe the research process in more detail and offers a plug for MIT’s innovation efforts,
LiquiGlide was originally developed while Smith worked on his graduate research in Varanasi’s research group. Smith and Varanasi were interested in preventing ice buildup on airplane surfaces and methane hydrate buildup in oil and gas pipelines.
Some initial work was on superhydrophobic surfaces, which trap pockets of air and naturally repel water. But both researchers found that these surfaces don’t, in fact, shed every bit of liquid. During phase transitions — when vapor turns to liquid, for instance — water droplets condense within microscopic gaps on surfaces, and steadily accumulate. This leads to loss of anti-icing properties of the surface. “Something that is nonwetting to macroscopic drops does not remain nonwetting for microscopic drops,” Varanasi says.
Inspired by the work of researcher David Quéré, of ESPCI in Paris, on slippery “hemisolid-hemiliquid” surfaces, Varanasi and Smith invented permanently wet “liquid-impregnated surfaces” — coatings that don’t have such microscopic gaps. The coatings consist of textured solid material that traps a liquid lubricant through capillary and intermolecular forces. The coating wicks through the textured solid surface, clinging permanently under the product, allowing the product to slide off the surface easily; other materials can’t enter the gaps or displace the coating. “One can say that it’s a self-lubricating surface,” Varanasi says.
Mixing and matching the materials, however, is a complicated process, Varanasi says. Liquid components of the coating, for instance, must be compatible with the chemical and physical properties of the sticky product, and generally immiscible. The solid material must form a textured structure while adhering to the container. And the coating can’t spoil the contents: Foodstuffs, for instance, require safe, edible materials, such as plants and insoluble fibers.
To help choose ingredients, Smith and Varanasi developed the basic scientific principles and algorithms that calculate how the liquid and solid coating materials, the product, and the geometry of the surface structures will all interact, in order to find the optimal “recipe.”
Today, LiquiGlide develops coatings for clients and licenses the recipes to them. Included are instructions that detail the materials, equipment, and process required to create and apply the coating for their specific needs. “The state of the coating we end up with depends entirely on the properties of the product you want to slide over the surface,” says Smith, now LiquiGlide’s CEO.
Having researched materials for hundreds of different viscous liquids over the years — from peanut butter to crude oil to blood — LiquiGlide also has a database of optimal ingredients for its algorithms to pull from when customizing recipes. “Given any new product you want LiquiGlide for, we can zero in on a solution that meets all requirements necessary,” Varanasi says.
MIT: A lab for entrepreneurs
For years, Smith and Varanasi toyed around with commercial applications for LiquiGlide. But in 2012, with help from MIT’s entrepreneurial ecosystem, LiquiGlide went from lab to market in a matter of months.
Initially the idea was to bring coatings to the oil and gas industry. But one day, in early 2012, Varanasi saw his wife struggling to pour honey from its container. “And I thought, ‘We have a solution for that,’” Varanasi says.
The focus then became consumer packaging. Smith and Varanasi took the idea through several entrepreneurship classes — such as 6.933 (Entrepreneurship in Engineering: The Founder’s Journey) — and MIT’s Venture Mentoring Service and Innovation Teams, where student teams research the commercial potential of MIT technologies.
“I did pretty much every last thing you could do,” Smith says. “Because we have such a brilliant network here at MIT, I thought I should take advantage of it.”
That May, Smith, Varanasi, and several MIT students entered LiquiGlide in the MIT $100K Entrepreneurship Competition, earning the Audience Choice Award — and the national spotlight. A video of ketchup sliding out of a LiquiGlide-coated bottle went viral. Numerous media outlets picked up the story, while hundreds of companies reached out to Varanasi to buy the coating. “My phone didn’t stop ringing, my website crashed for a month,” Varanasi says. “It just went crazy.”
That summer, Smith and Varanasi took their startup idea to MIT’s Global Founders’ Skills Accelerator program, which introduced them to a robust network of local investors and helped them build a solid business plan. Soon after, they raised money from family and friends, and won $100,000 at the MassChallenge Entrepreneurship Competition.
When LiquiGlide Inc. launched in August 2012, clients were already knocking down the door. The startup chose a select number to pay for the development and testing of the coating for its products. Within a year, LiquiGlide was cash-flow positive, and had grown from three to 18 employees in its current Cambridge headquarters.
Looking back, Varanasi attributes much of LiquiGlide’s success to MIT’s innovation-based ecosystem, which promotes rapid prototyping for the marketplace through experimentation and collaboration. This ecosystem includes the Deshpande Center for Technological Innovation, the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service, and the Technology Licensing Office, among other initiatives. “Having a lab where we could think about … translating the technology to real-world applications, and having this ability to meet people, and bounce ideas … that whole MIT ecosystem was key,” Varanasi says.
I had thought the EU (European Union) offered more roadblocks to marketing nanotechnology-enabled products used in food packaging than the US. If anyone knows why a US company would market its products in Europe first I would love to find out.