Tag Archives: EU

Graphene-based neural probes

I have two news bits (dated almost one month apart) about the use of graphene in neural probes, one from the European Union and the other from Korea.

European Union (EU)

This work is being announced by the Graphene Flagship of the European Commission (the EU’s executive arm). The Flagship is one of two mega-funding projects announced in 2013, each receiving 1B Euros over ten years (the other being the Human Brain Project).

According to a March 27, 2017 news item on ScienceDaily, researchers have developed a graphene-based neural probe that has been tested on rats,

Measuring brain activity with precision is essential to developing further understanding of diseases such as epilepsy and disorders that affect brain function and motor control. Neural probes with high spatial resolution are needed for both recording and stimulating specific functional areas of the brain. Now, researchers from the Graphene Flagship have developed a new device for recording brain activity in high resolution while maintaining an excellent signal-to-noise ratio (SNR). Based on graphene field-effect transistors, the flexible devices open up new possibilities for the development of functional implants and interfaces.

The research, published in 2D Materials, was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich; Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS; Spain), Spanish National Research Council (CSIC; Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN; Spain) and the Catalan Institute of Nanoscience and Nanotechnology (ICN2; Spain).

Caption: Graphene transistors integrated in a flexible neural probe enable electrical signals from neurons to be measured with high accuracy and density. Inset: The tip of the probe contains 16 flexible graphene transistors. Credit: ICN2

A March 27, 2017 Graphene Flagship press release on EurekAlert, which originated the news item, describes the work in more detail,

The devices were used to record the large signals generated by pre-epileptic activity in rats, as well as the smaller levels of brain activity during sleep and in response to visual light stimulation. These types of activities lead to much smaller electrical signals, and are at the level of typical brain activity. Neural activity is detected through the highly localised electric fields generated when neurons fire, so densely packed, ultra-small measuring devices are important for accurate brain readings.

The neural probes are placed directly on the surface of the brain, so safety is of paramount importance for the development of graphene-based neural implant devices. Importantly, the researchers determined that the graphene-based probes are non-toxic, and did not induce any significant inflammation.

Devices implanted in the brain as neural prosthesis for therapeutic brain stimulation technologies and interfaces for sensory and motor devices, such as artificial limbs, are an important goal for improving quality of life for patients. This work represents a first step towards the use of graphene in research as well as clinical neural devices, showing that graphene-based technologies can deliver the high resolution and high SNR needed for these applications.

First author Benno Blaschke (TU Munich) said “Graphene is one of the few materials that allows recording in a transistor configuration and simultaneously complies with all other requirements for neural probes such as flexibility, biocompatibility and chemical stability. Although graphene is ideally suited for flexible electronics, it was a great challenge to transfer our fabrication process from rigid substrates to flexible ones. The next step is to optimize the wafer-scale fabrication process and improve device flexibility and stability.”

Jose Antonio Garrido (ICN2) led the research. He said “Mechanical compliance is an important requirement for safe neural probes and interfaces. Currently, the focus is on ultra-soft materials that can adapt conformally to the brain surface. Graphene neural interfaces have already shown great potential, but we have to improve on the yield and homogeneity of the device production in order to advance towards a real technology. Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated with medical devices such as safety, biocompatibility, etc.”

Caption: The graphene-based neural probes were used to detect rats’ responses to visual stimulation, as well as neural signals during sleep. Both types of signals are small, and typically difficult to measure. Credit: ICN2

Here’s a link to and a citation for the paper,

Mapping brain activity with flexible graphene micro-transistors by Benno M Blaschke, Núria Tort-Colet, Anton Guimerà-Brunet, Julia Weinert, Lionel Rousseau, Axel Heimann, Simon Drieschner, Oliver Kempski, Rosa Villa, Maria V Sanchez-Vives. 2D Materials, Volume 4, Number 2. DOI: 10.1088/2053-1583/aa5eff. Published 24 February 2017.

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

Korea

While this research from Korea was published more recently, the probe itself has not yet been subjected to in vivo (animal) testing. From an April 19, 2017 news item on ScienceDaily,

Electrodes placed in the brain record neural activity, and can help treat neural diseases like Parkinson’s and epilepsy. Interest is also growing in developing better brain-machine interfaces, in which electrodes can help control prosthetic limbs. Progress in these fields is hindered by limitations in electrodes, which are relatively stiff and can damage soft brain tissue.

Designing smaller, gentler electrodes that still pick up brain signals is a challenge because brain signals are so weak. Typically, the smaller the electrode, the harder it is to detect a signal. However, a team from the Daegu Gyeongbuk Institute of Science & Technology [DGIST] in Korea developed new probes that are small, flexible and read brain signals clearly.

This is a pretty interesting way to illustrate the research,

Caption: Graphene and gold make a better brain probe. Credit: DGIST

An April 19, 2017 DGIST press release (also on EurekAlert), which originated the news item, expands on the theme (Note: A link has been removed),

The probe consists of an electrode, which records the brain signal. The signal travels down an interconnection line to a connector, which transfers the signal to machines measuring and analysing the signals.

The electrode starts with a thin gold base. Attached to the base are tiny zinc oxide nanowires, which are coated in a thin layer of gold, and then a layer of a conducting polymer called PEDOT. These combined materials increase the electrode’s effective surface area, conductivity, and strength, while still maintaining flexibility and compatibility with soft tissue.

Packing several long, thin nanowires together onto one probe enables the scientists to make a smaller electrode that retains the same effective surface area of a larger, flat electrode. This means the electrode can shrink, but not reduce signal detection. The interconnection line is made of a mix of graphene and gold. Graphene is flexible and gold is an excellent conductor. The researchers tested the probe and found it read rat brain signals very clearly, much better than a standard flat, gold electrode.
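That effective-surface-area argument lends itself to a quick back-of-the-envelope check. The Python sketch below uses entirely hypothetical dimensions (none are taken from the paper) to show how cylindrical nanowires multiply the effective area of a fixed electrode footprint:

```python
import math

# Hypothetical, illustrative dimensions -- not values from the DGIST paper.
electrode_area_um2 = 50 * 50   # flat electrode footprint: 50 x 50 micrometres
wire_diameter_um = 0.1         # assumed ZnO nanowire diameter (100 nm)
wire_length_um = 2.0           # assumed nanowire length
wires_per_um2 = 20             # assumed packing density

# Sidewall area contributed by one nanowire, modelled as a cylinder (tip ignored)
per_wire_area = math.pi * wire_diameter_um * wire_length_um

# Effective area = flat footprint + total nanowire sidewall area
effective_area = electrode_area_um2 * (1 + wires_per_um2 * per_wire_area)

gain = effective_area / electrode_area_um2
print(f"Effective-area gain over a flat electrode: {gain:.1f}x")
```

With these assumed numbers the footprint’s effective area grows by more than an order of magnitude, which is the sense in which a smaller nanowire-decorated electrode can match the surface area of a larger flat one.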

“Our graphene and nanowires-based flexible electrode array can be useful for monitoring and recording the functions of the nervous system, or to deliver electrical signals to the brain,” the researchers conclude in their paper recently published in the journal ACS Applied Materials and Interfaces.

The probe requires further clinical tests before widespread commercialization. The researchers are also interested in developing a wireless version to make it more convenient for a variety of applications.

Here’s a link to and a citation for the paper,

Enhancement of Interface Characteristics of Neural Probe Based on Graphene, ZnO Nanowires, and Conducting Polymer PEDOT by Mingyu Ryu, Jae Hoon Yang, Yumi Ahn, Minkyung Sim, Kyung Hwa Lee, Kyungsoo Kim, Taeju Lee, Seung-Jun Yoo, So Yeun Kim, Cheil Moon, Minkyu Je, Ji-Woong Choi, Youngu Lee, and Jae Eun Jang. ACS Appl. Mater. Interfaces, 2017, 9 (12), pp 10577–10586. DOI: 10.1021/acsami.7b02975. Publication Date (Web): March 7, 2017.

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

OECD (Organization for Economic Cooperation and Development) Dossiers on Nanomaterials Are of “Little to No Value” for Assessing Risk?

The announcement that a significant portion of the OECD’s (Organization for Economic Cooperation and Development) dossiers on 11 nanomaterials has next to no value for assessing risk seems a harsh judgment from the Center for International Environmental Law (CIEL). From a March 1, 2017 posting by Lynn L. Bergeson on Nanotechnology Now,

On February 23, 2017, the Center for International Environmental Law (CIEL) issued a press release announcing a new report, commissioned by CIEL, the European Environmental Citizens’ Organization for Standardization (ECOS), and the Oeko-Institute, that “shows that most of the information made available by the Sponsorship Testing Programme of the Organisation for Economic Co-operation and Development (OECD) is of little to no value for the regulatory risk assessment of nanomaterials.”

Here’s more from the Feb. 23, 2017 CIEL press release, which originated the posting,

The study published today [Feb. 23, 2017] was delivered by the Institute of Occupational Medicine (IOM) based in Singapore. IOM screened the 11,500 pages of raw data of the OECD dossiers on 11 nanomaterials, and analysed all characterisation and toxicity data on three specific nanomaterials – fullerenes, single-walled carbon nanotubes, and zinc oxide.

“EU policy makers and industry are using the existence of the data to dispel concerns about the potential health and environmental risks of manufactured nanomaterials,” said David Azoulay, Senior Attorney for CIEL. “When you analyse the data, in most cases, it is impossible to assess what material was actually tested. The fact that data exists about a nanomaterial does not mean that the information is reliable to assess the hazards or risks of the material.”

The dossiers were published in 2015 by the OECD’s Working Party on Manufactured Nanomaterials (WPMN), which has yet to draw conclusions on the data quality. Despite this missing analysis, some stakeholders participating in EU policy-making – notably the European Chemicals Agency (ECHA) and the European Commission’s Joint Research Centre – have presented the dossiers as containing information on nano-specific human health and environmental impacts. Industry federations and individual companies have taken this a step further, emphasizing that there is enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

“Our study shows these claims that there is sufficient data available on nanomaterials are not only false, but dangerously so,” said Doreen Fedrigo, Senior Policy Officer of ECOS. “The lack of nano-specific information in the dossiers means that the results of the tests cannot be used as evidence of no ‘nano-effect’ of the tested material. This information is crucial for regulators and producers who need to know the hazard profile of these materials. Analysing the dossiers has shown that legislation detailing nano-specific information requirements is crucial for the regulatory risk assessment of nanomaterials.”

The report provides important recommendations on future steps in the governance of nanomaterials. “Based on our analysis, serious gaps in current dossiers must be filled in with characterisation information, preparation protocols, and exposure data,” said Andreas Hermann of the Oeko-Institute. “Using these dossiers as they are and ignoring these recommendations would mean making decisions on the safety of nanomaterials based on faulty and incomplete data. Our health and environment requires more from producers and regulators.”

CIEL has an Analysis of OECD WPMN Dossiers Regarding the Availability of Data to Evaluate and Regulate Risk (Dec 2016) webpage which provides more information about the dossiers and about the research into the dossiers, and includes links to the report, the executive summary, and the dataset,

The Sponsorship Testing Programme of the Working Party on Manufactured Nanomaterials (WPMN) of the Organisation for Economic Co-operation and Development (OECD) started in 2007 with the aim to test a selection of 13 representative nanomaterials for many endpoints. The main objectives of the programme were to better understand what information on intrinsic properties of the nanomaterials might be relevant for exposure and hazards assessment and assess the validity of OECD chemicals Test Guidelines for nanomaterials. The testing programme concluded in 2015 with the publication of dossiers on 11 nanomaterials: 11,500 pages of raw data to be analysed and interpreted.

The WPMN has not drawn conclusions on the data quality, but some stakeholders participating in EU policy-making – notably the European Chemicals Agency and the European Commission’s Joint Research Centre – presented the dossiers as containing much scientific information that provided a better understanding of their nano-specific human health and environmental impacts. Industry federations and individual companies echoed the views, highlighting that there was enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

As for the OECD, it concluded, even before the publication of the dossiers, that “many of the existing guidelines are also suitable for the safety assessment of nanomaterials” and “the outcomes (of the sponsorship programme) will provide useful information on the ‘intrinsic properties’ of nanomaterials.”

The Center for International Environmental Law (CIEL), the European Citizens’ Organisation for Standardisation (ECOS) and the Öko-Institut commissioned scientific analysis of these dossiers to assess the relevance of the data for regulatory risk assessment.

The resulting report, Analysis of OECD WPMN dossiers regarding the availability of data to evaluate and regulate risk, provides insights illustrating how most of the information made available by the sponsorship programme is of little to no value for identifying hazards or assessing risks due to nanomaterials.

The analysis shows that:

  • Most studies and documents in the dossiers contain insufficient characterisation data about the specific nanomaterial addressed (size, particle distribution, surface shape, etc.), making it impossible to assess what material was actually tested.
  • This makes it impossible to make any firm statements regarding the nano-specificity of the hazard data published, or the relationship between observed effects and specific nano-scale properties.
  • Less than 2% of the study records provide detail on the size of the nanomaterial tested. Most studies use mass rather than number or size distribution (so not following scientifically recommended reporting practice).
  • The absence of details on the method used to prepare the nanomaterial makes it virtually impossible to correlate an identified hazard with specific nanomaterial characteristics. Since the studies do not indicate the dispersion protocols used, it is impossible to assess whether the final dispersion contained the intended mass concentration (or even the actual presence of nanomaterials in the test system), how much agglomeration may have occurred, and how the preparation protocols may have influenced the size distribution.
  • There is not enough nano-specific information in the dossiers to inform about nano-characteristics of the raw material that influence its toxicology. This information is important for regulators, and its absence makes the information in the dossiers irrelevant for developing read-across guidelines.
  • Only about half of the endpoint study records using OECD Test Guidelines (TGs) were delivered using unaltered OECD TGs, thereby respecting the Guidelines’ requirements. The reasons for modifications of the TGs used in the tests are not clear from the documentation. This includes whether the study record was modified to account for challenges related to specific nanomaterial properties or for other, non-nano-specific reasons.
  • The studies do not contain systematic testing of the influence of nano-specific characteristics on the study outcome, and they do not provide the data needed to assess the effect of nano-scale features on the Test Guidelines. Given the absence of fundamental information on nanomaterial characteristics, the dossiers do not provide evidence of the applicability of existing OECD Test Guidelines to nanomaterials.
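The reporting-practice point in the list above (dosing by mass rather than by particle number or size distribution) is worth a quick numerical illustration. In this hedged Python sketch, the density is a rough textbook value for bulk ZnO and the diameters are assumed for illustration, not taken from the dossiers; it shows that one mass concentration maps to wildly different particle counts depending on size:

```python
import math

# Illustrative only: why mass-based dosing under-specifies a nanomaterial.
rho_kg_m3 = 5600.0        # approximate bulk density of ZnO
mass_conc_kg_m3 = 1e-3    # a 1 mg/L suspension, expressed in kg/m^3

def particle_number_conc(diameter_m: float) -> float:
    """Particles per m^3 for monodisperse spheres at a fixed mass concentration."""
    particle_mass = rho_kg_m3 * (math.pi / 6.0) * diameter_m ** 3
    return mass_conc_kg_m3 / particle_mass

# The same mass reading corresponds to very different particle counts:
for d_nm in (20, 50, 200):
    n = particle_number_conc(d_nm * 1e-9)
    print(f"d = {d_nm:3d} nm -> {n:.2e} particles per m^3")
```

Because particle number scales with the inverse cube of diameter, a tenfold drop in particle size means a thousandfold more particles at the same mass concentration, which is why a mass figure alone says little about what was actually tested.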

The analysis therefore dispels several myths created by some stakeholders following publication of the dossiers and provides important perspective for the governance of nanomaterials. In particular, the analysis makes recommendations to:

  • Systematically assess the validity of existing Test Guidelines for relevance to nanomaterials
  • Develop Test Guidelines for dispersion and other test preparations
  • Define the minimum characteristics of nanomaterials that need to be reported
  • Support the build-up of an exposure database
  • Fill the gaps in current dossiers with characterisation information, preparation protocols and exposure data

Read full report.
Read executive summary.
Download full dataset.

This is not my area of expertise, and while I find the language a bit inflammatory, it is my understanding that there are great gaps in our understanding of nanomaterials, and that testing for risk assessment has been criticized for many of the reasons pointed out by CIEL, ECOS, and the Oeko-Institute.

You can find out more about CIEL here; ECOS here; and the Oeko-Institute (also known as Öko-Institute) here.

Graphene Malaysia 2016 gathering and Malaysia’s National Graphene Action Plan 2020

Malaysia is getting ready to host a graphene conference according to an Oct. 10, 2016 news item on Nanotechnology Now,

The Graphene Malaysia 2016 [Nov. 8 – 9, 2016] (www.graphenemalaysiaconf.com) is jointly organized by NanoMalaysia Berhad and Phantoms Foundation. The conference will be centered on graphene industry interaction and collaborative innovation. The event will be launched under the National Graphene Action Plan 2020 (NGAP 2020), which will generate about 9,000 jobs and an RM20 billion (US$4.86 billion) GNI impact by the year 2020.

First speakers announced:
Murni Ali (Nanomalaysia, Malaysia) | Francesco Bonaccorso (Istituto Italiano di Tecnologia, Italy) | Antonio Castro Neto (NUS, Singapore) | Antonio Correia (Phantoms Foundation, Spain)| Pedro Gomez-Romero (ICN2 (CSIC-BIST), Spain) | Shu-Jen Han (Nanoscale Science & Technology IBM T.J. Watson Research Center, USA) | Kuan-Tsae Huang (AzTrong, USA/Taiwan) | Krzysztof Koziol (FGV Cambridge Nanosystems, UK) | Taavi Madiberk (Skeleton Technologies, Estonia) | Richard Mckie (BAE Systems, UK) | Pontus Nordin (Saab AB, Saab Aeronautics, Sweden) | Elena Polyakova (Graphene Laboratories Inc., USA) | Ahmad Khairuddin Abdul Rahim (Malaysian Investment Development Authority (MIDA), Malaysia) | Adisorn Tuantranont (Thailand Organic and Printed Electronics Innovation Center, Thailand) |Archana Venugopal (Texas Instruments, USA) | Won Jong Yoo (Samsung-SKKU Graphene-2D Center (SSGC), South Korea) | Hongwei Zhu (Tsinghua University, China)

You can check for more information and deadlines in the Nanotechnology Now Oct. 10, 2016 news item.

The Graphene Malaysia 2016 conference website can be found here and Malaysia’s National Graphene Action Plan 2020, which is well written, can be found here (PDF). This portion from the executive summary offers some insight into Malaysia’s plans to launch itself into the world of high-income nations,

Malaysia’s aspiration to become a high-income nation by 2020 with improved jobs and better outputs is driving the country’s shift away from “business as usual,” and towards more innovative and high value-add products. Within this context, and in accordance with National policies and guidelines, Graphene, an emerging, highly versatile carbon-based nanomaterial, presents a unique opportunity for Malaysia to develop a high value economic ecosystem within its industries. Isolated only in 2004, Graphene has superior physical properties such as electrical/thermal conductivity, high strength and high optical transparency, which, combined with its manufacturability, have raised tremendous possibilities for its application across several functions and make it highly interesting for several applications and industries. Currently, Graphene is still early in its development cycle, affording Malaysian companies time to develop their own applications instead of relying on international intellectual property and licenses.

Considering the potential, several leading countries are investing heavily in associated R&D. Approaches to Graphene research range from an expansive R&D focus (e.g., U.S. and the EU) to more focused approaches aimed at enhancing specific downstream applications with Graphene (e.g., South Korea). Faced with the need to push forward a multitude of development priorities, Malaysia must be targeted in its efforts to capture Graphene’s potential, both in terms of “how to compete” and “where to compete”. This National Graphene Action Plan 2020 lays out a set of priority applications that will be beneficial to the country as a whole and what the government will do to support these efforts.

Globally, much of the Graphene-related commercial innovation to date has been upstream, with producers developing techniques to manufacture Graphene at scale. There has also been some development in downstream sectors, as companies like Samsung, Bayer MaterialScience, BASF and Siemens explore product enhancement with Graphene in lithium-ion battery anodes and flexible displays, and specialty plastic and rubber composites. However the speed of development has been uneven, offering Malaysian industries willing to invest in innovation an opportunity to capture the value at stake. Since any innovation action plan has to be tailored to the needs and ambitions of local industry, Malaysia will focus its Graphene action plan initially on larger domestic industries (e.g., rubber) and areas already being targeted by the government for innovation such as energy storage for electric vehicles and conductive inks.

In addition to benefiting from the physical properties of Graphene, Malaysian downstream application providers may also capture the benefits of a modest input cost advantage for the domestic production of Graphene.  One commonly used Graphene manufacturing technique, the chemical vapour deposition (CVD) production method, requires methane as an input, which can be sourced economically from local biomass. While Graphene is available commercially from various producers around the world, downstream players may be able to enjoy some cost advantage from local Graphene supply. In addition, co-locating with a local producer for joint product development has the added benefit of speeding up the R&D lifecycle.

That business about finding downstream applications could also apply to the Canadian situation, where we typically offer our resources (upstream) but don’t have an active downstream business focus. For example, we have graphite mines in Ontario and Québec which supply graphite flakes for graphene production, all of which is upstream. Less well developed are any plans for Canadian downstream applications.

Finally, it was interesting to note that the Phantoms Foundation is organizing this Malaysian conference since the same organization is organizing the ‘2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition’ (you can find out more about the Oct. 18 – 20, 2016 event in my Sept. 23, 2016 posting). I think the Malaysians have a better title for their conference, far less unwieldy.

Phenomen: a future and emerging information technology project

A Sept. 19, 2016 news item on Nanowerk describes a new research project incorporating photonics, phononics, and radio frequency signal processing,

PHENOMEN is a groundbreaking project designed to harness the potential of combined phononics, photonics and radio-frequency (RF) electronic signals to lay the foundations of a new information technology. This new project, funded through the highly competitive H2020 [the European Union’s Horizon 2020 science funding programme] FET [Future and Emerging Technologies]-Open call, joins the efforts of three leading research institutes, three internationally recognised universities and a high-tech SME. The Consortium members kicked off the project with a meeting on Friday, September 16, 2016, at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), coordinated by ICREA Research Prof Dr Clivia M. Sotomayor-Torres of the ICN2’s Phononic and Photonic Nanostructures (P2N) Group.

A Sept. 16, 2016 ICN2 press release, which originated the news item, provides more detail,

Most information is currently transported by electrical charge (electrons) and by light (photons). Phonons are the quanta of lattice vibrations with frequencies covering a wide range up to tens of THz and provide coupling to the surrounding environment. In PHENOMEN the core of the research will be focused on phonon-based signal processing to enable on-chip synchronisation and transfer information carried between optical channels by phonons.

This ambitious prospect could serve as a future scalable platform for, e.g., hybrid information processing with phonons. To achieve it, PHENOMEN proposes to build the first practical optically-driven phonon sources and detectors including the engineering of phonon lasers to deliver coherent phonons to the rest of the chip pumped by a continuous wave optical source. It brings together interdisciplinary scientific and technology oriented partners in an early-stage research towards the development of a radically new technology.

The experimental implementation of phonons as information carriers in a chip is completely novel and of a clear foundational character. It deals with interaction and manipulation of fundamental particles and their intrinsic dual wave-particle character. Thus, it can only be possible with the participation of an interdisciplinary consortium which will create knowledge in a synergetic fashion and add value in the form of new theoretical tools, develop novel methods to manipulate coherent phonons with light and build all-optical phononic circuits enabled by optomechanics.

The H2020 FET-Open call “Novel ideas for radically new technologies” aims to support the early stages of joint science and technology research for radically new future technological possibilities. The call is entirely non-prescriptive with regards to the nature or purpose of the technologies that are envisaged and thus targets mainly the unexpected. PHENOMEN is one of the 13 funded Research & Innovation Actions and went through a selection process with a success rate (1.4%) ten times smaller than that for an ERC grant. The retained proposals are expected to foster international collaboration in a multitude of disciplines such as robotics, nanotechnology, neuroscience, information science, biology, artificial intelligence or chemistry.

The Consortium

The PHENOMEN Consortium is made up of:

  • 3 leading research institutes:
  • 3 universities with an internationally recognised track-record in their respective areas of expertise:
  • 1 industrial partner:

Radical copyright reform proposal in the European Union

It seems the impulse to maximize copyright control has overtaken European Union officials. A Sept. 14, 2016 news item on phys.org lays out a few details,

The EU will overhaul copyright law to shake up how online news and entertainment is paid for in Europe, under proposals announced by European Commission chief Jean-Claude Juncker Wednesday [Sept. 14, 2016].

Pop stars such as Coldplay and Lady Gaga will hail part of the plan as a new weapon to bring a fair fight to YouTube, the Google-owned video service that they say is sapping the music business.

But the reform plans have attracted the fury of filmmakers and start-up investors who see it as a threat to European innovation and a wrong-headed favour to powerful media groups.

A Sept. 14, 2016 European Commission press release provides the European Union’s version of why more stringent copyright is needed,

“I want journalists, publishers and authors to be paid fairly for their work, whether it is made in studios or living rooms, whether it is disseminated offline or online, whether it is published via a copying machine or commercially hyperlinked on the web.”–President Juncker, State of the Union 2016

On the occasion of President Juncker’s 2016 State of the Union address, the Commission today set out proposals on the modernisation of copyright to increase cultural diversity in Europe and content available online, while bringing clearer rules for all online players. The proposals will also bring tools for innovation to education, research and cultural heritage institutions.

Digital technologies are changing the way music, films, TV, radio, books and the press are produced, distributed and accessed. New online services such as music streaming, video-on-demand platforms and news aggregators have become very popular, while consumers increasingly expect to access cultural content on the move and across borders. The new digital landscape will create opportunities for European creators as long as the rules offer legal certainty and clarity to all players. As a key part of its Digital Single Market strategy, the Commission has adopted proposals today to allow:

  • Better choice and access to content online and across borders
  • Improved copyright rules on education, research, cultural heritage and inclusion of disabled people
  • A fairer and sustainable marketplace for creators, the creative industries and the press

Andrus Ansip, Vice-President for the Digital Single Market, said: “Europeans want cross-border access to our rich and diverse culture. Our proposal will ensure that more content will be available, transforming Europe’s copyright rules in light of a new digital reality. Europe’s creative content should not be locked-up, but it should also be highly protected, in particular to improve the remuneration possibilities for our creators. We said we would deliver all our initiatives to create a Digital Single Market by the end of the year and we keep our promises. Without a properly functioning Digital Single Market we will miss out on creativity, growth and jobs.”

Günther H. Oettinger, Commissioner for the Digital Economy and Society, said: “Our creative industries [emphasis mine] will benefit from these reforms which tackle the challenges of the digital age successfully while offering European consumers a wider choice of content to enjoy. We are proposing a copyright environment that is stimulating, fair and rewards investment.”

Today, almost half of EU internet users listen to music, watch TV series and films or play games online; however broadcasters and other operators find it hard to clear rights for their online or digital services when they want to offer them in other EU countries. Similarly, the socio-economically important sectors of education, research and cultural heritage too often face restrictions or legal uncertainty which holds back their digital innovation when using copyright protected content, including across borders. Finally, creators, other right holders and press publishers are often unable to negotiate the conditions and also payment for the online use of their works and performances.

Altogether, today’s copyright proposals have three main priorities:

1. Better choice and access to content online and across borders

With our proposal on the portability of online content presented in December 2015, we gave consumers the right to use their online subscriptions to films, music and ebooks when they are away from their home country, for example on holidays or business trips. Today, we propose a legal mechanism for broadcasters to obtain more easily the authorisations they need from right holders to transmit programmes online in other EU Member States. This covers programmes that broadcasters transmit online at the same time as their broadcast, as well as the catch-up services that they wish to make available online in other Member States, such as MyTF1 in France, ZDF Mediathek in Germany, TV3 Play in Denmark, Sweden and the Baltic States and AtresPlayer in Spain. Enabling broadcasters to show the vast majority of their content, such as news, cultural, political, documentary or entertainment programmes, in other Member States as well will give more choice to consumers.

Today’s rules also make it easier for operators who offer packages of channels (such as Proximus TV in Belgium, Movistar+ in Spain, Deutsche Telekom’s IPTV Entertain in Germany), to get the authorisations they need: instead of having to negotiate individually with every right holder in order to offer such packages of channels originating in other EU Member States, they will be able to get the licenses from collective management organisations representing right holders. This will also increase the choice of content for their customers.

To help development of Video-on-Demand (VoD) offerings in Europe, we ask Member States to set up negotiation bodies to help reach licensing deals, including those for cross-border services, between audiovisual rightholders and VoD platforms. A dialogue with the audiovisual industry on licensing issues and the use of innovative tools like licensing hubs will complement this mechanism.

To enhance access to Europe’s rich cultural heritage, the new Copyright Directive will help museums, archives and other institutions to digitise and make available across borders out-of-commerce works, such as books or films that are protected by copyright but no longer available to the public.

In parallel, the Commission will use its €1.46 billion Creative Europe MEDIA programme to further support the circulation of creative content across borders. This includes more funding for subtitling and dubbing; a new catalogue of European audiovisual works that VoD providers can use directly for programming; and online tools to improve the digital distribution of European audiovisual works and make them easier to find and view online.

These combined actions will encourage people to discover TV and radio programmes from other European countries, keep in touch with their home countries when living in another Member State and enhance the availability of European films, including across borders, hence highlighting Europe’s rich cultural diversity.

2. Improving copyright rules on research, education and inclusion of disabled people

Students and teachers are eager to use digital materials and technologies for learning, but today almost 1 in 4 educators encounter copyright-related restrictions in their digital teaching activities every week. The Commission has proposed today a new exception to allow educational establishments to use materials to illustrate teaching through digital tools and in online courses across borders.

The proposed Directive will also make it easier for researchers across the EU to use text and data mining (TDM) technologies to analyse large sets of data. This will provide a much-needed boost to innovative research, considering that today nearly all scientific publications are digital and their overall volume is increasing by 8-9% every year worldwide.
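For readers unfamiliar with the technique, text and data mining simply means running automated analyses over large bodies of text. Here is a toy sketch of the kind of bulk term counting a TDM exception would let researchers run across whole libraries of publications; the abstracts are invented and the tokenizer is deliberately simplistic, nothing like a production TDM pipeline:

```python
# Toy illustration of text and data mining (TDM): tally term frequencies
# across a small "corpus" of invented abstracts. The proposed copyright
# exception is about letting researchers run this kind of automated
# analysis over large sets of real publications.
from collections import Counter
import re

STOPWORDS = frozenset({"the", "to", "with", "on", "of", "and"})

# Hypothetical abstracts standing in for a large body of digital papers.
abstracts = [
    "Graphene neural probes record brain activity with high resolution.",
    "Flexible graphene transistors improve the signal to noise ratio.",
    "Neural implants based on graphene enable high resolution recording.",
]

def mine_terms(texts, stopwords=STOPWORDS):
    """Tokenize each text and count term occurrences across the corpus."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in stopwords:
                counts[token] += 1
    return counts

counts = mine_terms(abstracts)
print(counts.most_common(3))  # "graphene" tops the list with 3 mentions
```

Real TDM systems work over millions of full-text articles and use far more sophisticated language processing, but the copyright question is the same: the analysis requires making machine-readable copies of the corpus.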

The Commission also proposes a new mandatory EU exception which will allow cultural heritage institutions to preserve works digitally, crucial for the survival of cultural heritage and for citizens’ access in the long term.

Finally, the Commission is proposing legislation to implement the Marrakesh Treaty to facilitate access to published works for persons who are blind, have other visual impairments or are otherwise print disabled. These measures are important to ensure that copyright does not constitute a barrier to the full participation in society of all citizens and will allow for the exchange of accessible format copies within the EU and with third countries that are parties to the Treaty, avoiding duplication of work and waste of resources.

3. A fairer and sustainable marketplace for creators and press

The Copyright Directive aims to reinforce the position of right holders to negotiate and be remunerated for the online exploitation of their content on video-sharing platforms such as YouTube or Dailymotion. Such platforms will have an obligation to deploy effective means such as technology to automatically detect songs or audiovisual works which right holders have identified and agreed with the platforms either to authorise or remove.

Newspapers, magazines and other press publications have benefited from the shift from print to digital and online services like social media and news aggregators. It has led to broader audiences, but it has also impacted advertising revenue and made the licensing and enforcement of the rights in these publications increasingly difficult. The Commission proposes to introduce a new related right for publishers, similar to the right that already exists under EU law for film producers, record (phonogram) producers and other players in the creative industries like broadcasters.

The new right recognises the important role press publishers play in investing in and creating quality journalistic content, which is essential for citizens’ access to knowledge in our democratic societies. As they will be legally recognised as right holders for the very first time they will be in a better position when they negotiate the use of their content with online services using or enabling access to it, and better able to fight piracy. This approach will give all players a clear legal framework when licensing content for digital uses, and help the development of innovative business models for the benefit of consumers.

The draft Directive also obliges publishers and producers to be transparent and to inform authors or performers about the profits made with their works. It also puts in place a mechanism to help authors and performers obtain a fair share when negotiating remuneration with producers and publishers. This should lead to a higher level of trust among all players in the digital value chain.

Towards a Digital Single Market

As part of the Digital Single Market strategy presented in May 2015, today’s proposals complement the proposed regulation on portability of legal content (December 2015), the revised Audiovisual Media Services Directive and the Communication on online platforms (May 2016). Later this autumn the Commission will propose to improve enforcement of all types of intellectual property rights, including copyright.

Today’s EU copyright rules, presented along with initiatives to boost internet connectivity in the EU (press release, press conference at 15.15 CET), are part of the EU strategy to create a Digital Single Market (DSM). The Commission set out 16 initiatives (press release) and is on the right track to deliver all of them by the end of this year.

While Juncker mixes industry (publishers) with content creators (journalists, authors), Günther H. Oettinger, Commissioner for the Digital Economy and Society clearly states that ‘creative industries’ are to be the beneficiaries. Business interests have tended to benefit disproportionately under current copyright regimes. The disruption posed by digital content has caused these businesses some agony and they have responded by lobbying vigorously to maximize copyright. For the most part, individual musicians, authors, visual artists and other content creators are highly unlikely to benefit from this latest reform.

I’m not a big fan of Google or its ‘stepchild’ YouTube, but it should be noted that at least one career would not have existed without free and easy access to videos: Justin Bieber’s. He may not have made a penny from his YouTube videos, but that hasn’t hurt his financial picture. Without YouTube, he would have been unlikely to get the exposure and recognition which have in turn led him to some serious financial opportunities.

I am somewhat less interested in the show business aspect than I am in the impact this could have on science, as per section 2 (Improving copyright rules on research, education and inclusion of disabled people) of the European Commission press release. A Sept. 14, 2016 posting by Mike Masnick for Techdirt about a previous ruling on copyright in Europe provides some insight into the possible future impacts on science research,

Last week [Sept. 8, 2016 posting], we wrote about a terrible copyright ruling from the Court of Justice of the EU, which basically says that any for-profit entity that links to infringing material can be held liable for direct infringement, as the “for-profit” nature of the work is seen as evidence that they knew or should have known the work was infringing. We discussed the problems with this standard in our post, and there’s been a lot of commentary on what this will mean for Europe — with a variety of viewpoints being expressed. One really interesting set of concerns comes from Egon Willighagen, from Maastricht University, noting what a total and complete mess this is going to be for scientists, who rarely consider the copyright status of various data as databases they rely on are built up …

This is, of course, not the first time we’ve noted the problems of intellectual property in the science world. From various journals locking up research to the rise of patents scaring off researchers from sharing data, intellectual property keeps getting in the way of science, rather than supporting it. And that’s extremely unfortunate. I mean, after all, in the US specifically, the Constitution specifically says that copyrights and patents are supposed to be about “promoting the progress of science and the useful arts.”

Over and over again, though, we see that the law has been twisted and distorted and extended and expanded in such a way that is designed to protect a very narrow set of interests, at the expense of many others, including the public who would benefit from greater sharing and collaboration and open flow of data among scientific researchers. …

Masnick has also written up a Sept. 14, 2016 posting devoted to the EU copyright proposal itself,

This is not a surprise given the earlier leaks of what the EU Commission was cooking up for a copyright reform package, but the end result is here and it’s a complete disaster for everyone. And I do mean everyone. Some will argue that it’s a gift to Hollywood and legacy copyright interests — and there’s an argument that that’s the case. But the reality is that this proposal is so bad that it will end up doing massive harm to everyone. It will clearly harm independent creators and the innovative platforms that they rely on. And, because those platforms have become so important to even the legacy entertainment industry, it will harm them too. And, worst of all, it will harm the public greatly. It’s difficult to see how this proposal will benefit anyone, other than maybe some lawyers.

So the EU Commission has taken the exact wrong approach. It’s one that’s almost entirely about looking backwards and “protecting” old ways of doing business, rather than looking forward, and looking at what benefits the public, creators and innovators the most. If this proposal actually gets traction, it will be a complete disaster for the EU innovative community. Hopefully, Europeans speak out, vocally, about what a complete disaster this would be.

So, according to Masnick not even business interests will benefit.

Trans-Atlantic Platform (T-AP) is a unique collaboration of humanities and social science researchers from Europe and the Americas

Launched in 2013, the Trans-Atlantic Platform is co-chaired by Dr.Ted Hewitt, president of the Social Sciences and Humanities Research Council of Canada (SSHRC) , and Dr. Renée van Kessel-Hagesteijn, Netherlands Organisation for Scientific Research—Social Sciences (NWO—Social Sciences).

An EU (European Union) publication, International Innovation features an interview about T-AP with Ted Hewitt in a June 30, 2016 posting,

The Trans-Atlantic Platform is a unique collaboration of humanities and social science funders from Europe and the Americas. International Innovation’s Rebecca Torr speaks with Ted Hewitt, President of the Social Sciences and Humanities Research Council and Co-Chair of T-AP to understand more about the Platform and its pilot funding programme, Digging into Data.

Many commentators have called for better integration between natural and social scientists, to ensure that the societal benefits of STEM research are fully realised. Does the integration of diverse scientific disciplines form part of T-AP’s remit, and if so, how are you working to achieve this?

T-AP was designed primarily to promote and facilitate research across SSH. However, given the Platform’s thematic priorities and the funding opportunities being contemplated, we anticipate that a good number of non-SSH [emphasis mine] researchers will be involved.

As an example, on March 1, T-AP launched its first pilot funding opportunity: the T-AP Digging into Data Challenge. One of the sponsors is the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada’s federal funding agency for research in the natural sciences and engineering. Their involvement ensures that the perspective of the natural sciences is included in the challenge. The Digging into Data Challenge is open to any project that addresses research questions in the SSH by using large-scale digital data analysis techniques, and is then able to show how these techniques can lead to new insights. And the challenge specifically aims to advance multidisciplinary collaborative projects.

When you tackle a research question or undertake research to address a social challenge, you need collaboration between various SSH disciplines or between SSH and STEM disciplines. So, while proposals must address SSH research questions, the individual teams often involve STEM researchers, such as computer scientists.

In previous rounds of the Digging into Data Challenge, this has led to invaluable research. One project looked at how the media shaped public opinion around the 1918 Spanish flu pandemic. Another used CT scans to examine hundreds of mummies, ultimately discovering that atherosclerosis, a form of heart disease, was prevalent 4,000 years ago. In both cases, these multidisciplinary historical research projects have helped inform our thinking of the present.

Of course, Digging into Data isn’t the only research area in which T-AP will be involved. Since its inception, T-AP partners have identified three priority areas beyond digital scholarship: diversity, inequality and difference; resilient and innovative societies; and transformative research on the environment. Each of these areas touches on a variety of SSH fields, while the transformative research on the environment area has strong connections with STEM fields. In September 2015, T-AP organised a workshop around this third priority area; environmental science researchers were among the workshop participants.

I wish Hewitt hadn’t described researchers from disciplines other than the humanities and social sciences as “non-SSH.” The designation divides the world in two: us and the ‘non’ group, take your pick: non-Catholic, non-Muslim, non-American, non-STEM, non-SSH, and so on.

Getting back to the interview, it is surprisingly Canuck-centric in places,

How does T-AP fit in with Social Sciences and Humanities Research Council of Canada (SSHRC)’s priorities?

One of the objectives in SSHRC’s new strategic plan is to develop partnerships that enable us to expand the reach of our funding. As T-AP provides SSHRC with links to 16 agencies across Europe and the Americas, it is an efficient mechanism for us to broaden the scope of our support and promotion of post-secondary-based research and training in SSH.

It also provides an opportunity to explore cutting edge areas of research, such as big data (as we did with the first call we put out, Digging into Data). The research enterprise is becoming increasingly international, by which I mean that researchers are working on issues with international dimensions or collaborating in international teams. In this globalised environment, SSHRC must partner with international funders to support research excellence. By developing international funding opportunities, T-AP helps researchers create teams better positioned to tackle the most exciting and promising research topics.

Finally, it is a highly effective way of broadly promoting the value of SSH research throughout Canada and around the globe. There are significant costs and complexities involved in international research, and uncoordinated funding from multiple national funders can actually create barriers to collaboration. A platform like T-AP helps funders coordinate and streamline processes.

The interview gets a little more international scope when it turns to the data project,

What is the significance of your pilot funding programme in digital scholarship and what types of projects will it support?

The T-AP Digging into Data Challenge is significant for several reasons. First, the geographic reach of Digging is truly significant. With 16 participants from 11 countries, this round of Digging has significantly broader participation than previous rounds. This is also the first time Digging into Data includes funders from South America.

The T-AP Digging into Data Challenge is open to any research project that addresses questions in SSH. What those projects will end up being is anybody’s guess; projects from past competitions have involved fields ranging from musicology to anthropology to political science.

The Challenge’s main focus is, of course, the use of big data in research.

You may want to read the interview in its entirety here.

I have checked out the Trans-Atlantic Platform website but cannot determine how someone or some institution might consult that site for information on how to get involved in their projects or get funding. However, there is a T-AP Digging into Data website where there is evidence of the first international call for funding submissions. Sadly, the deadline for the 2016 call has passed if the website is to be believed (sometimes people are late when changing deadline dates).

nanoIndEx publishes guidance document on assessing exposure to airborne nanomaterials

Lynn Bergeson’s June 21, 2016 posting on Nanotechnology Now announced a newly published guidance document from the European Union’s nanoIndEx,

… The guidance document summarizes the key findings of the project, and is intended to present the state of the art in personal exposure assessment for nanomaterials. The conclusions section states: “Unfortunately, many nanotoxicological studies have used excessive, unrealistically high doses of [manufactured nanomaterials] and it is therefore debatable what their findings mean for the lower real-world exposures of humans. Moreover, it is not clear how to establish realistic exposure dose testing in toxicological studies, as available data on occupational exposure levels are still sparse.” According to the guidance document, future studies should focus on the potentially adverse effects of low-level and realistic exposure to manufactured nanomaterials, especially through the use of exposure doses similar to those identified in environmental sampling.

You can find the 49pp PDF here or here. To whet your appetite, here’s a bit from the introduction to the “Exposure to Airborne Nanomaterials; A Guidance Document,”

… While human exposure to MNMs may in principle occur during any stage of the material’s lifecycle, it is most likely in workplaces, where these materials are produced or handled in large quantities or over long periods of time. Inhalation is considered as the most critical uptake route, because the small particles are able to penetrate deep into the lung and deposit in the gas exchange region. Inhalation exposure to airborne nanomaterials therefore needs to be assessed in view of worker protection.

Exposure to airborne particles can generally best be assessed by measuring the individual exposure in the personal breathing zone (PBZ) of an individual. The PBZ is defined as a 30 cm hemisphere around the mouth and nose [2]. Measurements in the PBZ require instruments that are small and lightweight. The individual exposure specifically to MNMs [manufactured nanomaterials, sometimes also known as engineered nanomaterials or nanoparticles] has not been assessable in the past due to the lack of suitable personal samplers and/or monitors. Instead, most studies related to exposure to MNMs have been carried out using either bulky static measurement equipment or personal samplers that are not nanospecific. In recent years, novel samplers and monitors have been introduced that allow for an assessment of the more nanospecific personal exposure to airborne MNMs. In the terminology used in nanoIndEx, samplers are devices that collect particles on a substrate, e.g. a filter or a flat surface, for subsequent analysis, whereas monitors are real-time instruments that deliver information on airborne concentrations with high time resolution. Scientifically sound investigations of the accuracy, comparability and field applicability of these novel samplers and monitors had been lacking. … (p. 4 print; p. 6 PDF)

There’s also a brief description of the nanoIndEx project in the Introduction,

The three-year project started on June 1st, 2013, and has been funded under the frame of SIINN, the ERA-NET [European Research Area Network] for a Safe Implementation of Innovative Nanoscience and Nanotechnology [SIINN]. The aim of the project was to scrutinise the instrumentation available for personal exposure assessment concerning its field readiness and usability, in order to use this information to generate reliable data on personal exposure in real workplaces and to eventually distribute the findings widely among the interested public. This Guidance Document you are holding in your hands summarises the key findings of the project. (p. 5 print; p. 7 PDF)

As I understand it, the area of most concern where nanotoxicology is concerned would be inhalation of nanoparticles into the lungs as the body has fewer protections in the respiratory tract than it has elsewhere, e.g. skin or digestive system.

Lungs: EU SmartNanoTox and Pneumo NP

I have three news bits about lungs: one concerns relatively new techniques for testing the impact nanomaterials may have on lungs; the other two concern developments at PneumoNP, the first regarding a new technique for getting antibiotics into a lung infected with pneumonia and the second regarding a new antibiotic.

Predicting nanotoxicity in the lungs

From a June 13, 2016 news item on Nanowerk,

Scientists at the Helmholtz Zentrum München [German Research Centre for Environmental Health] have received more than one million euros in the framework of the European Horizon 2020 Initiative [a major European Commission science funding initiative successor to the Framework Programme 7 initiative]. Dr. Tobias Stöger and Dr. Otmar Schmid from the Institute of Lung Biology and Disease and the Comprehensive Pneumology Center (CPC) will be using the funds to develop new tests to assess risks posed by nanomaterials in the airways. This could contribute to reducing the need for complex toxicity tests.

A June 13, 2016 Helmholtz Zentrum München (German Research Centre for Environmental Health) press release, which originated the news item, expands on the theme,

Nanoparticles are extremely small particles that can penetrate into remote parts of the body. While researchers are investigating various strategies for harvesting the potential of nanoparticles for medical applications, they could also pose inherent health risks*. Currently the hazard assessment of nanomaterials necessitates a complex and laborious procedure. In addition to complete material characterization, controlled exposure studies are needed for each nanomaterial in order to guarantee the toxicological safety.

As a part of the EU SmartNanoTox project, which has now been funded with a total of eight million euros, eleven European research partners, including the Helmholtz Zentrum München, want to develop a new concept for the toxicological assessment of nanomaterials.

Reference database for hazardous substances

Biologist Tobias Stöger and physicist Otmar Schmid, both research group heads at the Institute of Lung Biology and Disease, hope that the use of modern methods will help to advance the assessment procedure. “We hope to make more reliable nanotoxicity predictions by using modern approaches involving systems biology, computer modelling, and appropriate statistical methods,” states Stöger.

The lung experts are concentrating primarily on the respiratory tract. The approach involves defining a representative selection of toxic nanomaterials and conducting an in-depth examination of their structure and the various molecular modes of action that lead to their toxicity. These data are then digitised and transferred to a reference database for new nanomaterials. Economical tests that are easy to conduct should then make it possible to assess the toxicological potential of these new nanomaterials by comparing the test results with what is already known from the database. “This should make it possible to predict whether or not a newly developed nanomaterial poses a health risk,” Otmar Schmid says.

* Review: Schmid, O. and Stoeger, T. (2016). Surface area is the biologically most effective dose metric for acute nanoparticle toxicity in the lung. Journal of Aerosol Science, DOI:10.1016/j.jaerosci.2015.12.006
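To make the database idea concrete: the workflow Stöger and Schmid describe (characterise known materials, digitise the results into a reference database, then compare new materials against it) can be caricatured as a nearest-neighbour lookup. Everything in this sketch, the descriptors, the numbers and the hazard labels, is invented for illustration and is not the project’s actual method or data:

```python
# Caricature of the SmartNanoTox "reference database" idea: store invented
# descriptor values for characterised nanomaterials with a hazard label,
# then rate a new material by its nearest neighbour in descriptor space.
import math

# (specific surface area in m2/g, mean diameter in nm) -> hypothetical label.
# All names, values and labels below are made up for illustration.
REFERENCE_DB = {
    "CNT-A":  ((250.0, 15.0), "hazardous"),
    "TiO2-B": ((50.0, 80.0), "low risk"),
    "Ag-C":   ((30.0, 40.0), "hazardous"),
    "SiO2-D": ((60.0, 120.0), "low risk"),
}

def predict_hazard(descriptors, database=REFERENCE_DB):
    """Return the hazard label of the closest reference material."""
    def distance(entry):
        ref_descriptors, _label = entry
        return math.dist(descriptors, ref_descriptors)
    _, label = min(database.values(), key=distance)
    return label

# A new material with a large surface area and a small diameter sits
# closest to CNT-A in this toy descriptor space.
print(predict_hazard((230.0, 20.0)))  # -> hazardous
```

The project’s real predictions would rest on systems biology data and statistical models rather than two invented numbers, but the principle of ranking a new material by its similarity to already-characterised ones is the same.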

The SmartNanoTox webpage is here on the European Commission’s Cordis website.

Carrying antibiotics into lungs (PneumoNP)

I received this news from the European Commission’s PneumoNP project (I wrote about PneumoNP in a June 26, 2014 posting when it was first announced). This latest development is from a March 21, 2016 email (the original can be found here on the How to pack antibiotics in nanocarriers webpage on the PneumoNP website),

PneumoNP researchers work on a complex task: attaching or encapsulating antibiotics with nanocarriers that are stable enough to be included in an aerosol formulation, to pass through the respiratory tract and finally to deliver antibiotics to the areas of the lungs affected by pneumonia infections. The good news is that they have finally identified two promising methods for generating nanocarriers.

Until now, compacting polymer coils into single-chain nanoparticles in water under mild conditions was an unsolved problem. But in Spain, IK4-CIDETEC scientists developed a covalent-based method that produces nanocarriers with remarkable stability under those particular conditions. As the cherry on the cake, the preparation is scalable for industrial production. IK4-CIDETEC has patented the process.

Fig.: A polymer coil (step 1) compacts into a nanocarrier with cross-linkers (step 2). Then, antibiotics get attached to the nanocarrier (step 3).


At the same time, another route to producing lipidic nanocarriers has been developed by researchers from Utrecht University. In particular, they optimised a method that assembles lipids directly around a drug. As a result, the lipidic nanocarriers generated show encouraging stability properties and are able to carry sufficient quantities of antibiotics.

Fig.: In the presence of antibiotics, a lipidic layer (step 1) aggregates the drug (step 2) until the lipids form a capsule around the antibiotics (step 3).

Assays of both polymeric and lipidic nanocarriers are currently being performed by the ITEM Fraunhofer Institute in Germany, Ingeniatrics Tecnologias in Spain and Erasmus Medical Centre in the Netherlands. Some of these tests check that the nanocarriers are not toxic to cells; others verify the efficiency of the antibiotics against Klebsiella pneumoniae bacteria when they are attached to the nanocarriers.

A new antibiotic for pneumonia (PneumoNP)

A June 14, 2016 PneumoNP press release (received via email) announces work on a promising new approach to an antibiotic for pneumonia,

The antimicrobial peptide M33 may be the long-sought substitute to treat difficult lung infections, like multi-drug resistant pneumonia.

In 2013, the European Respiratory Society predicted 3 million cases of pneumonia in Europe every year [1]. The standard treatment for pneumonia is intravenous administration of a combination of drugs. This leads to the development of antibiotic resistance in the population. Gradually, doctors are running out of solutions to cure patients. An Italian company suggests a new option: the M33 peptide.

A few years ago, the Italian company SetLance SRL decided to investigate the M33 peptide. The antimicrobial peptide is an optimised version of an artificial peptide sequence selected for its efficacy and stability. So far, it has shown encouraging in-vitro results against multidrug-resistant Gram-negative bacteria, including Klebsiella pneumoniae. With the support of EU funding to the PneumoNP project, SetLance SRL had the opportunity to develop a new formulation of M33 that enhances its antimicrobial activity.

The new formulation of M33 fights Gram-negative bacteria in three steps. First, M33 binds to the lipopolysaccharides (LPS) on the outer membrane of the bacteria. Then, the molecule forms a helix and finally disrupts the membrane, causing the cytoplasm to leak. The peptide enabled up to 80% of mice to survive Pseudomonas aeruginosa-based lung infections. Beyond these encouraging results, the toxicity of the new M33 formulation appears to be much lower than that of antimicrobial peptides currently used in clinical practice, such as colistin [2].

Lately, SetLance has scaled up the synthesis route and is now able to produce several hundred milligrams per batch. The molecule is robust enough for industrial production. The drug may enter clinical development and validation at the beginning of 2018.

[1] http://www.erswhitebook.org/chapters/acute-lower-respiratory-infections/pneumonia/
[2] Ceccherini et al., Antimicrobial activity of levofloxacin-M33 peptide conjugation or combination, Chem Med Comm. 2016; Brunetti et al., In vitro and in vivo efficacy, toxicity, bio-distribution and resistance selection of a novel antibacterial drug candidate. Scientific Reports 2016

I believe all the references are open access.

Brief final comment

The only element linking these news bits together is that they concern the lungs.

UK and US issue documents on nanomaterial safety to support safe work with nanomaterials

I am featuring two bits of information about nanosafety, first from the UK and then from the US.

UK and nanosafety

A May 30, 2016 news item on Nanowerk announces a not particularly exciting but necessary report on handling nanomaterials safely (Note: A link has been removed),

The UK Nanosafety Group (UKNSG) has updated and published a 2nd edition of guidance (pdf) to support safe and responsible working practices with nanomaterials in research and development laboratories.

A May 25, 2016 UK Nanosafety Group press release, which originated the news item, provides more detail,

The document aims to provide guidance on factors relating to establishing a safe workplace and good safety practice when working with particulate nanomaterials. It is applicable to a wide range of nanomaterials, including particles, fibres, powders, tubes and wires, as well as aggregates and agglomerates. Recognising previous and current uncertainty in developing effective risk management for nanomaterials, it advocates a precautionary strategy to minimise potential exposure.

The 2nd edition of the guidance provides updates to account for changes in legislation, recent studies in the literature, and best practice since 2012. In particular, specific sections have been revised to account for the full implementation of Global Harmonised System (GHS) which came into force on 1 June 2015 through the CLP [Classification, Labelling and Packaging] regulations. The document explains the approaches that are presently being used to select effective control measures for the management of nanomaterials, more specifically control banding tools presently in use. Significant changes can be found in the following sections: ‘Hazard Banding’, ‘Exposure Control’, ‘Toxicology’, and ‘Monitoring’.

Of relevance to employers, managers, health and safety advisors, and users of particulate nanomaterials in research and development, the guidance should be read in conjunction with the Approved Code of Practice on COSHH [Control of Substances Hazardous to Health], together with the other literature referred to in the document. The document has been produced taking account of the safety information currently available and is presented in the format of guidance and recommendations to support implementation of suitable protocols and control measures by employers and employees. It is intended that the document will be reviewed and updated on a periodic basis to keep abreast of the evolving nature of the content.

The guidance, titled "Working Safely with Nanomaterials in Research & Development," runs about 48 pp. and can be found here.

Tidbit about US nano environmental, health, and safety

Sylvia Palmer has written a May 27, 2016 update for ChemicalWatch on reports about or including information about environmental, health, and safety measures being taken in the US,

Three reports released recently by the National Nanotechnology Initiative (NNI) highlight the US government's investments and initiatives in nanotechnology. They also detail current progress, the need for further understanding of exposure to nanomaterials in consumer products, and how companies can protect their nanotechnology workforce.

NNI’s Quantifying exposure to engineered nanomaterials (QEEN) from manufactured products: addressing environmental, health, and safety implications notes significant progress has been made in the ability to quantify nanomaterial exposures. However, it says greater understanding of exposure risks in “real-world” scenarios is needed. Alternative testing models and high-throughput methods for rapidly estimating exposures will be further explored, it adds.

You can find the report, Quantifying exposure to engineered nanomaterials (QEEN) from manufactured products: addressing environmental, health, and safety implications, here. Palmer’s article briefly describes the other two reports which contain information about US nano environmental, health, and safety efforts.

There is more about the three reports in an April 11, 2016 posting by Lloyd Whitman (Assistant Director for Nanotechnology and Advanced Materials, White House Office of Science and Technology Policy) and Treye Thomas (leader of the Chemical Hazards Program team in the U.S. Consumer Product Safety Commission, and Coordinator for Environmental, Health, and Safety Research under the National Nanotechnology Initiative) on the White House blog,

The recently released NNI Supplement to the President’s Budget for Fiscal Year 2017, which serves as the annual report for the NNI, highlights the programs and coordinated activities taking place across the many departments, independent agencies, and commissions participating today in the NNI—an initiative that continues to serve as a model for effective coordination of Federal science and technology R&D. As detailed in this report, nanoEHS activities continue to account for about 10 percent of the annual NNI budget, with cumulative Federal R&D investments in this area exceeding $1 billion over the past decade. This report includes descriptions of a wide variety of individual agency and coordinated activities supporting the responsible development of nanotechnology.

To understand and control the risks of using any new materials in consumer products, it is important to understand the potential for exposure and any associated hazards across product life cycles. Last month, the NNI released a report, Quantifying Exposure to Engineered Nanomaterials (QEEN) from Manufactured Products: Addressing Environmental, Health, and Safety Implications, summarizing a workshop on this topic sponsored by the U.S. Consumer Product Safety Commission (CPSC). The main goals of the workshop were to assess progress in developing tools and methods for quantifying exposure to engineered nanomaterials across the product life cycle, and to identify new research needed to advance exposure assessment for nanotechnology-enabled products. …

The technical experts who participated in CPSC’s workshop recommended that future work focus on the complex issue of determining biomarkers of exposure linked to disease, which will require substantive public–private collaboration, partnership, and knowledge sharing. Recognizing these needs, the President’s 2017 Budget request for CPSC includes funds for a new nanotechnology center led by the National Institute of Environmental Health Sciences (NIEHS) to develop test methods and to quantify and characterize the presence, release, and mechanisms of consumer exposure to nanomaterials in consumer products. This cost-effective, interagency collaboration will enable CPSC—through NIEHS—to collect the needed data to inform the safety of nanotechnology in consumer products and allow CPSC to benefit from NIEHS’s scientific network and experience.

Managing EHS risks across a product’s lifecycle includes protecting the workers who manufacture those products. The National Institute for Occupational Safety and Health has issued a series of documents providing guidance to this emerging industry, including the recently released publication Building a Safety Program to Protect the Nanotechnology Workforce: A Guide for Small to Medium-Sized Enterprises. This guide provides business owners with the tools necessary to develop and implement a written health and safety program to protect their employees.

Whitman and Thomas also mention a June 2016 international conference in the context of this news,

The responsible development of nanotechnology is a goal that the United States shares with many countries. The United States and the European Union are engaged in notable cooperation on this front. European and American scientists engaged in nanoEHS research convene annually for a joint workshop to identify areas of shared interest and mechanisms for collaboration to advance nanoEHS science. The 2016 joint workshop will be held on June 6–7, 2016 in Arlington, VA, and is free and open to the public. …