Tag Archives: European Union

Phenomen: a future and emerging information technology project

A Sept. 19, 2016 news item on Nanowerk describes a new research project incorporating photonics, phononics, and radio frequency signal processing,

PHENOMEN is a groundbreaking project designed to harness the potential of combined phononics, photonics and radio-frequency (RF) electronic signals to lay the foundations of a new information technology. This new project, funded through the highly competitive H2020 [the European Union’s Horizon 2020 science funding programme] FET [Future and Emerging Technologies]-Open call, joins the efforts of three leading research institutes, three internationally recognised universities and a high-tech SME. The Consortium members kicked off the project with a meeting on Friday September 16, 2016, at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), coordinated by ICREA Research Prof Dr Clivia M. Sotomayor-Torres, of the ICN2’s Phononic and Photonic Nanostructures (P2N) Group.

A Sept. 16, 2016 ICN2 press release, which originated the news item, provides more detail,

Most information is currently transported by electrical charge (electrons) and by light (photons). Phonons are the quanta of lattice vibrations with frequencies covering a wide range up to tens of THz and provide coupling to the surrounding environment. In PHENOMEN the core of the research will be focused on phonon-based signal processing to enable on-chip synchronisation and the transfer of information between optical channels by phonons.

This ambitious prospect could serve as a future scalable platform for, e.g., hybrid information processing with phonons. To achieve it, PHENOMEN proposes to build the first practical optically-driven phonon sources and detectors including the engineering of phonon lasers to deliver coherent phonons to the rest of the chip pumped by a continuous wave optical source. It brings together interdisciplinary scientific and technology oriented partners in an early-stage research towards the development of a radically new technology.

The experimental implementation of phonons as information carriers in a chip is completely novel and of a clear foundational character. It deals with interaction and manipulation of fundamental particles and their intrinsic dual wave-particle character. Thus, it can only be possible with the participation of an interdisciplinary consortium which will create knowledge in a synergetic fashion and add value in the form of new theoretical tools, novel methods to manipulate coherent phonons with light, and all-optical phononic circuits enabled by optomechanics.

The H2020 FET-Open call “Novel ideas for radically new technologies” aims to support the early stages of joint science and technology research for radically new future technological possibilities. The call is entirely non-prescriptive with regards to the nature or purpose of the technologies that are envisaged and thus targets mainly the unexpected. PHENOMEN is one of the 13 funded Research & Innovation Actions and went through a selection process with a success rate (1.4%) ten times smaller than that for an ERC grant. The retained proposals are expected to foster international collaboration in a multitude of disciplines such as robotics, nanotechnology, neuroscience, information science, biology, artificial intelligence or chemistry.

The Consortium

The PHENOMEN Consortium is made up of:

  • 3 leading research institutes:
  • 3 universities with an internationally recognised track-record in their respective areas of expertise:
  • 1 industrial partner:

Radical copyright reform proposal in the European Union

It seems the impulse to maximize copyright control has overtaken European Union officials. A Sept. 14, 2016 news item on phys.org lays out a few details,

The EU will overhaul copyright law to shake up how online news and entertainment is paid for in Europe, under proposals announced by European Commission chief Jean-Claude Juncker Wednesday [Sept. 14, 2016].

Pop stars such as Coldplay and Lady Gaga will hail part of the plan as a new weapon to bring a fair fight to YouTube, the Google-owned video service that they say is sapping the music business.

But the reform plans have attracted the fury of filmmakers and start-up investors who see it as a threat to European innovation and a wrong-headed favour to powerful media groups.

A Sept. 14, 2016 European Commission press release provides the European Union’s version of why more stringent copyright is needed,

“I want journalists, publishers and authors to be paid fairly for their work, whether it is made in studios or living rooms, whether it is disseminated offline or online, whether it is published via a copying machine or commercially hyperlinked on the web.”–President Juncker, State of the Union 2016

On the occasion of President Juncker’s 2016 State of the Union address, the Commission today set out proposals on the modernisation of copyright to increase cultural diversity in Europe and content available online, while bringing clearer rules for all online players. The proposals will also bring tools for innovation to education, research and cultural heritage institutions.

Digital technologies are changing the way music, films, TV, radio, books and the press are produced, distributed and accessed. New online services such as music streaming, video-on-demand platforms and news aggregators have become very popular, while consumers increasingly expect to access cultural content on the move and across borders. The new digital landscape will create opportunities for European creators as long as the rules offer legal certainty and clarity to all players. As a key part of its Digital Single Market strategy, the Commission has adopted proposals today to allow:

  • Better choice and access to content online and across borders
  • Improved copyright rules on education, research, cultural heritage and inclusion of disabled people
  • A fairer and sustainable marketplace for creators, the creative industries and the press

Andrus Ansip, Vice-President for the Digital Single Market, said: “Europeans want cross-border access to our rich and diverse culture. Our proposal will ensure that more content will be available, transforming Europe’s copyright rules in light of a new digital reality. Europe’s creative content should not be locked up, but it should also be highly protected, in particular to improve the remuneration possibilities for our creators. We said we would deliver all our initiatives to create a Digital Single Market by the end of the year and we keep our promises. Without a properly functioning Digital Single Market we will miss out on creativity, growth and jobs.”

Günther H. Oettinger, Commissioner for the Digital Economy and Society, said: “Our creative industries [emphasis mine] will benefit from these reforms which tackle the challenges of the digital age successfully while offering European consumers a wider choice of content to enjoy. We are proposing a copyright environment that is stimulating, fair and rewards investment.”

Today, almost half of EU internet users listen to music, watch TV series and films or play games online; however broadcasters and other operators find it hard to clear rights for their online or digital services when they want to offer them in other EU countries. Similarly, the socio-economically important sectors of education, research and cultural heritage too often face restrictions or legal uncertainty which holds back their digital innovation when using copyright protected content, including across borders. Finally, creators, other right holders and press publishers are often unable to negotiate the conditions and also payment for the online use of their works and performances.

Altogether, today’s copyright proposals have three main priorities:

1. Better choice and access to content online and across borders

With our proposal on the portability of online content presented in December 2015, we gave consumers the right to use their online subscriptions to films, music, ebooks when they are away from their home country, for example on holidays or business trips. Today, we propose a legal mechanism for broadcasters to obtain more easily the authorisations they need from right holders to transmit programmes online in other EU Member States. This is about programmes that broadcasters transmit online at the same time as their broadcast as well as their catch-up services that they wish to make available online in other Member States, such as MyTF1 in France, ZDF Mediathek in Germany, TV3 Play in Denmark, Sweden and the Baltic States and AtresPlayer in Spain. Empowering broadcasters to make the vast majority of their content, such as news, cultural, political, documentary or entertainment programmes, available also in other Member States will give more choice to consumers.

Today’s rules also make it easier for operators who offer packages of channels (such as Proximus TV in Belgium, Movistar+ in Spain, Deutsche Telekom’s IPTV Entertain in Germany), to get the authorisations they need: instead of having to negotiate individually with every right holder in order to offer such packages of channels originating in other EU Member States, they will be able to get the licenses from collective management organisations representing right holders. This will also increase the choice of content for their customers.

To help development of Video-on-Demand (VoD) offerings in Europe, we ask Member States to set up negotiation bodies to help reach licensing deals, including those for cross-border services, between audiovisual rightholders and VoD platforms. A dialogue with the audiovisual industry on licensing issues and the use of innovative tools like licensing hubs will complement this mechanism.

To enhance access to Europe’s rich cultural heritage, the new Copyright Directive will help museums, archives and other institutions to digitise and make available across borders out-of-commerce works, such as books or films that are protected by copyright, but no longer available to the public.

In parallel the Commission will use its €1.46 billion Creative Europe MEDIA programme to further support the circulation of creative content across borders. This includes more funding for subtitling and dubbing; a new catalogue of European audiovisual works for VoD providers that they can directly use for programming; and online tools to improve the digital distribution of European audiovisual works and make them easier to find and view online.

These combined actions will encourage people to discover TV and radio programmes from other European countries, keep in touch with their home countries when living in another Member State and enhance the availability of European films, including across borders, hence highlighting Europe’s rich cultural diversity.

2. Improving copyright rules on research, education and inclusion of disable [sic] people

Students and teachers are eager to use digital materials and technologies for learning, but today almost 1 in 4 educators encounter copyright-related restrictions in their digital teaching activities every week. The Commission has proposed today a new exception to allow educational establishments to use materials to illustrate teaching through digital tools and in online courses across borders.

The proposed Directive will also make it easier for researchers across the EU to use text and data mining (TDM) technologies to analyse large sets of data. This will provide a much needed boost to innovative research considering that today nearly all scientific publications are digital and their overall volume is increasing by 8-9% every year worldwide.
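To put that growth figure in perspective, a quick back-of-the-envelope calculation (my own, not the Commission’s) shows what 8-9% annual growth means: the volume of scientific publications doubles roughly every eight to nine years.

```python
import math

# Doubling time at a constant annual growth rate: t = ln(2) / ln(1 + rate)
for rate in (0.08, 0.09):
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%} annual growth: volume doubles in about {years:.1f} years")
```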

The Commission also proposes a new mandatory EU exception which will allow cultural heritage institutions to preserve works digitally, crucial for the survival of cultural heritage and for citizens’ access in the long term.

Finally, the Commission is proposing legislation to implement the Marrakesh Treaty to facilitate access to published works for persons who are blind, have other visual impairments or are otherwise print disabled. These measures are important to ensure that copyright does not constitute a barrier to the full participation in society of all citizens and will allow for the exchange of accessible format copies within the EU and with third countries that are parties to the Treaty, avoiding duplication of work and waste of resources.

3. A fairer and sustainable marketplace for creators and press

The Copyright Directive aims to reinforce the position of right holders to negotiate and be remunerated for the online exploitation of their content on video-sharing platforms such as YouTube or Dailymotion. Such platforms will have an obligation to deploy effective means such as technology to automatically detect songs or audiovisual works which right holders have identified and agreed with the platforms either to authorise or remove.
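An aside from me, not the Commission: in its simplest form, such automatic detection is a lookup of content fingerprints against a registry of works that right holders have identified. The toy sketch below uses an exact hash purely for illustration; real systems such as YouTube’s Content ID rely on perceptual fingerprints that survive re-encoding.

```python
import hashlib

def fingerprint(media: bytes) -> str:
    # Stand-in for a perceptual audio/video fingerprint
    return hashlib.sha256(media).hexdigest()

# Works identified by right holders, with the action agreed with the platform
registry = {
    fingerprint(b"<master recording of Song X>"): "authorise (monetise)",
    fingerprint(b"<master recording of Film Y>"): "remove",
}

upload = b"<master recording of Film Y>"
print(registry.get(fingerprint(upload), "no match: allow"))  # -> remove
```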

Newspapers, magazines and other press publications have benefited from the shift from print to digital and online services like social media and news aggregators. It has led to broader audiences, but it has also impacted advertising revenue and made the licensing and enforcement of the rights in these publications increasingly difficult. The Commission proposes to introduce a new related right for publishers, similar to the right that already exists under EU law for film producers, record (phonogram) producers and other players in the creative industries like broadcasters.

The new right recognises the important role press publishers play in investing in and creating quality journalistic content, which is essential for citizens’ access to knowledge in our democratic societies. As they will be legally recognised as right holders for the very first time they will be in a better position when they negotiate the use of their content with online services using or enabling access to it, and better able to fight piracy. This approach will give all players a clear legal framework when licensing content for digital uses, and help the development of innovative business models for the benefit of consumers.

The draft Directive also obliges publishers and producers to be transparent and inform authors or performers about profits they made with their works. It also puts in place a mechanism to help authors and performers to obtain a fair share when negotiating remuneration with producers and publishers. This should lead to a higher level of trust among all players in the digital value chain.

Towards a Digital Single Market

As part of the Digital Single Market strategy presented in May 2015, today’s proposals complement the proposed regulation on portability of legal content (December 2015), the revised Audiovisual Media Services Directive and the Communication on online platforms (May 2016). Later this autumn the Commission will propose to improve enforcement of all types of intellectual property rights, including copyright.

Today’s EU copyright rules, presented along with initiatives to boost internet connectivity in the EU (press release, press conference at 15.15 CET), are part of the EU strategy to create a Digital Single Market (DSM). The Commission set out 16 initiatives (press release) and is on the right track to deliver all of them by the end of this year.

While Juncker mixes industry (publishers) with content creators (journalists, authors), Günther H. Oettinger, Commissioner for the Digital Economy and Society clearly states that ‘creative industries’ are to be the beneficiaries. Business interests have tended to benefit disproportionately under current copyright regimes. The disruption posed by digital content has caused these businesses some agony and they have responded by lobbying vigorously to maximize copyright. For the most part, individual musicians, authors, visual artists and other content creators are highly unlikely to benefit from this latest reform.

I’m not a big fan of Google or its ‘stepchild’ YouTube, but it should be noted that at least one career, Justin Bieber’s, would not have existed without free and easy access to videos. He may not have made a penny from his YouTube videos, but that hasn’t hurt his financial picture. Without YouTube, he would have been unlikely to get the exposure and recognition which have in turn led him to some serious financial opportunities.

I am somewhat less interested in the show business aspect than I am in the impact this could have on science as per section (2. Improving copyright rules on research, education and inclusion of disable [sic] people) of the European Commission press release. A Sept. 14, 2016 posting about a previous ruling on copyright in Europe by Mike Masnick for Techdirt provides some insight into the possible future impacts on science research,

Last week [Sept. 8, 2016 posting], we wrote about a terrible copyright ruling from the Court of Justice of the EU, which basically says that any for-profit entity that links to infringing material can be held liable for direct infringement, as the “for-profit” nature of the work is seen as evidence that they knew or should have known the work was infringing. We discussed the problems with this standard in our post, and there’s been a lot of commentary on what this will mean for Europe — with a variety of viewpoints being expressed. One really interesting set of concerns comes from Egon Willighagen, from Maastricht University, noting what a total and complete mess this is going to be for scientists, who rarely consider the copyright status of various data as databases they rely on are built up …

This is, of course, not the first time we’ve noted the problems of intellectual property in the science world. From various journals locking up research to the rise of patents scaring off researchers from sharing data, intellectual property keeps getting in the way of science, rather than supporting it. And that’s extremely unfortunate. I mean, after all, in the US specifically, the Constitution specifically says that copyrights and patents are supposed to be about “promoting the progress of science and the useful arts.”

Over and over again, though, we see that the law has been twisted and distorted and extended and expanded in such a way that is designed to protect a very narrow set of interests, at the expense of many others, including the public who would benefit from greater sharing and collaboration and open flow of data among scientific researchers. …

Masnick has also written up a Sept. 14, 2016 posting devoted to the EU copyright proposal itself,

This is not a surprise given the earlier leaks of what the EU Commission was cooking up for a copyright reform package, but the end result is here and it’s a complete disaster for everyone. And I do mean everyone. Some will argue that it’s a gift to Hollywood and legacy copyright interests — and there’s an argument that that’s the case. But the reality is that this proposal is so bad that it will end up doing massive harm to everyone. It will clearly harm independent creators and the innovative platforms that they rely on. And, because those platforms have become so important to even the legacy entertainment industry, it will harm them too. And, worst of all, it will harm the public greatly. It’s difficult to see how this proposal will benefit anyone, other than maybe some lawyers.

So the EU Commission has taken the exact wrong approach. It’s one that’s almost entirely about looking backwards and “protecting” old ways of doing business, rather than looking forward, and looking at what benefits the public, creators and innovators the most. If this proposal actually gets traction, it will be a complete disaster for the EU innovative community. Hopefully, Europeans speak out, vocally, about what a complete disaster this would be.

So, according to Masnick not even business interests will benefit.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit more broad in my interpretation of the ‘society’ part of my commentary I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.
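Since results can be downloaded, a researcher could also filter them locally. A minimal pandas sketch, assuming a hypothetical CSV export (the file name and column names are my invention, not the database’s actual schema):

```python
import pandas as pd

# Hypothetical D-PLACE search export; columns are illustrative only
df = pd.read_csv("dplace_export.csv")

# e.g., societies in Siberia, coldest first
siberia = df[df["region"] == "Siberia"].sort_values("mean_annual_temp_c")
print(siberia[["society", "language_family", "mean_annual_temp_c"]].head())
```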

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in Saint Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included linguists, anthropologists, biogeographers, data scientists, ethnobiologists, and evolutionary ecologists, who employ a variety of research methods including field-based primary data collection, compilation of cross-cultural data sources, and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin. PLOS ONE, 2016; 11 (7): e0158391. DOI: 10.1371/journal.pone.0158391. Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics in the 19th and early 20th centuries, that information can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public, it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission.  While I think the arguments in the case will likely revolve around factors far removed from science and or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s backgrounds inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts and possibly others believe that science is the discovery and accumulation of facts. In this worldview science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

Trans-Atlantic Platform (T-AP) is a unique collaboration of humanities and social science researchers from Europe and the Americas

Launched in 2013, the Trans-Atlantic Platform is co-chaired by Dr. Ted Hewitt, president of the Social Sciences and Humanities Research Council of Canada (SSHRC), and Dr. Renée van Kessel-Hagesteijn, Netherlands Organisation for Scientific Research—Social Sciences (NWO—Social Sciences).

An EU (European Union) publication, International Innovation, features an interview about T-AP with Ted Hewitt in a June 30, 2016 posting,

The Trans-Atlantic Platform is a unique collaboration of humanities and social science funders from Europe and the Americas. International Innovation’s Rebecca Torr speaks with Ted Hewitt, President of the Social Sciences and Humanities Research Council and Co-Chair of T-AP to understand more about the Platform and its pilot funding programme, Digging into Data.

Many commentators have called for better integration between natural and social scientists, to ensure that the societal benefits of STEM research are fully realised. Does the integration of diverse scientific disciplines form part of T-AP’s remit, and if so, how are you working to achieve this?

T-AP was designed primarily to promote and facilitate research across SSH. However, given the Platform’s thematic priorities and the funding opportunities being contemplated, we anticipate that a good number of non-SSH [emphasis mine] researchers will be involved.

As an example, on March 1, T-AP launched its first pilot funding opportunity: the T-AP Digging into Data Challenge. One of the sponsors is the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada’s federal funding agency for research in the natural sciences and engineering. Their involvement ensures that the perspective of the natural sciences is included in the challenge. The Digging into Data Challenge is open to any project that addresses research questions in the SSH by using large-scale digital data analysis techniques, and is then able to show how these techniques can lead to new insights. And the challenge specifically aims to advance multidisciplinary collaborative projects.

When you tackle a research question or undertake research to address a social challenge, you need collaboration between various SSH disciplines or between SSH and STEM disciplines. So, while proposals must address SSH research questions, the individual teams often involve STEM researchers, such as computer scientists.

In previous rounds of the Digging into Data Challenge, this has led to invaluable research. One project looked at how the media shaped public opinion around the 1918 Spanish flu pandemic. Another used CT scans to examine hundreds of mummies, ultimately discovering that atherosclerosis, a form of heart disease, was prevalent 4,000 years ago. In both cases, these multidisciplinary historical research projects have helped inform our thinking of the present.
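For a flavour of what ‘large-scale digital data analysis’ can mean in practice, here is a toy sketch of the kind of corpus query behind studies like the 1918 flu media project: counting, per year, how many digitised articles mention a term. The corpus layout and query term are my invention, for illustration only.

```python
from collections import Counter
from pathlib import Path

# Toy corpus: one plain-text file per digitised article, named "<year>_<id>.txt"
mentions_per_year = Counter()
for path in Path("corpus").glob("*.txt"):
    year = path.name.split("_", 1)[0]
    if "influenza" in path.read_text(encoding="utf-8").lower():
        mentions_per_year[year] += 1

for year, count in sorted(mentions_per_year.items()):
    print(year, count)
```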

Of course, Digging into Data isn’t the only research area in which T-AP will be involved. Since its inception, T-AP partners have identified three priority areas beyond digital scholarship: diversity, inequality and difference; resilient and innovative societies; and transformative research on the environment. Each of these areas touches on a variety of SSH fields, while the transformative research on the environment area has strong connections with STEM fields. In September 2015, T-AP organised a workshop around this third priority area; environmental science researchers were among the workshop participants.

I wish Hewitt hadn’t described researchers from disciplines other than the humanities and social sciences as “non-SSH.” The designation divides the world in two: us and non-[take your pick]: non-Catholic/Muslim/American/STEM/SSH/etc.

Getting back to the interview, it is surprisingly Canuck-centric in places,

How does T-AP fit in with Social Sciences and Humanities Research Council of Canada (SSHRC)’s priorities?

One of the objectives in SSHRC’s new strategic plan is to develop partnerships that enable us to expand the reach of our funding. As T-AP provides SSHRC with links to 16 agencies across Europe and the Americas, it is an efficient mechanism for us to broaden the scope of our support and promotion of post-secondary-based research and training in SSH.

It also provides an opportunity to explore cutting edge areas of research, such as big data (as we did with the first call we put out, Digging into Data). The research enterprise is becoming increasingly international, by which I mean that researchers are working on issues with international dimensions or collaborating in international teams. In this globalised environment, SSHRC must partner with international funders to support research excellence. By developing international funding opportunities, T-AP helps researchers create teams better positioned to tackle the most exciting and promising research topics.

Finally, it is a highly effective way of broadly promoting the value of SSH research throughout Canada and around the globe. There are significant costs and complexities involved in international research, and uncoordinated funding from multiple national funders can actually create barriers to collaboration. A platform like T-AP helps funders coordinate and streamline processes.

The interview gets a little more international scope when it turns to the data project,

What is the significance of your pilot funding programme in digital scholarship and what types of projects will it support?

The T-AP Digging into Data Challenge is significant for several reasons. First, the geographic reach of Digging is truly significant. With 16 participants from 11 countries, this round of Digging has significantly broader participation than previous rounds. This is also the first time Digging into Data includes funders from South America.

The T-AP Digging into Data Challenge is open to any research project that addresses questions in SSH. What those projects will end up being is anybody’s guess – projects from past competitions have involved fields ranging from musicology to anthropology to political science.

The Challenge’s main focus is, of course, the use of big data in research.

You may want to read the interview in its entirety here.

I have checked out the Trans-Atlantic Platform website but cannot determine how someone or some institution might consult that site for information on how to get involved in their projects or get funding. However, there is a T-AP Digging into Data website where there is evidence of the first international call for funding submissions. Sadly, the deadline for the 2016 call has passed if the website is to be believed (sometimes people are late when changing deadline dates).

nanoIndEx publishes guidance document on assessing exposure to airborne nanomaterials

Lynn Bergeson’s June 21, 2016 posting on Nanotechnology Now announced a newly published guidance document from the European Union’s nanoIndEx,

… The guidance document summarizes the key findings of the project, and is intended to present the state of the art in personal exposure assessment for nanomaterials. The conclusions section states: “Unfortunately, many nanotoxicological studies have used excessive, unrealistically high doses of [manufactured nanomaterials] and it is therefore debatable what their findings mean for the lower real-world exposures of humans. Moreover, it is not clear how to establish realistic exposure dose testing in toxicological studies, as available data on occupational exposure levels are still sparse.” According to the guidance document, future studies should focus on the potentially adverse effects of low-level and realistic exposure to manufactured nanomaterials, especially through the use of exposure doses similar to those identified in environmental sampling.

You can find the 49 pp. PDF here or here. To whet your appetite, here’s a bit from the introduction to the “Exposure to Airborne Nanomaterials: A Guidance Document,”

… While human exposure to MNMs may in principle occur during any stage of the material’s lifecycle, it is most likely in workplaces, where these materials are produced or handled in large quantities or over long periods of time. Inhalation is considered as the most critical uptake route, because the small particles are able to penetrate deep into the lung and deposit in the gas exchange region. Inhalation exposure to airborne nanomaterials therefore needs to be assessed in view of worker protection.

Exposure to airborne particles can generally best be assessed by measuring the individual exposure in the personal breathing zone (PBZ) of an individual. The PBZ is defined as a 30 cm hemisphere around mouth and nose [2]. Measurements in the PBZ require instruments that are small and light-weight. The individual exposure specifically to MNMs [manufactured nanomaterials, sometimes also known as engineered nanomaterials or nanoparticles] has not been assessable in the past due to the lack of suitable personal samplers and/or monitors. Instead, most studies related to exposure to MNMs have been carried out using either bulky static measurement equipment or not nanospecific personal samplers. In recent years, novel samplers and monitors have been introduced that allow for an assessment of the more nanospecific personal exposure to airborne MNMs. In the terminology used in nanoIndEx, samplers are devices that collect particles on a substrate, e.g. a filter or flat surface, for subsequent analysis, whereas monitors are real-time instruments that deliver information on the airborne concentrations with high time resolution. Scientifically sound investigations on the accuracy, comparability and field applicability of these novel samplers and monitors had been lacking. … (p. 4 print; p. 6 PDF)
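As an aside, the 30 cm definition is concrete enough to compute with. Here is a toy check (my own sketch, not from the guidance) of whether a sampler inlet sits within the PBZ, simplified to a sphere around the mouth/nose reference point:

```python
import math

def in_personal_breathing_zone(inlet_xyz, mouth_xyz, radius_cm=30.0):
    """Rough PBZ membership test, modelled as a 30 cm sphere; the guidance
    defines a hemisphere in front of the face, so a real check would also
    test the facing direction (omitted here for brevity)."""
    return math.dist(inlet_xyz, mouth_xyz) <= radius_cm

# Inlet clipped to a lapel roughly 14 cm from the mouth (coordinates in cm)
print(in_personal_breathing_zone((10.0, 5.0, -8.0), (0.0, 0.0, 0.0)))  # True
```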

There’s also a brief description of the nanoIndEx project in the Introduction,

The three-year project started on June 1st, 2013, and has been funded under the frame of SIINN, the ERA-NET [European Research Area Network] for a Safe Implementation of Innovative Nanoscience and Nanotechnology. The aim of the project was to scrutinise the instrumentation available for personal exposure assessment concerning their field readiness and usability in order to use this information to generate reliable data on personal exposure in real workplaces and to eventually widely distribute the findings among the interested public. This Guidance Document you are holding in your hands summarises the key findings of the project. (p. 5 print; p. 7 PDF)

As I understand it, the area of most concern where nanotoxicology is concerned would be inhalation of nanoparticles into the lungs as the body has fewer protections in the respiratory tract than it has elsewhere, e.g. skin or digestive system.

Lungs: EU SmartNanoTox and PneumoNP

I have three news bits about lungs: one concerning relatively new techniques for testing the impact nanomaterials may have on lungs, and two concerning developments at PneumoNP, the first regarding a new technique for getting antibiotics to a lung infected with pneumonia and the second, a new antibiotic.

Predicting nanotoxicity in the lungs

From a June 13, 2016 news item on Nanowerk,

Scientists at the Helmholtz Zentrum München [German Research Centre for Environmental Health] have received more than one million euros in the framework of the European Horizon 2020 Initiative [a major European Commission science funding initiative, successor to the Framework Programme 7 initiative]. Dr. Tobias Stöger and Dr. Otmar Schmid from the Institute of Lung Biology and Disease and the Comprehensive Pneumology Center (CPC) will be using the funds to develop new tests to assess risks posed by nanomaterials in the airways. This could contribute to reducing the need for complex toxicity tests.

A June 13, 2016 Helmholtz Zentrum München (German Research Centre for Environmental Health) press release, which originated the news item, expands on the theme,

Nanoparticles are extremely small particles that can penetrate into remote parts of the body. While researchers are investigating various strategies for harvesting the potential of nanoparticles for medical applications, they could also pose inherent health risks*. Currently the hazard assessment of nanomaterials necessitates a complex and laborious procedure. In addition to complete material characterization, controlled exposure studies are needed for each nanomaterial in order to guarantee the toxicological safety.

As a part of the EU SmartNanoTox project, which has now been funded with a total of eight million euros, eleven European research partners, including the Helmholtz Zentrum München, want to develop a new concept for the toxicological assessment of nanomaterials.

Reference database for hazardous substances

Biologist Tobias Stöger and physicist Otmar Schmid, both research group heads at the Institute of Lung Biology and Disease, hope that the use of modern methods will help to advance the assessment procedure. “We hope to make more reliable nanotoxicity predictions by using modern approaches involving systems biology, computer modelling, and appropriate statistical methods,” states Stöger.

The lung experts are concentrating primarily on the respiratory tract. The approach involves defining a representative selection of toxic nanomaterials and conducting an in-depth examination of their structure and the various molecular modes of action that lead to their toxicity. These data are then digitalized and transferred to a reference database for new nanomaterials. Economical tests that are easy to conduct should then make it possible to assess the toxicological potential of these new nanomaterials by comparing the test results with what is already known from the database. “This should make it possible to predict whether or not a newly developed nanomaterial poses a health risk,” Otmar Schmid says.
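To give a rough sense of what ‘comparing the test results with what is already known from the database’ could look like computationally, here is a toy nearest-neighbour sketch; the descriptors, entries and decision rule are all my invention, not the SmartNanoTox method.

```python
import math

# Invented reference database of nanomaterial descriptors
reference = {
    "MNM-A": {"diameter_nm": 20.0, "ssa_m2_g": 70.0, "toxic": True},
    "MNM-B": {"diameter_nm": 200.0, "ssa_m2_g": 8.0, "toxic": False},
}

def closest_match(diameter_nm, ssa_m2_g):
    """Nearest neighbour in (diameter, specific surface area) space."""
    return min(reference, key=lambda name: math.hypot(
        reference[name]["diameter_nm"] - diameter_nm,
        reference[name]["ssa_m2_g"] - ssa_m2_g))

match = closest_match(25.0, 60.0)  # a hypothetical new material
print(match, "->", "flag for testing" if reference[match]["toxic"] else "lower priority")
```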

* Review: Schmid, O. and Stoeger, T. (2016). Surface area is the biologically most effective dose metric for acute nanoparticle toxicity in the lung. Journal of Aerosol Science, DOI:10.1016/j.jaerosci.2015.12.006
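The cited review’s claim, that surface area rather than mass is the biologically most effective dose metric, is easy to see in numbers: for idealised monodisperse spheres the specific surface area is 6/(ρ·d), so the same mass dose carries ten times more surface area at one tenth the diameter. A sketch under that idealised assumption (the example values are made up):

```python
def surface_area_dose_cm2(mass_dose_ug, density_g_cm3, diameter_nm):
    """Surface-area dose for idealised monodisperse spheres: SSA = 6 / (rho * d)."""
    d_cm = diameter_nm * 1e-7                      # nm -> cm
    ssa_cm2_per_g = 6.0 / (density_g_cm3 * d_cm)   # specific surface area, cm2/g
    return mass_dose_ug * 1e-6 * ssa_cm2_per_g     # ug -> g, then g * cm2/g

# The same 10 ug dose of a 4 g/cm3 material at two particle sizes:
print(surface_area_dose_cm2(10, 4.0, 20))    # ~7.5 cm2 at 20 nm
print(surface_area_dose_cm2(10, 4.0, 200))   # ~0.75 cm2 at 200 nm
```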

The SmartNanoTox webpage is here on the European Commission’s Cordis website.

Carrying antibiotics into lungs (PneumoNP)

I received this news from the European Commission’s PneumoNP project (I wrote about PneumoNP in a June 26, 2014 posting when it was first announced). This latest development is from a March 21, 2016 email (the original can be found here on the How to pack antibiotics in nanocarriers webpage on the PneumoNP website),

PneumoNP researchers work on a complex task: attach or encapsulate antibiotics with nanocarriers that are stable enough to be included in an aerosol formulation, to pass through respiratory tracts and finally deliver antibiotics to areas of lungs affected by pneumonia infections. The good news is that they have finally identified two promising methods to generate nanocarriers.

So far, compacting polymer coils into single-chain nanoparticles in water under mild conditions was an unsolved issue. But in Spain, IK4-CIDETEC scientists developed a covalent-based method that produces nanocarriers with remarkable stability under those particular conditions. As a cherry on the cake, the preparation is scalable for industrial production. IK4-CIDETEC patented the process.

Fig.: A polymer coil (step 1) compacts into a nanocarrier with cross-linkers (step 2). Then, antibiotics get attached to the nanocarrier (step 3).

At the same time, another route to produce lipidic nanocarriers has been developed by researchers from Utrecht University. In particular, they optimized a method consisting of assembling lipids directly around a drug. As a result, the generated lipidic nanocarriers show encouraging stability properties and are able to carry a sufficient quantity of antibiotics.

Fig.: In the presence of antibiotics, a lipidic layer (step 1) aggregates the drug (step 2) until the lipids form a capsule around the antibiotics (step 3).

Assays of both polymeric and lipidic nanocarriers are currently performed by the ITEM Fraunhofer Institute in Germany, Ingeniatrics Tecnologias in Spain and Erasmus Medical Centre in the Netherlands. Part of these tests is to make sure that the nanocarriers are not toxic to cells. Other tests verify the efficiency of the antibiotics against Klebsiella pneumoniae bacteria when they are attached to nanocarriers.

A new antibiotic for pneumonia (PneumoNP)

A June 14, 2016 PneumoNP press release (received via email) announces work on a promising new approach to an antibiotic for pneumonia,

The antimicrobial peptide M33 may be the long-sought substitute to treat difficult lung infections, like multi-drug resistant pneumonia.

In 2013, the European Respiratory Society predicted 3 million cases of pneumonia in Europe every year [1]. The standard treatment for pneumonia is an intravenous administration of a combination of drugs. This leads to the development of antibiotic resistance in the population. Gradually, doctors are running out of solutions to cure patients. An Italian company suggests a new option: the M33 peptide.

A few years ago, the Italian company SetLance SRL decided to investigate the M33 peptide. The antimicrobial peptide is an optimized version of an artificial peptide sequence selected for its efficacy and stability. So far, it has shown encouraging in-vitro results against multidrug-resistant Gram-negative bacteria, including Klebsiella pneumoniae. With the support of EU funding to the PneumoNP project, SetLance SRL had the opportunity to develop a new formulation of M33 that enhances its antimicrobial activity.

The new formulation of M33 fights Gram-negative bacteria in three steps. First of all, the M33 binds with the lipopolysaccharides (LPS) on the outer membrane of bacteria. Then, the molecule forms a helix and finally disrupts the membrane, provoking cytoplasm leakage. The peptide enabled up to 80% of mice to survive Pseudomonas aeruginosa-based lung infections. Beyond these encouraging results, toxicity of the new M33 formulation seems to be much lower than that of antimicrobial peptides currently used in clinical practice, like colistin [2].

Lately, SetLance scaled up the synthesis route and is now able to produce several hundred milligrams per batch. The molecule is robust enough for industrial production. We may expect this drug to enter clinical development and validation at the beginning of 2018.

[1] http://www.erswhitebook.org/chapters/acute-lower-respiratory-infections/pneumonia/
[2] Ceccherini et al., Antimicrobial activity of levofloxacin-M33 peptide conjugation or combination, Chem Med Comm. 2016; Brunetti et al., In vitro and in vivo efficacy, toxicity, bio-distribution and resistance selection of a novel antibacterial drug candidate. Scientific Reports 2016

I believe all the references are open access.

Brief final comment

The only element linking these news bits together is that they concern the lungs.

Introducing the LIFE project NanoMONITOR

LIFE in the project title refers to the European Union’s LIFE programme, its funding instrument for the environment and climate action. Here’s more from a June 9, 2016 news item from Nanowerk (Note: A link has been removed),

The newly started European Commission LIFE project NanoMONITOR addresses the challenges of supporting the risk assessment of nanomaterials under REACH by development of a real-time information and monitoring system. At the project’s kickoff meeting held on the 19th January 2016 in Valencia (Spain) participants discussed how this goal could be achieved.

Despite the growing number of engineered nanomaterials (ENMs) already available on the market, and in contrast to their benefits, the use, production, and disposal of ENMs raise concerns about their environmental impact.

A REACH Centre June 8, 2016 press release, which originated the news item, expands on the theme,

Within this context, the overall aim of LIFE NanoMONITOR is to improve the use of environmental monitoring data to support the implementation of the REACH regulation and promote the protection of human health and the environment when dealing with ENMs. Within the EU REACH Regulation, a chemical safety assessment report, including the risk characterisation ratio (RCR; a minimal sketch of this calculation follows the list below), must be provided for any registered ENMs. In order to address these objectives, the project partners have developed a rigorous methodology encompassing the following aims:

  • Develop a novel software application to support the acquisition, management and processing of data on the concentration of ENMs.
  • Develop an on-line environmental monitoring database (EMD) to support the sharing of information.
  • Design and develop a proven monitoring station prototype for continuous monitoring of particles below 100 nm in air (PM0.1).
  • Design and develop standardized sampling and data analysis procedures to ensure the quality, comparability and reliability of the monitoring data used for risk assessment.
  • Support the calculation of the predicted environmental concentration (PEC) of ENMs in the context of REACH.
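For readers unfamiliar with the REACH jargon above: the risk characterisation ratio compares a predicted environmental concentration (PEC) with a predicted no-effect concentration (PNEC), and a value below 1 is read as adequately controlled risk. A minimal sketch of the arithmetic (the numbers are invented):

```python
def risk_characterisation_ratio(pec, pnec):
    """RCR = PEC / PNEC; values of 1 or more flag a potential risk under REACH."""
    if pnec <= 0:
        raise ValueError("PNEC must be positive")
    return pec / pnec

# Invented example: a measured airborne concentration vs. a hypothetical PNEC,
# both in ug/m3
rcr = risk_characterisation_ratio(pec=0.5, pnec=2.0)
print(f"RCR = {rcr:.2f}:", "adequately controlled" if rcr < 1 else "risk not controlled")
```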

Throughout the project’s kick off meeting, participants discussed the status of the research area, project goals, and expectations of the different stakeholders with respect to the project outcome.

The project has made this graphic available,

[LIFE NanoMONITOR project graphic]

You can find the LIFE project NanoMONITOR website here.

UK and US issue documents on nanomaterial safety to support safe work with nanomaterials

I am featuring two bits of information about nanosafety, first from the UK and then from the US.

UK and nanosafety

A May 30, 2016 news item on Nanowerk announces a not particularly exciting but necessary report on handling nanomaterials safely (Note: A link has been removed),

The UK Nanosafety Group (UKNSG) has updated and published a 2nd edition of guidance (pdf) to support safe and responsible working practices with nanomaterials in research and development laboratories.

A May 25, 2016 UK Nanosafety Group press release, which originated the news item, provides more detail,

The document aims to provide guidance on factors relating to establishing a safe workplace and good safety practice when working with particulate nanomaterials. It is applicable to a wide range of nanomaterials, including particles, fibres, powders, tubes and wires as well as aggregates and agglomerates, and recognises previous and current uncertainty in developing effective risk management when dealing with nanomaterials and advocates a precautionary strategy to minimise potential exposure.

The 2nd edition of the guidance provides updates to account for changes in legislation, recent studies in the literature, and best practice since 2012. In particular, specific sections have been revised to account for the full implementation of Global Harmonised System (GHS) which came into force on 1 June 2015 through the CLP [Classification, Labelling and Packaging] regulations. The document explains the approaches that are presently being used to select effective control measures for the management of nanomaterials, more specifically control banding tools presently in use. Significant changes can be found in the following sections: ‘Hazard Banding’, ‘Exposure Control’, ‘Toxicology’, and ‘Monitoring’.
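Control banding, mentioned in the passage above, pairs a hazard band with an exposure or activity band and reads off a recommended level of control. A toy illustration of the idea; the band labels and recommendations are invented, not the UKNSG’s or any published scheme’s:

```python
# Invented control-banding lookup: (hazard band, exposure band) -> control level
CONTROL_MATRIX = {
    ("low",  "low"):  "good general ventilation",
    ("low",  "high"): "local exhaust ventilation",
    ("high", "low"):  "local exhaust ventilation",
    ("high", "high"): "full containment; seek specialist advice",
}

def recommended_control(hazard_band, exposure_band):
    return CONTROL_MATRIX[(hazard_band, exposure_band)]

print(recommended_control("high", "low"))  # -> local exhaust ventilation
```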

Of relevance to employers, managers, health and safety advisors, and users of particulate nanomaterials in research and development, the guidance should be read in conjunction with the Approved Code of Practice on COSHH [Control of Substances Hazardous to Health], together with the other literature referred to in the document. The document has been produced taking account of the safety information currently available and is presented in the format of guidance and recommendations to support implementation of suitable protocols and control measures by employers and employees. It is intended that the document will be reviewed and updated on a periodic basis to keep abreast of the evolving nature of the content.

The guidance titled “Working Safely with Nanomaterials in Research & Development” is about 48 pp. and can be found here.

Tidbit about US nano environmental, health, and safety

Sylvia Palmer has written a May 27, 2016 update for ChemicalWatch on reports about, or including information about, environmental, health, and safety measures being taken in the US,

Three reports released recently by the National Nanotechnology Initiative (NNI) highlight the US government’s investments and initiatives in nanotechnology. They also detail current progress and the need for further understanding of exposure to nanomaterials in consumer products – and how companies can protect their nanotechnology workforce.

NNI’s Quantifying exposure to engineered nanomaterials (QEEN) from manufactured products: addressing environmental, health, and safety implications notes significant progress has been made in the ability to quantify nanomaterial exposures. However, it says greater understanding of exposure risks in “real-world” scenarios is needed. Alternative testing models and high-throughput methods for rapidly estimating exposures will be further explored, it adds.

You can find the report, Quantifying exposure to engineered nanomaterials (QEEN) from manufactured products: addressing environmental, health, and safety implications, here. Palmer’s article briefly describes the other two reports which contain information about US nano environmental, health, and safety efforts.

There is more about the three reports in an April 11, 2016 posting by Lloyd Whitman (Assistant Director for Nanotechnology and Advanced Materials, White House Office of Science and Technology Policy) and Treye Thomas (leader of the Chemical Hazards Program team in the U.S. Consumer Product Safety Commission, and Coordinator for Environmental, Health, and Safety Research under the National Nanotechnology Initiative) on the White House blog,

The recently released NNI Supplement to the President’s Budget for Fiscal Year 2017, which serves as the annual report for the NNI, highlights the programs and coordinated activities taking place across the many departments, independent agencies, and commissions participating today in the NNI—an initiative that continues to serve as a model for effective coordination of Federal science and technology R&D. As detailed in this report, nanoEHS activities continue to account for about 10 percent of the annual NNI budget, with cumulative Federal R&D investments in this area exceeding $1 billion over the past decade. This report includes descriptions of a wide variety of individual agency and coordinated activities supporting the responsible development of nanotechnology.

To understand and control the risks of using any new materials in consumer products, it is important to understand the potential for exposure and any associated hazards across product life cycles. Last month, the NNI released a report, Quantifying Exposure to Engineered Nanomaterials (QEEN) from Manufactured Products: Addressing Environmental, Health, and Safety Implications, summarizing a workshop on this topic sponsored by the U.S. Consumer Product Safety Commission (CPSC). The main goals of the workshop were to assess progress in developing tools and methods for quantifying exposure to engineered nanomaterials across the product life cycle, and to identify new research needed to advance exposure assessment for nanotechnology-enabled products. …

The technical experts who participated in CPSC’s workshop recommended that future work focus on the complex issue of determining biomarkers of exposure linked to disease, which will require substantive public–private collaboration, partnership, and knowledge sharing. Recognizing these needs, the President’s 2017 Budget request for CPSC includes funds for a new nanotechnology center led by the National Institute of Environmental Health Sciences (NIEHS) to develop test methods and to quantify and characterize the presence, release, and mechanisms of consumer exposure to nanomaterials in consumer products. This cost-effective, interagency collaboration will enable CPSC—through NIEHS—to collect the needed data to inform the safety of nanotechnology in consumer products and allow CPSC to benefit from NIEHS’s scientific network and experience.

Managing EHS risks across a product’s lifecycle includes protecting the workers who manufacture those products. The National Institute for Occupational Safety and Health has issued a series of documents providing guidance to this emerging industry, including the recently released publication Building a Safety Program to Protect the Nanotechnology Workforce: A Guide for Small to Medium-Sized Enterprises. This guide provides business owners with the tools necessary to develop and implement a written health and safety program to protect their employees.

Whitman also mentions a June 2016 international conference in the context of this news,

The responsible development of nanotechnology is a goal that the United States shares with many countries. The United States and the European Union are engaged in notable cooperation on this front. European and American scientists engaged in nanoEHS research convene annually for a joint workshop to identify areas of shared interest and mechanisms for collaboration to advance nanoEHS science. The 2016 joint workshop will be held on June 6–7, 2016 in Arlington, VA, and is free and open to the public. …