Tag Archives: EU

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride, as the channel and separator (two important parts of a transistor) to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation over the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science 07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73. DOI: 10.1126/science.aal4062

This paper is behind a paywall.

Canada and its Vancouver tech scene get a boost

Prime Minister Justin Trudeau has been running around attending tech events both in the Vancouver area (Canada) and in Seattle these last few days (May 17 and May 18, 2017). First he attended the Microsoft CEO Summit as noted in a May 11, 2017 news release from the Prime Minister’s Office (Note: I have a few comments about this performance and the Canadian tech scene at the end of this post),

The Prime Minister, Justin Trudeau, today [May 11, 2017] announced that he will participate in the Microsoft CEO Summit in Seattle, Washington, on May 17 and 18 [2017], to promote the Cascadia Innovation Corridor, encourage investment in the Canadian technology sector, and draw global talent to Canada.

This year’s summit, under the theme “The CEO Agenda: Navigating Change,” will bring together more than 150 chief executive officers. While at the Summit, Prime Minister Trudeau will showcase Budget 2017’s Innovation and Skills Plan and demonstrate how Canada is making it easier for Canadian entrepreneurs and innovators to turn their ideas into thriving businesses.

Prime Minister Trudeau will also meet with Washington Governor Jay Inslee.

Quote

“Canada’s greatest strength is its skilled, hard-working, creative, and diverse workforce. Canada is recognized as a world leader in research and development in many areas like artificial intelligence, quantum computing, and 3D programming. Our government will continue to help Canadian businesses grow and create good, well-paying middle class jobs in today’s high-tech economy.”
— Rt. Honourable Justin Trudeau, Prime Minister of Canada

Quick Facts

  • Canada-U.S. bilateral trade in goods and services reached approximately $882 billion in 2016.
  • Nearly 400,000 people and over $2 billion-worth of goods and services cross the Canada-U.S. border every day.
  • Canada-Washington bilateral trade was $19.8 billion in 2016. Some 223,300 jobs in the State of Washington depend on trade and investment with Canada. Canada is among Washington’s top export destinations.
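A quick arithmetic aside of my own (not the PMO's): the yearly and daily figures are at least consistent with each other,

$$ \frac{\$882\ \text{billion/year}}{365\ \text{days/year}} \approx \$2.4\ \text{billion/day}, $$

which squares with the “over $2 billion” worth of goods and services said to cross the border daily.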


Here’s a little more about the Microsoft meeting from a May 17, 2017 article by Alan Boyle for GeekWire.com (Note: Links have been removed),

So far, this year’s Microsoft CEO Summit has been all about Canadian Prime Minister Justin Trudeau’s talk today, but there’s been precious little information available about who else is attending – and Trudeau may be one of the big reasons why.

Microsoft co-founder Bill Gates created the annual summit back in 1997, to give global business leaders an opportunity to share their experiences and learn about new technologies that will have an impact on business in the future. The event’s attendee list is kept largely confidential, as is the substance of the discussions.

This year, Microsoft says the summit’s two themes are “trust in technology” (as in cybersecurity, international hacking, privacy and the flow of data) and “the race to space” (as in privately funded space efforts such as Amazon billionaire Jeff Bezos’ Blue Origin rocket venture).

Usually, Microsoft lists a few folks who are attending the summit on the company’s Redmond campus, just to give a sense of the event’s cachet. For example, last year’s headliners included Berkshire Hathaway CEO Warren Buffett and Exxon Mobil CEO Rex Tillerson (who is now the Trump administration’s secretary of state).

This year, however, the spotlight has fallen almost exclusively on the hunky 45-year-old Trudeau, the first sitting head of government or state to address the summit. Microsoft isn’t saying anything about the other 140-plus VIPs attending the discussions. “Out of respect for the privacy of our guests, we are not providing any additional information,” a Microsoft spokesperson told GeekWire via email.

Even Trudeau’s remarks at the summit are hush-hush, although officials say he’s talking up Canada’s tech sector. …

Laura Kane’s May 18, 2017 article for therecord.com provides a little more information about Trudeau’s May 18, 2017 activities in Washington state,

Prime Minister Justin Trudeau continued his efforts to promote Canada’s technology sector to officials in Washington state on Thursday [May 18, 2017], meeting with Gov. Jay Inslee a day after attending the secretive Microsoft CEO Summit.

Trudeau and Inslee discussed, among other issues, the development of the Cascadia Innovation Corridor, an initiative that aims to strengthen technology industry ties between British Columbia and Washington.

The pair also spoke about trade and investment opportunities and innovation in the energy sector, said Trudeau’s office. In brief remarks before the meeting, the prime minister said Washington and Canada share a lot in common.

But protesters clad in yellow hazardous material suits that read “Keystone XL Toxic Cleanup Crew” gathered outside the hotel to criticize Trudeau’s environmental record, arguing his support of pipelines is at odds with any global warming promises he has made.

Later that afternoon, Trudeau visited Electronic Arts (a US games company with offices in the Vancouver area) for more tech talk as Stephanie Ip notes in her May 18, 2017 article for The Vancouver Sun,

Prime Minister Justin Trudeau was in Metro Vancouver Thursday [May 18, 2017] to learn from local tech and business leaders how the federal government can boost B.C.’s tech sector.

The roundtable discussion was organized by the Vancouver Economic Commission and hosted in Burnaby at Electronic Arts’ Capture Lab, where the video game company behind the popular FIFA, Madden and NHL franchises records human movement to add more realism to its digital characters. Representatives from Amazon, Launch Academy, Sony Pictures, Darkhorse 101 Pictures and Front Fundr were also there.

While the roundtable was not open to media, Trudeau met beforehand with media.

“We’re going to talk about how the government can be a better partner or better get out of your way in some cases to allow you to continue to grow, to succeed, to create great opportunities to allow innovation to advance success in Canada and to create good jobs for Canadians and draw in people from around the world and continue to lead the way in the world,” he said.

“Everything from clean tech, to bio-medical advances, to innovation in digital economy — there’s a lot of very, very exciting things going on.”

Comments on the US tech sector and the supposed Canadian tech sector

I wonder at all the secrecy. As for the companies mentioned as being at the roundtable, you’ll notice a preponderance of US companies, with Launch Academy and Front Fundr (which is not a tech company but an equity crowdfunding company) supplying the Canadian content. As for Darkhorse 101 Pictures, I strongly suspect (after an online search) it is part of Dark Horse Comics (a US company), which has an entertainment division.

Perhaps it didn’t seem worthwhile to mention the Canadian companies? In that case, that’s a sad reflection on how poorly we and our media support our tech sector.

In fact, it seems Trudeau’s version of the Canadian technology sector is for us to continue in our role as a branch plant, remaining forever in service of the US economy, or at least of the US tech sector, which may be experiencing some concerns with the Trump administration and what appears to be an increasingly isolationist perspective on trade and immigration. It’s a perspective the tech sector, especially the entertainment component, can ill afford.

As for the Cascadia Innovation Corridor mentioned in the Prime Minister’s news release and in Kane’s article, I have more about that in a Feb. 28, 2017 posting about the Cascadia Data Analytics Cooperative.

I noticed he mentioned clean tech as an area of excitement. Well, we just lost a significant player, not to the US this time but to the EU (European Union), or more specifically, Germany. (There’ll be more about that in an upcoming post.)

I’m glad to see that Trudeau remains interested in Canadian science and technology but perhaps he could concentrate on new ways of promoting sectoral health rather than relying on the same old thing.

European Commission has issued evaluation of nanomaterial risk frameworks and tools

Despite complaints that there should have been more, there has been some research into risks where nanomaterials are concerned. While additional research would be welcome, it’s perhaps more imperative that standardized testing and risk frameworks be developed so that, for example, carbon nanotube safety research in Japan can be compared with similar research in the Netherlands, the US, and elsewhere. This March 15, 2017 news item on Nanowerk features some research analyzing risk assessment frameworks and tools in Europe,

A recent study has evaluated frameworks and tools used in Europe to assess the potential health and environmental risks of manufactured nanomaterials. The study identifies a trend towards tools that provide protocols for conducting experiments, which enable more flexible and efficient hazard testing. Among its conclusions, however, it notes that no existing frameworks meet all the study’s evaluation criteria and calls for a new, more comprehensive framework.

A March 9, 2017 news alert in the European Commission’s Science for Environment Policy series, which originated the news item, provides more detail (Note: Links have been removed),

Nanotechnology is identified as a key emerging technology in the EU’s growth strategy, Europe 2020. It has great potential to contribute to innovation and economic growth and many of its applications have already received large investments. However, there are some uncertainties surrounding the environmental, health and safety risks of manufactured nanomaterials. For effective regulation, careful scientific analysis of their potential impacts is needed, as conducted through risk assessment exercises.

This study, conducted under the EU-funded MARINA project, reviewed existing frameworks and tools for the risk assessment of manufactured nanomaterials. The researchers define a framework as a ‘conceptual paradigm’ of how a risk assessment should be conducted and understood, and give the REACH chemical safety assessment as an example. Tools are defined as implements used to carry out a specific task or function, such as experimental protocols, computer models or databases.

In all, 12 frameworks and 48 tools were evaluated. These were identified from other studies and projects. The frameworks were assessed against eight criteria which represent different strengths, such as whether they consider properties specific to nanomaterials, whether they consider the entire life cycle of a nanomaterial and whether they include careful planning and prioritise objectives before the risk assessment is conducted.

The tools were assessed against seven criteria, such as ease of use, whether they provide quantitative information and if they clearly communicate uncertainty in their results. The researchers defined the criteria for both frameworks and tools by reviewing other studies and by interviewing staff at organisations that develop tools.

The evaluation was thus able to produce a list of strengths and areas for improvement for the frameworks and tools, based on whether they meet each of the criteria. Among its many findings, the evaluation showed that most of the frameworks stress that ‘problem formulation’, which sets the goals and scope of an assessment during the planning process, is essential to avoid unnecessary testing. In addition, most frameworks consider routes of exposure in the initial stages of assessment, which is beneficial as it can exclude irrelevant exposure routes and avoid unnecessary tests.

However, none of the frameworks met all eight of the criteria. The study therefore recommends that a new, comprehensive framework is developed that meets all criteria. Such a framework is needed to inform regulation, the researchers say, and should integrate human health and environmental factors, and cover all stages of the life cycle of a product containing nanomaterials.

The evaluation of the tools suggested that many of them are designed to screen risks, and not necessarily to support regulatory risk assessment. However, their strengths include a growing trend in quantitative models, which can assess uncertainty; for example, one tool analysed can identify uncertainties in its results that are due to gaps in knowledge about a material’s origin, characteristics and use.

The researchers also identified a growing trend in tools that provide protocols for experiments, such as identifying materials and test hazards, which are reproducible across laboratories. These tools could lead to a shift from expensive case-by-case testing for risk assessment of manufactured nanomaterials towards a more efficient process based on groupings of nanomaterials; and ‘read-across’ methods, where the properties of one material can be inferred without testing, based on the known properties of a similar material. The researchers do note, however, that although read-across methods are well established for chemical substances, they are still being developed for nanomaterials. To improve nanomaterial read-across methods, they suggest that more data are needed on the links between nanomaterials’ specific properties and their biological effects.

That’s all, folks.

Graphene-based neural probes

I have two news bits (dated almost one month apart) about the use of graphene in neural probes, one from the European Union and the other from Korea.

European Union (EU)

This work is being announced by the European Commission’s (a subset of the EU) Graphene Flagship, one of two mega-funding projects announced in 2013, each receiving €1B over ten years; the other is the Human Brain Project.

According to a March 27, 2017 news item on ScienceDaily, researchers have developed a graphene-based neural probe that has been tested on rats,

Measuring brain activity with precision is essential to developing further understanding of diseases such as epilepsy and disorders that affect brain function and motor control. Neural probes with high spatial resolution are needed for both recording and stimulating specific functional areas of the brain. Now, researchers from the Graphene Flagship have developed a new device for recording brain activity in high resolution while maintaining excellent signal to noise ratio (SNR). Based on graphene field-effect transistors, the flexible devices open up new possibilities for the development of functional implants and interfaces.

The research, published in 2D Materials, was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich; Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS; Spain), Spanish National Research Council (CSIC; Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN; Spain) and the Catalan Institute of Nanoscience and Nanotechnology (ICN2; Spain).

Caption: Graphene transistors integrated in a flexible neural probe enable electrical signals from neurons to be measured with high accuracy and density. Inset: The tip of the probe contains 16 flexible graphene transistors. Credit: ICN2

A March 27, 2017 Graphene Flagship press release on EurekAlert, which originated the news item, describes the work in more detail,

The devices were used to record the large signals generated by pre-epileptic activity in rats, as well as the smaller levels of brain activity during sleep and in response to visual light stimulation. These types of activities lead to much smaller electrical signals, and are at the level of typical brain activity. Neural activity is detected through the highly localised electric fields generated when neurons fire, so densely packed, ultra-small measuring devices are important for accurate brain readings.

The neural probes are placed directly on the surface of the brain, so safety is of paramount importance for the development of graphene-based neural implant devices. Importantly, the researchers determined that the graphene-based probes are non-toxic, and did not induce any significant inflammation.

Devices implanted in the brain as neural prosthesis for therapeutic brain stimulation technologies and interfaces for sensory and motor devices, such as artificial limbs, are an important goal for improving quality of life for patients. This work represents a first step towards the use of graphene in research as well as clinical neural devices, showing that graphene-based technologies can deliver the high resolution and high SNR needed for these applications.

First author Benno Blaschke (TU Munich) said, “Graphene is one of the few materials that allows recording in a transistor configuration and simultaneously complies with all other requirements for neural probes such as flexibility, biocompatibility and chemical stability. Although graphene is ideally suited for flexible electronics, it was a great challenge to transfer our fabrication process from rigid substrates to flexible ones. The next step is to optimize the wafer-scale fabrication process and improve device flexibility and stability.”

Jose Antonio Garrido (ICN2) led the research. He said, “Mechanical compliance is an important requirement for safe neural probes and interfaces. Currently, the focus is on ultra-soft materials that can adapt conformally to the brain surface. Graphene neural interfaces have already shown great potential, but we have to improve on the yield and homogeneity of the device production in order to advance towards a real technology. Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated with medical devices such as safety, biocompatibility, etc.”

Caption: The graphene-based neural probes were used to detect rats’ responses to visual stimulation, as well as neural signals during sleep. Both types of signals are small, and typically difficult to measure. Credit: ICN2
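For readers new to the jargon, an aside from me (not the Flagship): signal-to-noise ratio is simply the strength of the wanted signal relative to the background noise, usually quoted in decibels,

$$ \mathrm{SNR} = \frac{P_{\text{signal}}}{P_{\text{noise}}}, \qquad \mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right), $$

where the $P$ are the signal and noise powers. Sleep activity and visual responses produce tiny signal powers, which is why the researchers make such a point of achieving a high SNR with small devices.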

Here’s a link to and a citation for the paper,

Mapping brain activity with flexible graphene micro-transistors by Benno M Blaschke, Núria Tort-Colet, Anton Guimerà-Brunet, Julia Weinert, Lionel Rousseau, Axel Heimann, Simon Drieschner, Oliver Kempski, Rosa Villa, Maria V Sanchez-Vives. 2D Materials, Volume 4, Number 2. DOI: 10.1088/2053-1583/aa5eff. Published 24 February 2017.

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

Korea

While this research from Korea was published more recently, the probe itself has not yet been subjected to in vivo (animal) testing. From an April 19, 2017 news item on ScienceDaily,

Electrodes placed in the brain record neural activity, and can help treat neural diseases like Parkinson’s and epilepsy. Interest is also growing in developing better brain-machine interfaces, in which electrodes can help control prosthetic limbs. Progress in these fields is hindered by limitations in electrodes, which are relatively stiff and can damage soft brain tissue.

Designing smaller, gentler electrodes that still pick up brain signals is a challenge because brain signals are so weak. Typically, the smaller the electrode, the harder it is to detect a signal. However, a team from the Daegu Gyeongbuk Institute of Science & Technology (DGIST) in Korea developed new probes that are small, flexible and read brain signals clearly.

This is a pretty interesting way to illustrate the research,

Caption: Graphene and gold make a better brain probe. Credit: DGIST

An April 19, 2017 DGIST press release (also on EurekAlert), which originated the news item, expands on the theme (Note: A link has been removed),

The probe consists of an electrode, which records the brain signal. The signal travels down an interconnection line to a connector, which transfers the signal to machines measuring and analysing the signals.

The electrode starts with a thin gold base. Attached to the base are tiny zinc oxide nanowires, which are coated in a thin layer of gold and then a layer of conducting polymer called PEDOT. Together, these materials increase the electrode’s effective surface area, conductivity, and strength, while still maintaining flexibility and compatibility with soft tissue.

Packing several long, thin nanowires together onto one probe enables the scientists to make a smaller electrode that retains the same effective surface area of a larger, flat electrode. This means the electrode can shrink, but not reduce signal detection. The interconnection line is made of a mix of graphene and gold. Graphene is flexible and gold is an excellent conductor. The researchers tested the probe and found it read rat brain signals very clearly, much better than a standard flat, gold electrode.
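An aside from me, not DGIST: the surface-area argument is easy to make quantitative. In my notation (not the paper’s), a flat electrode of width $w$ and length $L$ exposes an area $A_{\text{flat}} = wL$, while $N$ upright nanowires of radius $r$ and length $l$ expose a combined sidewall area of roughly

$$ A_{\text{wires}} \approx N \cdot 2\pi r l. $$

Because the wires stand on end, the electrode’s footprint is set by the packing pitch $p$ (roughly $N p^{2}$) rather than by the wire length, so longer wires add active surface area without enlarging the electrode.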

“Our graphene and nanowires-based flexible electrode array can be useful for monitoring and recording the functions of the nervous system, or to deliver electrical signals to the brain,” the researchers conclude in their paper recently published in the journal ACS Applied Materials and Interfaces.

The probe requires further clinical tests before widespread commercialization. The researchers are also interested in developing a wireless version to make it more convenient for a variety of applications.

Here’s a link to and a citation for the paper,

Enhancement of Interface Characteristics of Neural Probe Based on Graphene, ZnO Nanowires, and Conducting Polymer PEDOT by Mingyu Ryu, Jae Hoon Yang, Yumi Ahn, Minkyung Sim, Kyung Hwa Lee, Kyungsoo Kim, Taeju Lee, Seung-Jun Yoo, So Yeun Kim, Cheil Moon, Minkyu Je, Ji-Woong Choi, Youngu Lee, and Jae Eun Jang. ACS Appl. Mater. Interfaces, 2017, 9 (12), pp 10577–10586. DOI: 10.1021/acsami.7b02975. Publication Date (Web): March 7, 2017.

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

OECD (Organization for Economic Cooperation and Development) Dossiers on Nanomaterials Are of “Little to No Value” for Assessing Risk?

The announcement that a significant portion of the OECD’s (Organization for Economic Cooperation and Development) dossiers on 11 nanomaterials has next to no value for assessing risk seems a harsh judgment from the Center for International Environmental Law (CIEL). From a March 1, 2017 posting by Lynn L. Bergeson on Nanotechnology Now,

On February 23, 2017, the Center for International Environmental Law (CIEL) issued a press release announcing a new report, commissioned by CIEL, the European Environmental Citizens’ Organization for Standardization (ECOS), and the Oeko-Institute, that “shows that most of the information made available by the Sponsorship Testing Programme of the Organisation for Economic Co-operation and Development (OECD) is of little to no value for the regulatory risk assessment of nanomaterials.”

Here’s more from the Feb. 23, 2017 CIEL press release, which originated the posting,

The study published today [Feb. 23, 2017] was delivered by the Institute of Occupational Medicine (IOM) based in Singapore. IOM screened the 11,500 pages of raw data of the OECD dossiers on 11 nanomaterials, and analysed all characterisation and toxicity data on three specific nanomaterials – fullerenes, single-walled carbon nanotubes, and zinc oxide.

“EU policy makers and industry are using the existence of the data to dispel concerns about the potential health and environmental risks of manufactured nanomaterials,” said David Azoulay, Senior Attorney for CIEL. “When you analyse the data, in most cases, it is impossible to assess what material was actually tested. The fact that data exists about a nanomaterial does not mean that the information is reliable to assess the hazards or risks of the material.”

The dossiers were published in 2015 by the OECD’s Working Party on Manufactured Nanomaterials (WPMN), which has yet to draw conclusions on the data quality. Despite this missing analysis, some stakeholders participating in EU policy-making – notably the European Chemicals Agency (ECHA) and the European Commission’s Joint Research Centre – have presented the dossiers as containing information on nano-specific human health and environmental impacts. Industry federations and individual companies have taken this a step further emphasizing that there is enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

“Our study shows these claims that there is sufficient data available on nanomaterials are not only false, but dangerously so,” said Doreen Fedrigo, Senior Policy Officer of ECOS. “The lack of nano-specific information in the dossiers means that the results of the tests cannot be used as evidence of no ‘nano-effect’ of the tested material. This information is crucial for regulators and producers who need to know the hazard profile of these materials. Analysing the dossiers has shown that legislation detailing nano-specific information requirements is crucial for the regulatory risk assessment of nanomaterials.”

The report provides important recommendations on future steps in the governance of nanomaterials. “Based on our analysis, serious gaps in current dossiers must be filled in with characterisation information, preparation protocols, and exposure data,” said Andreas Hermann of the Oeko-Institute. “Using these dossiers as they are and ignoring these recommendations would mean making decisions on the safety of nanomaterials based on faulty and incomplete data. Our health and environment requires more from producers and regulators.”

CIEL has an Analysis of OECD WPMN Dossiers Regarding the Availability of Data to Evaluate and Regulate Risk (Dec 2016) webpage which provides more information about the dossiers and the research into them, and includes links to the report, the executive summary, and the dataset,

The Sponsorship Testing Programme of the Working Party on Manufactured Nanomaterials (WPMN) of the Organisation for Economic Co-operation and Development (OECD) started in 2007 with the aim to test a selection of 13 representative nanomaterials for many endpoints. The main objectives of the programme were to better understand what information on intrinsic properties of the nanomaterials might be relevant for exposure and hazards assessment and assess the validity of OECD chemicals Test Guidelines for nanomaterials. The testing programme concluded in 2015 with the publication of dossiers on 11 nanomaterials: 11,500 pages of raw data to be analysed and interpreted.

The WPMN has not drawn conclusions on the data quality, but some stakeholders participating in EU policy-making – notably the European Chemicals Agency and the European Commission’s Joint Research Centre – presented the dossiers as containing much scientific information that provided a better understanding of their nano-specific human health and environmental impacts. Industry federations and individual companies echoed the views, highlighting that there was enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

As for the OECD, it concluded, even before the publication of the dossiers, that “many of the existing guidelines are also suitable for the safety assessment of nanomaterials” and “the outcomes (of the sponsorship programme) will provide useful information on the ‘intrinsic properties’ of nanomaterials.”

The Center for International Environmental Law (CIEL), the European Citizens’ Organisation for Standardisation (ECOS) and the Öko-Institut commissioned scientific analysis of these dossiers to assess the relevance of the data for regulatory risk assessment.

The resulting report, Analysis of OECD WPMN dossiers regarding the availability of data to evaluate and regulate risk, provides insights illustrating how most of the information made available by the sponsorship programme is of little to no value in identifying hazards or in assessing risks due to nanomaterials.

The analysis shows that:

  • Most studies and documents in the dossiers contain insufficient characterisation data about the specific nanomaterial addressed (size, particle distribution, surface shape, etc.), making it impossible to assess what material was actually tested.
  • This makes it impossible to make any firm statements regarding the nano-specificity of the hazard data published, or the relationship between observed effects and specific nano-scale properties.
  • Less than 2% of the study records provide detail on the size of the nanomaterial tested. Most studies use mass rather than number or size distribution (so not following scientifically recommended reporting practice).
  • The absence of details on the method used to prepare the nanomaterial makes it virtually impossible to correlate an identified hazard with a specific nanomaterial characteristic. Since the studies do not indicate dispersion protocols used, it is impossible to assess whether the final dispersion contained the intended mass concentration (or even the actual presence of nanomaterials in the test system), how much agglomeration may have occurred, and how the preparation protocols may have influenced the size distribution.
  • There is not enough nano-specific information in the dossiers to inform about nano-characteristics of the raw material that influence their toxicology. This information is important for regulators and its absence makes information in the dossier irrelevant to develop read-across guidelines.
  • Only about half of the endpoint study records using OECD Test Guidelines (TGs) were delivered using unaltered OECD TGs, thereby respecting the Guidelines’ requirements. The reasons for modifications of the TGs used in the tests are not clear from the documentation. This includes whether the study record was modified to account for challenges related to specific nanomaterial properties or for other, non-nano-specific reasons.
  • The studies do not contain systematic testing of the influence of nano-specific characteristics on the study outcome, and they do not provide the data needed to assess the effect of nano-scale features on the Test Guidelines. Given the absence of fundamental information on nanomaterial characteristics, the dossiers do not provide evidence of the applicability of existing OECD Test Guidelines to nanomaterials.

The analysis therefore dispels several myths created by some stakeholders following publication of the dossiers and provides important perspective for the governance of nanomaterials. In particular, the analysis makes recommendations to:

  • Systematically assess the validity of existing Test Guidelines for relevance to nanomaterials
  • Develop Test Guidelines for dispersion and other test preparations
  • Define the minimum characteristics of nanomaterials that need to be reported
  • Support the build-up of an exposure database
  • Fill the gaps in current dossiers with characterisation information, preparation protocols and exposure data


This is not my area of expertise and, while I find the language a bit inflammatory, it’s my understanding that there are great gaps in our understanding of nanomaterials, and that testing for risk assessment has been criticized for many of the reasons pointed out by CIEL, ECOS, and the Oeko-Institute.

You can find out more about CIEL here; ECOS here; and the Oeko-Institute (also known as Öko-Institute) here.

Graphene Malaysia 2016 gathering and Malaysia’s National Graphene Action Plan 2020

Malaysia is getting ready to host a graphene conference according to an Oct. 10, 2016 news item on Nanotechnology Now,

The Graphene Malaysia 2016 [Nov. 8 – 9, 2016] (www.graphenemalaysiaconf.com) is jointly organized by NanoMalaysia Berhad and the Phantoms Foundation. The conference will be centered on graphene industry interaction and collaborative innovation. The event will be launched under the National Graphene Action Plan 2020 (NGAP 2020), which is expected to generate about 9,000 jobs and RM20 billion (US$4.86 billion) in GNI impact by the year 2020.

First speakers announced:
Murni Ali (Nanomalaysia, Malaysia) | Francesco Bonaccorso (Istituto Italiano di Tecnologia, Italy) | Antonio Castro Neto (NUS, Singapore) | Antonio Correia (Phantoms Foundation, Spain) | Pedro Gomez-Romero (ICN2 (CSIC-BIST), Spain) | Shu-Jen Han (Nanoscale Science & Technology IBM T.J. Watson Research Center, USA) | Kuan-Tsae Huang (AzTrong, USA/Taiwan) | Krzysztof Koziol (FGV Cambridge Nanosystems, UK) | Taavi Madiberk (Skeleton Technologies, Estonia) | Richard Mckie (BAE Systems, UK) | Pontus Nordin (Saab AB, Saab Aeronautics, Sweden) | Elena Polyakova (Graphene Laboratories Inc., USA) | Ahmad Khairuddin Abdul Rahim (Malaysian Investment Development Authority (MIDA), Malaysia) | Adisorn Tuantranont (Thailand Organic and Printed Electronics Innovation Center, Thailand) | Archana Venugopal (Texas Instruments, USA) | Won Jong Yoo (Samsung-SKKU Graphene-2D Center (SSGC), South Korea) | Hongwei Zhu (Tsinghua University, China)

You can check for more information and deadlines in the Nanotechnology Now Oct. 10, 2016 news item.

The Graphene Malaysia 2016 conference website can be found here and Malaysia’s National Graphene Action Plan 2020, which is well written, can be found here (PDF). This portion from the executive summary offers some insight into Malaysia’s plans to launch itself into the world of high-income nations,

Malaysia’s aspiration to become a high-income nation by 2020 with improved jobs and better outputs is driving the country’s shift away from “business as usual” and towards more innovative, high-value-add products. Within this context, and in accordance with National policies and guidelines, Graphene, an emerging, highly versatile carbon-based nanomaterial, presents a unique opportunity for Malaysia to develop a high value economic ecosystem within its industries. Isolated only in 2004, Graphene has superior physical properties such as electrical/thermal conductivity, high strength and high optical transparency that, combined with its manufacturability, have raised tremendous possibilities for its application across several functions and industries. Currently, Graphene is still early in its development cycle, affording Malaysian companies time to develop their own applications instead of relying on international intellectual property and licenses.

Considering the potential, several leading countries are investing heavily in associated R&D. Approaches to Graphene research range from an expansive R&D focus (e.g., U.S. and the EU) to more focused approaches aimed at enhancing specific downstream applications with Graphene (e.g., South Korea). Faced with the need to push forward a multitude of development priorities, Malaysia must be targeted in its efforts to capture Graphene’s potential, both in terms of “how to compete” and “where to compete”. This National Graphene Action Plan 2020 lays out a set of priority applications that will be beneficial to the country as a whole and what the government will do to support these efforts.

Globally, much of the Graphene-related commercial innovation to date has been upstream, with producers developing techniques to manufacture Graphene at scale. There has also been some development in downstream sectors, as companies like Samsung, Bayer MaterialScience, BASF and Siemens explore product enhancement with Graphene in lithium-ion battery anodes and flexible displays, and specialty plastic and rubber composites. However the speed of development has been uneven, offering Malaysian industries willing to invest in innovation an opportunity to capture the value at stake. Since any innovation action plan has to be tailored to the needs and ambitions of local industry, Malaysia will focus its Graphene action plan initially on larger domestic industries (e.g., rubber) and areas already being targeted by the government for innovation such as energy storage for electric vehicles and conductive inks.

In addition to benefiting from the physical properties of Graphene, Malaysian downstream application providers may also capture the benefits of a modest input cost advantage for the domestic production of Graphene. One commonly used Graphene manufacturing technique, the chemical vapour deposition (CVD) production method, requires methane as an input, which can be sourced economically from local biomass. While Graphene is available commercially from various producers around the world, downstream players may be able to enjoy some cost advantage from local Graphene supply. In addition, co-locating with a local producer for joint product development has the added benefit of speeding up the R&D lifecycle.
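For the curious, an aside of mine (not the action plan’s): the CVD route mentioned above is usually summarized as methane decomposing over a hot catalyst, typically copper foil at around 1000 °C, leaving a single atomic layer of carbon behind,

$$ \mathrm{CH_{4} \longrightarrow C_{(graphene)} + 2\,H_{2}}, $$

which is why a cheap local source of methane, such as the biomass the plan mentions, translates directly into an input cost advantage.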

That business about finding downstream applications could also apply to the Canadian situation, where we typically offer our resources (upstream) but don’t have an active downstream business focus. For example, we have graphite mines in Ontario and Québec which supply graphite flakes for graphene production, which is all upstream. Less well developed are any plans for Canadian downstream applications.

Finally, it was interesting to note that the Phantoms Foundation is organizing this Malaysian conference since the same organization is organizing the ‘2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition’ (you can find out more about the Oct. 18 – 20, 2016 event in my Sept. 23, 2016 posting). I think the Malaysians have a better, far less unwieldy title for their conference.

Phenomen: a future and emerging information technology project

A Sept. 19, 2016 news item on Nanowerk describes a new research project incorporating photonics, phononics, and radio frequency signal processing,

PHENOMEN is a groundbreaking project designed to harness the potential of combined phononics, photonics and radio-frequency (RF) electronic signals to lay the foundations of a new information technology. This new project, funded through the highly competitive H2020 [the European Union’s Horizon 2020 science funding programme] FET [Future and Emerging Technologies]-Open call, joins the efforts of three leading research institutes, three internationally recognised universities and a high-tech SME. The Consortium members kicked off the project with a meeting on Friday, September 16, 2016, at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), coordinated by ICREA Research Prof Dr Clivia M. Sotomayor-Torres of the ICN2’s Phononic and Photonic Nanostructures (P2N) Group.

A Sept. 16, 2016 ICN2 press release, which originated the news item, provides more detail,

Most information is currently transported by electrical charge (electrons) and by light (photons). Phonons are the quanta of lattice vibrations with frequencies covering a wide range up to tens of THz and provide coupling to the surrounding environment. In PHENOMEN the core of the research will be focused on phonon-based signal processing to enable on-chip synchronisation and transfer information carried between optical channels by phonons.
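A quick conversion of mine (not ICN2’s) to put “tens of THz” in perspective: a phonon of frequency $f$ carries energy $E = hf$, so at 10 THz

$$ E = (4.14 \times 10^{-15}\ \text{eV·s}) \times (10^{13}\ \text{Hz}) \approx 41\ \text{meV}, $$

comparable to the thermal energy at room temperature ($k_{B}T \approx 26$ meV), which hints at why phonons couple so readily to the surrounding environment.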

This ambitious prospect could serve as a future scalable platform for, e.g., hybrid information processing with phonons. To achieve it, PHENOMEN proposes to build the first practical optically-driven phonon sources and detectors including the engineering of phonon lasers to deliver coherent phonons to the rest of the chip pumped by a continuous wave optical source. It brings together interdisciplinary scientific and technology oriented partners in an early-stage research towards the development of a radically new technology.

The experimental implementation of phonons as information carriers in a chip is completely novel and of a clear foundational character. It deals with interaction and manipulation of fundamental particles and their intrinsic dual wave-particle character. Thus, it can only be possible with the participation of an interdisciplinary consortium which will create knowledge in a synergetic fashion and add value in the form of new theoretical tools, develop novel methods to manipulate coherent phonons with light and build all-optical phononic circuits enabled by optomechanics.

The H2020 FET-Open call “Novel ideas for radically new technologies” aims to support the early stages of joint science and technology research for radically new future technological possibilities. The call is entirely non-prescriptive with regards to the nature or purpose of the technologies that are envisaged and thus targets mainly the unexpected. PHENOMEN is one of the 13 funded Research & Innovation Actions and went through a selection process with a success rate (1.4%) ten times smaller than that for an ERC grant. The retained proposals are expected to foster international collaboration in a multitude of disciplines such as robotics, nanotechnology, neuroscience, information science, biology, artificial intelligence or chemistry.

The Consortium

The PHENOMEN Consortium is made up of:

  • 3 leading research institutes:
  • 3 universities with an internationally recognised track-record in their respective areas of expertise:
  • 1 industrial partner:

Radical copyright reform proposal in the European Union

It seems the impulse to maximize copyright control has overtaken European Union officials. A Sept. 14, 2016 news item on phys.org lays out a few details,

The EU will overhaul copyright law to shake up how online news and entertainment is paid for in Europe, under proposals announced by European Commission chief Jean-Claude Juncker Wednesday [Sept. 14, 2016].

Pop stars such as Coldplay and Lady Gaga will hail part of the plan as a new weapon to bring a fair fight to YouTube, the Google-owned video service that they say is sapping the music business.

But the reform plans have attracted the fury of filmmakers and start-up investors who see it as a threat to European innovation and a wrong-headed favour to powerful media groups.

A Sept. 14, 2016 European Commission press release provides the European Union’s version of why more stringent copyright is needed,

“I want journalists, publishers and authors to be paid fairly for their work, whether it is made in studios or living rooms, whether it is disseminated offline or online, whether it is published via a copying machine or commercially hyperlinked on the web.”–President Juncker, State of the Union 2016

On the occasion of President Juncker’s 2016 State of the Union address, the Commission today set out proposals on the modernisation of copyright to increase cultural diversity in Europe and content available online, while bringing clearer rules for all online players. The proposals will also bring tools for innovation to education, research and cultural heritage institutions.

Digital technologies are changing the way music, films, TV, radio, books and the press are produced, distributed and accessed. New online services such as music streaming, video-on-demand platforms and news aggregators have become very popular, while consumers increasingly expect to access cultural content on the move and across borders. The new digital landscape will create opportunities for European creators as long as the rules offer legal certainty and clarity to all players. As a key part of its Digital Single Market strategy, the Commission has adopted proposals today to allow:

  • Better choice and access to content online and across borders
  • Improved copyright rules on education, research, cultural heritage and inclusion of disabled people
  • A fairer and sustainable marketplace for creators, the creative industries and the press

Andrus Ansip, Vice-President for the Digital Single Market, said: “Europeans want cross-border access to our rich and diverse culture. Our proposal will ensure that more content will be available, transforming Europe’s copyright rules in light of a new digital reality. Europe’s creative content should not be locked-up, but it should also be highly protected, in particular to improve the remuneration possibilities for our creators. We said we would deliver all our initiatives to create a Digital Single Market by the end of the year and we keep our promises. Without a properly functioning Digital Single Market we will miss out on creativity, growth and jobs.”

Günther H. Oettinger, Commissioner for the Digital Economy and Society, said: “Our creative industries [emphasis mine] will benefit from these reforms which tackle the challenges of the digital age successfully while offering European consumers a wider choice of content to enjoy. We are proposing a copyright environment that is stimulating, fair and rewards investment.”

Today, almost half of EU internet users listen to music, watch TV series and films or play games online; however, broadcasters and other operators find it hard to clear rights for their online or digital services when they want to offer them in other EU countries. Similarly, the socio-economically important sectors of education, research and cultural heritage too often face restrictions or legal uncertainty which holds back their digital innovation when using copyright protected content, including across borders. Finally, creators, other right holders and press publishers are often unable to negotiate the conditions, and also the payment, for the online use of their works and performances.

Altogether, today’s copyright proposals have three main priorities:

1. Better choice and access to content online and across borders

With our proposal on the portability of online content presented in December 2015, we gave consumers the right to use their online subscriptions to films, music, ebooks when they are away from their home country, for example on holidays or business trips. Today, we propose a legal mechanism for broadcasters to obtain more easily the authorisations they need from right holders to transmit programmes online in other EU Member States. This is about programmes that broadcasters transmit online at the same time as their broadcast as well as their catch-up services that they wish to make available online in other Member States, such as MyTF1 in France, ZDF Mediathek in Germany, TV3 Play in Denmark, Sweden and the Baltic States and AtresPlayer in Spain. Empowering broadcasters to make the vast majority of their content, such as news, cultural, political, documentary or entertainment programmes, shown also in other Member States will give more choice to consumers.

Today’s rules also make it easier for operators who offer packages of channels (such as Proximus TV in Belgium, Movistar+ in Spain, Deutsche Telekom’s IPTV Entertain in Germany), to get the authorisations they need: instead of having to negotiate individually with every right holder in order to offer such packages of channels originating in other EU Member States, they will be able to get the licenses from collective management organisations representing right holders. This will also increase the choice of content for their customers.

To help development of Video-on-Demand (VoD) offerings in Europe, we ask Member States to set up negotiation bodies to help reach licensing deals, including those for cross-border services, between audiovisual rightholders and VoD platforms. A dialogue with the audiovisual industry on licensing issues and the use of innovative tools like licensing hubs will complement this mechanism.

To enhance access to Europe’s rich cultural heritage, the new Copyright Directive will help museums, archives and other institutions to digitise and make available across borders out-of-commerce works, such as books or films that are protected by copyright, but no longer available to the public.

In parallel the Commission will use its €1.46 billion Creative Europe MEDIA programme to further support the circulation of creative content across borders. This includes more funding for subtitling and dubbing; a new catalogue of European audiovisual works for VoD providers that they can directly use for programming; and online tools to improve the digital distribution of European audiovisual works and make them easier to find and view online.

These combined actions will encourage people to discover TV and radio programmes from other European countries, keep in touch with their home countries when living in another Member State and enhance the availability of European films, including across borders, hence highlighting Europe’s rich cultural diversity.

2. Improving copyright rules on research, education and inclusion of disable [sic] people

Students and teachers are eager to use digital materials and technologies for learning, but today almost 1 in 4 educators encounter copyright-related restrictions in their digital teaching activities every week. The Commission has proposed today a new exception to allow educational establishments to use materials to illustrate teaching through digital tools and in online courses across borders.

The proposed Directive will also make it easier for researchers across the EU to use text and data mining (TDM) technologies to analyse large sets of data. This will provide a much needed boost to innovative research considering that today nearly all scientific publications are digital and their overall volume is increasing by 8-9% every year worldwide.

The Commission also proposes a new mandatory EU exception which will allow cultural heritage institutions to preserve works digitally, crucial for the survival of cultural heritage and for citizens’ access in the long term.

Finally, the Commission is proposing legislation to implement the Marrakesh Treaty to facilitate access to published works for persons who are blind, have other visual impairments or are otherwise print disabled. These measures are important to ensure that copyright does not constitute a barrier to the full participation in society of all citizens and will allow for the exchange of accessible format copies within the EU and with third countries that are parties to the Treaty, avoiding duplication of work and waste of resources.

3. A fairer and sustainable marketplace for creators and press

The Copyright Directive aims to reinforce the position of right holders to negotiate and be remunerated for the online exploitation of their content on video-sharing platforms such as YouTube or Dailymotion. Such platforms will have an obligation to deploy effective means such as technology to automatically detect songs or audiovisual works which right holders have identified and agreed with the platforms either to authorise or remove.

Newspapers, magazines and other press publications have benefited from the shift from print to digital and online services like social media and news aggregators. It has led to broader audiences, but it has also impacted advertising revenue and made the licensing and enforcement of the rights in these publications increasingly difficult. The Commission proposes to introduce a new related right for publishers, similar to the right that already exists under EU law for film producers, record (phonogram) producers and other players in the creative industries like broadcasters.

The new right recognises the important role press publishers play in investing in and creating quality journalistic content, which is essential for citizens’ access to knowledge in our democratic societies. As they will be legally recognised as right holders for the very first time they will be in a better position when they negotiate the use of their content with online services using or enabling access to it, and better able to fight piracy. This approach will give all players a clear legal framework when licensing content for digital uses, and help the development of innovative business models for the benefit of consumers.

The draft Directive also obliges publishers and producers to be transparent and to inform authors and performers about the profits made with their works. It also puts in place a mechanism to help authors and performers obtain a fair share when negotiating remuneration with producers and publishers. This should lead to a higher level of trust among all players in the digital value chain.

Towards a Digital Single Market

As part of the Digital Single Market strategy presented in May 2015, today’s proposals complement the proposed regulation on the portability of legal content (December 2015), the revised Audiovisual Media Services Directive and the Communication on online platforms (May 2016). Later this autumn the Commission will propose improvements to the enforcement of all types of intellectual property rights, including copyright.

Today’s EU copyright rules, presented along with initiatives to boost internet connectivity in the EU (press release, press conference at 15.15 CET), are part of the EU strategy to create a Digital Single Market (DSM). The Commission set out 16 initiatives (press release) and is on the right track to deliver all of them by the end of this year.

While Juncker mixes industry (publishers) with content creators (journalists, authors), Günther H. Oettinger, Commissioner for the Digital Economy and Society, clearly states that ‘creative industries’ are to be the beneficiaries. Business interests have tended to benefit disproportionately under current copyright regimes. The disruption posed by digital content has caused these businesses some agony, and they have responded by lobbying vigorously to maximize copyright. For the most part, individual musicians, authors, visual artists and other content creators are highly unlikely to benefit from this latest reform.

I’m not a big fan of Google or its ‘stepchild’ YouTube, but it should be noted that at least one career, Justin Bieber’s, would not have existed without free and easy access to videos. He may not have made a penny from his YouTube videos, but that hasn’t hurt his financial picture. Without YouTube, he would have been unlikely to get the exposure and recognition that have, in turn, led to some serious financial opportunities.

I am somewhat less interested in the show business aspect than I am in the impact this could have on science as per section (2. Improving copyright rules on research, education and inclusion of disable [sic] people) of the European Commission press release. A Sept. 14, 2016 posting about a previous ruling on copyright in Europe by Mike Masnick for Techdirt provides some insight into the possible future impacts on science research,

Last week [Sept. 8, 2016 posting], we wrote about a terrible copyright ruling from the Court of Justice of the EU, which basically says that any for-profit entity that links to infringing material can be held liable for direct infringement, as the “for-profit” nature of the work is seen as evidence that they knew or should have known the work was infringing. We discussed the problems with this standard in our post, and there’s been a lot of commentary on what this will mean for Europe — with a variety of viewpoints being expressed. One really interesting set of concerns comes from Egon Willighagen, from Maastricht University, noting what a total and complete mess this is going to be for scientists, who rarely consider the copyright status of various data as databases they rely on are built up …

This is, of course, not the first time we’ve noted the problems of intellectual property in the science world. From various journals locking up research to the rise of patents scaring off researchers from sharing data, intellectual property keeps getting in the way of science, rather than supporting it. And that’s extremely unfortunate. I mean, after all, in the US specifically, the Constitution specifically says that copyrights and patents are supposed to be about “promoting the progress of science and the useful arts.”

Over and over again, though, we see that the law has been twisted and distorted and extended and expanded in such a way that is designed to protect a very narrow set of interests, at the expense of many others, including the public who would benefit from greater sharing and collaboration and open flow of data among scientific researchers. …

Masnick has also written up a Sept. 14, 2016 posting devoted to the EU copyright proposal itself,

This is not a surprise given the earlier leaks of what the EU Commission was cooking up for a copyright reform package, but the end result is here and it’s a complete disaster for everyone. And I do mean everyone. Some will argue that it’s a gift to Hollywood and legacy copyright interests — and there’s an argument that that’s the case. But the reality is that this proposal is so bad that it will end up doing massive harm to everyone. It will clearly harm independent creators and the innovative platforms that they rely on. And, because those platforms have become so important to even the legacy entertainment industry, it will harm them too. And, worst of all, it will harm the public greatly. It’s difficult to see how this proposal will benefit anyone, other than maybe some lawyers.

So the EU Commission has taken the exact wrong approach. It’s one that’s almost entirely about looking backwards and “protecting” old ways of doing business, rather than looking forward, and looking at what benefits the public, creators and innovators the most. If this proposal actually gets traction, it will be a complete disaster for the EU innovative community. Hopefully, Europeans speak out, vocally, about what a complete disaster this would be.

So, according to Masnick, not even business interests will benefit.

Trans-Atlantic Platform (T-AP) is a unique collaboration of humanities and social science funders from Europe and the Americas

Launched in 2013, the Trans-Atlantic Platform is co-chaired by Dr. Ted Hewitt, president of the Social Sciences and Humanities Research Council of Canada (SSHRC), and Dr. Renée van Kessel-Hagesteijn of the Netherlands Organisation for Scientific Research—Social Sciences (NWO—Social Sciences).

An EU (European Union) publication, International Innovation, features an interview about T-AP with Ted Hewitt in a June 30, 2016 posting,

The Trans-Atlantic Platform is a unique collaboration of humanities and social science funders from Europe and the Americas. International Innovation’s Rebecca Torr speaks with Ted Hewitt, President of the Social Sciences and Humanities Research Council and Co-Chair of T-AP to understand more about the Platform and its pilot funding programme, Digging into Data.

Many commentators have called for better integration between natural and social scientists, to ensure that the societal benefits of STEM research are fully realised. Does the integration of diverse scientific disciplines form part of T-AP’s remit, and if so, how are you working to achieve this?

T-AP was designed primarily to promote and facilitate research across SSH. However, given the Platform’s thematic priorities and the funding opportunities being contemplated, we anticipate that a good number of non-SSH [emphasis mine] researchers will be involved.

As an example, on March 1, T-AP launched its first pilot funding opportunity: the T-AP Digging into Data Challenge. One of the sponsors is the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada’s federal funding agency for research in the natural sciences and engineering. Their involvement ensures that the perspective of the natural sciences is included in the challenge. The Digging into Data Challenge is open to any project that addresses research questions in the SSH by using large-scale digital data analysis techniques, and is then able to show how these techniques can lead to new insights. And the challenge specifically aims to advance multidisciplinary collaborative projects.

When you tackle a research question or undertake research to address a social challenge, you need collaboration between various SSH disciplines or between SSH and STEM disciplines. So, while proposals must address SSH research questions, the individual teams often involve STEM researchers, such as computer scientists.

In previous rounds of the Digging into Data Challenge, this has led to invaluable research. One project looked at how the media shaped public opinion around the 1918 Spanish flu pandemic. Another used CT scans to examine hundreds of mummies, ultimately discovering that atherosclerosis, a form of heart disease, was prevalent 4,000 years ago. In both cases, these multidisciplinary historical research projects have helped inform our thinking of the present.
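The 1918 flu project gives a concrete sense of what “large-scale digital data analysis” means in an SSH setting: at its simplest, tracking how often a topic surfaces in a dated text corpus. The following is a hypothetical miniature of that idea; the records, field names and keywords are invented for illustration, not drawn from the actual project.

```python
# Hypothetical miniature of topic-tracking in a dated newspaper corpus:
# count articles per year that mention a topic. The records below are
# invented; the real Digging into Data projects worked at archive scale.
from collections import Counter

articles = [
    {"year": 1918, "text": "Influenza spreads through the city."},
    {"year": 1918, "text": "Officials downplay the epidemic."},
    {"year": 1919, "text": "Influenza cases decline after quarantine."},
    {"year": 1919, "text": "Markets reopen as the city recovers."},
]

def mentions(record, keywords=("influenza", "epidemic")):
    """True if the article text contains any of the topic keywords."""
    text = record["text"].lower()
    return any(k in text for k in keywords)

# Count topic-bearing articles per year: a minimal public-attention signal.
by_year = Counter(a["year"] for a in articles if mentions(a))
for year in sorted(by_year):
    print(year, by_year[year])
```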

Of course, Digging into Data isn’t the only research area in which T-AP will be involved. Since its inception, T-AP partners have identified three priority areas beyond digital scholarship: diversity, inequality and difference; resilient and innovative societies; and transformative research on the environment. Each of these areas touches on a variety of SSH fields, while the transformative research on the environment area has strong connections with STEM fields. In September 2015, T-AP organised a workshop around this third priority area; environmental science researchers were among the workshop participants.

I wish Hewitt hadn’t described researchers from disciplines other than the humanities and social sciences as “non-SSH.” The designation divides the world in two: us and non-us (take your pick: non-Catholic, non-Muslim, non-American, non-STEM, non-SSH, etc.).

Getting back to the interview, it is surprisingly Canuck-centric in places,

How does T-AP fit in with Social Sciences and Humanities Research Council of Canada (SSHRC)’s priorities?

One of the objectives in SSHRC’s new strategic plan is to develop partnerships that enable us to expand the reach of our funding. As T-AP provides SSHRC with links to 16 agencies across Europe and the Americas, it is an efficient mechanism for us to broaden the scope of our support and promotion of post-secondary-based research and training in SSH.

It also provides an opportunity to explore cutting-edge areas of research, such as big data (as we did with the first call we put out, Digging into Data). The research enterprise is becoming increasingly international, by which I mean that researchers are working on issues with international dimensions or collaborating in international teams. In this globalised environment, SSHRC must partner with international funders to support research excellence. By developing international funding opportunities, T-AP helps researchers create teams better positioned to tackle the most exciting and promising research topics.

Finally, it is a highly effective way of broadly promoting the value of SSH research throughout Canada and around the globe. There are significant costs and complexities involved in international research, and uncoordinated funding from multiple national funders can actually create barriers to collaboration. A platform like T-AP helps funders coordinate and streamline processes.

The interview gets a little more international scope when it turns to the data project,

What is the significance of your pilot funding programme in digital scholarship and what types of projects will it support?

The T-AP Digging into Data Challenge is significant for several reasons. First, its geographic reach: with 16 participants from 11 countries, this round of Digging has significantly broader participation than previous rounds. This is also the first time Digging into Data includes funders from South America.

The T-AP Digging into Data Challenge is open to any research project that addresses questions in SSH. What those projects will end up being is anybody’s guess; projects from past competitions have involved fields ranging from musicology to anthropology to political science.

The Challenge’s main focus is, of course, the use of big data in research.

You may want to read the interview in its entirety here.

I have checked out the Trans-Atlantic Platform website but cannot determine how someone or some institution might get involved in its projects or apply for funding. However, there is a T-AP Digging into Data website with evidence of the first international call for funding submissions. Sadly, the deadline for the 2016 call has passed, if the website is to be believed (deadline dates are sometimes updated late).