
Crackpot or visionary? Teaching seven-year-olds about intellectual property

It’s been a while since I’ve devoted a posting to intellectual property issues; my focus is usually on science/technology and how intellectual property issues relate to those fields. As a writer, I support a more relaxed approach to copyright and patent law and, in particular, I want to see the continuation of ‘fair use’ as it’s called in the US and ‘fair dealing’ as it’s called in Canada. I support the principle of making money from your work so you can continue to contribute creatively. But the application of intellectual property law seems to have been turned into a weapon against creativity of all sorts. (At the end of this post you’ll find links to three typical posts from the many I have written on this topic.)

I do take the point made in the following video from the UK’s Intellectual Property Office (but for seven-year-olds and up!!!) about trademarks/logos and trademark infringement,

Here’s the description from YouTube’s Logo Mania webpage,

Published on Jan 16, 2018

Brian Wheeler’s January 17, 2018 article for BBC (British Broadcasting Corporation) online news on UK Politics sheds a bit of light on this ‘campaign’ (Note: A link has been removed),

A campaign to teach children about copyright infringement on the internet is employing cartoons and puns on pop stars’ names to get the message across.

Even its makers admit it is a “dry” and “niche” subject for a cartoon aimed at seven-year-olds.

But the Intellectual Property Office adds learning to “respect” copyrights and trademarks is a “key life skill”.

And it is hoping the adventures of Nancy and the Meerkats can finally make intellectual property “fun”.

The series, which began life five years ago on Fun Kids Radio, was re-launched this week with the aim of getting its message into primary schools.

The Intellectual Property Office is leading the government’s efforts to crack down on internet piracy and protect the revenues of Britain’s creative industries.

The government agency is spending £20,000 of its own money on the latest Nancy campaign, which is part-funded by the UK music industry.

Catherine Davies, head of the IPO’s education outreach department, which already produces teaching materials for GCSE students, admitted IP was a “complex subject” for small children and something of a challenge to make accessible and entertaining.

Wheeler’s article is definitely worth reading in its entirety. In fact, I was so intrigued I chased down the government press release (from the www.gov.uk webpage),

Nancy and the Meerkats logo

Nancy and the Meerkats, with the help of Big Joe, present a new radio series to engage pupils with the concept of intellectual property (IP). Aimed at primary education, the resource guides pupils through the process of setting up a band and recording and releasing a song, which is promoted and performed live on tour.

Building on the success of the previous two series, Nancy and the Meerkats consists of a new radio series, short videos, comic book, lesson plans and competition. The supporting teaching resource also includes themed activities and engaging lesson plans. Together, these support and develop pupils’ understanding of copyright, trade marks and the importance of respecting IP.

Curriculum links are provided for England, Northern Ireland, Scotland and Wales.

The series will launch on Monday 15th January [2018] at 5pm on Fun Kids Radio.

Along with ‘Logo Mania’, you can find such gems as ‘Track Attack’ concerning song lyrics and, presumably, copyright issues, ‘The Hum Bone’ concerning patents, and ‘Pirates on the Internet’ about illegal downloading on the Fun Kids Radio website. Previous seasons have included ‘Are forks just for eating with?’, ‘Is a kaleidoscope useful?’, ‘Rubber Bands’, ‘Cornish Pasties’, and more. It seems Fun Kids Radio has moved from the types of questions and topics that might interest children to topics of interest to the music industry and the UK’s Intellectual Property Office. At a guess, those groups might be maximalists where copyright is concerned.

By the way, for those interested in teaching resources and more, go to http://crackingideas.com/third_party/Nancy+and+the+Meerkats.

Finally, I’m not sure whether to laugh or cry. I do know that I’m curious about how they decided to focus on seven to 11-year-olds. Are children in the UK heavily involved in content piracy? Is there a generation of grade school pop stars about to enter the music market? Where is the data and how did they gather it?

Should anyone be inclined to answer those questions, I look forward to reading your reply in the Comments section.

ETA January 19, 2018 (five minutes later) Oops! Here are the links promised earlier,

October 31, 2011: Patents as weapons and obstacles

June 28, 2012: Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP

March 28, 2013: Intellectual property, innovation, and hindrances

There are many, many more posts. Just click on the category for ‘intellectual property’.

In scientific race US sees China coming up from rear

Sometimes it seems as if scientific research is like a race with everyone competing for first place. As in most sports, there are multiple competitions for various sub-groups but only one important race. The US has held the lead position for decades, although always with some anxiety. These days the anxiety is focused on China. A June 15, 2017 news item on ScienceDaily suggests that US dominance is threatened in at least one area of research—the biomedical sector,

American scientific teams still publish significantly more biomedical research discoveries than teams from any other country, a new study shows, and the U.S. still leads the world in research and development expenditures.

But American dominance is slowly shrinking, the analysis finds, as China’s skyrocketing investment in science over the last two decades begins to pay off. Chinese biomedical research teams now rank fourth in the world for total number of new discoveries published in six top-tier journals, and the country spent three-quarters of what the U.S. spent on research and development during 2015.

Meanwhile, the analysis shows, scientists from the U.S. and other countries increasingly make discoveries and advancements as part of teams that involve researchers from around the world.

A June 15, 2017 Michigan Medicine University of Michigan news release (also on EurekAlert), which originated the news item, details the research team’s insights,

The last 15 years have ushered in an era of “team science” as research funding in the U.S., Great Britain and other European countries, as well as Canada and Australia, stagnated. The number of authors per paper has also grown over time. For example, in 2000 only two percent of the research papers the new study looked at included 21 or more authors — a number that increased to 12.5 percent in 2015.

The new findings, published in JCI Insight by a team of University of Michigan researchers, come at a critical time for the debate over the future of U.S. federal research funding. The study is based on a careful analysis of original research papers published in six top-tier and four mid-tier journals from 2000 to 2015, in addition to data on R&D investment from those same years.

The study builds on other work that has also warned of America’s slipping status in the world of science and medical research, and the resulting impact on the next generation of aspiring scientists.

“It’s time for U.S. policy-makers to reflect and decide whether the year-to-year uncertainty in National Institutes of Health budget and the proposed cuts are in our societal and national best interest,” says Bishr Omary, M.D., Ph.D., senior author of the new data-supported opinion piece and chief scientific officer of Michigan Medicine, U-M’s academic medical center. “If we continue on the path we’re on, it will be harder to maintain our lead and, even more importantly, we could be disenchanting the next generation of bright and passionate biomedical scientists who see a limited future in pursuing a scientist or physician-investigator career.”

The analysis charts South Korea’s entry into the top 10 countries for publications, as well as China’s leap from outside the top 10 in 2000 to fourth place in 2015. They also track the major increases in support for research in South Korea and Singapore since the start of the 21st Century.

Meticulous tracking

First author of the study, U-M informationist Marisa Conte, and Omary co-led a team that looked carefully at the currency of modern science: peer-reviewed basic science and clinical research papers describing new findings, published in journals with long histories of accepting among the world’s most significant discoveries.

They reviewed every issue of six top-tier international journals (JAMA, Lancet, the New England Journal of Medicine, Cell, Nature and Science), and four mid-ranking journals (British Medical Journal, JAMA Internal Medicine, Journal of Cell Science, FASEB Journal), chosen to represent the clinical and basic science aspects of research.

The analysis included only papers that reported new results from basic research experiments, translational studies, clinical trials, meta-analyses, and studies of disease outcomes. Author affiliations for corresponding authors and all other authors were recorded by country.

The rise in global cooperation is striking. In 2000, 25 percent of papers in the six top-tier journals were by teams that included researchers from at least two countries. In 2015, that figure was closer to 50 percent. The increasing need for multidisciplinary approaches to make major advances, coupled with the advances of Internet-based collaboration tools, likely have something to do with this, Omary says.

The authors, who also include Santiago Schnell, Ph.D. and Jing Liu, Ph.D., note that part of their group’s interest in doing the study sprang from their hypothesis that a flat NIH budget is likely to have negative consequences but they wanted to gather data to test their hypothesis.

They also observed what appears to be an increasing number of Chinese-born scientists who had trained in the U.S. going back to China after their training, where once most of them would have sought to stay in the U.S. In addition, Singapore has been able to recruit several top notch U.S. and other international scientists due to their marked increase in R&D investments.

The same trends appear to be happening in Great Britain, Australia, Canada, France, Germany and other countries the authors studied – where research investing has stayed consistent when measured as a percentage of the U.S. total over the last 15 years.

The authors note that their study is based on data up to 2015, and that in the current 2017 federal fiscal year, funding for NIH has increased thanks to bipartisan Congressional appropriations. The NIH contributes to most of the federal support for medical and basic biomedical research in the U.S. But discussion of cuts to research funding that hinders many federal agencies is in the air during the current debates for the 2018 budget. Meanwhile, the Chinese R&D spending is projected to surpass the U.S. total by 2022.

“Our analysis, albeit limited to a small number of representative journals, supports the importance of financial investment in research,” Omary says. “I would still strongly encourage any child interested in science to pursue their dream and passion, but I hope that our current and future investment in NIH and other federal research support agencies will rise above any branch of government to help our next generation reach their potential and dreams.”

Here’s a link to and a citation for the paper,

Globalization and changing trends of biomedical research output by Marisa L. Conte, Jing Liu, Santiago Schnell, and M. Bishr Omary. JCI Insight. 2017;2(12):e95206. doi:10.1172/jci.insight.95206 (published June 15, 2017)

Copyright © 2017, American Society for Clinical Investigation

This paper is open access.
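As an aside for anyone curious about the mechanics, the collaboration statistic quoted earlier (25 percent of top-tier papers in 2000 versus roughly 50 percent in 2015 involving authors from at least two countries) comes down to a simple tally over author-affiliation data. Here’s a minimal sketch in Python; the toy data and the function are mine for illustration, not the study’s actual code or numbers,

```python
# Hypothetical sketch: each paper is represented by the set of countries
# appearing in its author affiliations (toy data, not the study's).
papers_2000 = [
    {"US"}, {"US", "UK"}, {"CN"}, {"US"},
]
papers_2015 = [
    {"US", "CN"}, {"UK"}, {"US", "DE", "CA"}, {"CN", "US"},
]

def multinational_share(papers):
    """Fraction of papers whose authors span at least two countries."""
    multi = sum(1 for countries in papers if len(countries) >= 2)
    return multi / len(papers)

print(multinational_share(papers_2000))  # 0.25 (1 of 4 papers)
print(multinational_share(papers_2015))  # 0.75 (3 of 4 papers)
```

The real study did this kind of tally across every issue of ten journals over 16 years, which is why the authors call it meticulous tracking.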

The notion of a race, and of looking back to see who, if anyone, is gaining on you, reminded me of a local piece of sports lore, the Roger Bannister-John Landy ‘Miracle Mile’. In the run-up to the 1954 Commonwealth Games held in Vancouver, Canada, only two runners were known to have broken the 4-minute mile (previously thought to be impossible), and their first race against each other was considered historic. Here’s more from the miraclemile1954.com website,

On August 7, 1954 during the British Empire and Commonwealth Games in Vancouver, B.C., England’s Roger Bannister and Australian John Landy met for the first time in the one mile run at the newly constructed Empire Stadium.

Both men had broken the four minute barrier previously that year. Bannister was the first to break the mark with a time of 3:59.4 on May 6th in Oxford, England. Subsequently, on June 21st in Turku, Finland, John Landy became the new record holder with an official time of 3:58.

The world watched eagerly as both men approached the starting blocks. As 35,000 enthusiastic fans looked on, no one knew what would take place on that historic day.

Promoted as “The Mile of the Century”, it would later be known as the “Miracle Mile”.

With only 90 yards to go in one of the world’s most memorable races, John Landy glanced over his left shoulder to check his opponent’s position. At that instant Bannister streaked by him to victory in a Commonwealth record time of 3:58.8. Landy’s second place finish in 3:59.6 marked the first time the four minute mile had been broken by two men in the same race.

The website hosts an image of the moment, memorialized in bronze, when Landy looks to his left as Bannister passes him on his right,

Statue: Jack Harman. Photo: Paul Joseph from Vancouver, BC, Canada (roger bannister running the four minute mile). Uploaded by Skeezix1000. CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=9801121

Getting back to science, I wonder if some day we’ll stop thinking of it as a race where, inevitably, there’s one winner and everyone else loses and find a new metaphor.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch, and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)
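As an aside, the ‘learning’ Murphy’s article describes can be illustrated with a toy example of my own devising: a single artificial neuron that starts out knowing nothing and, by nudging its connection weights after each mistake, gradually learns the logical AND function. This is purely illustrative and a far cry from the deep, many-layered networks Hinton pioneered,

```python
# Illustrative sketch (mine, not from any of the research described here):
# one artificial "neuron" learns AND by adjusting its weights from examples,
# loosely analogous to connections between brain cells strengthening with use.
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), the neuron's "firing" level.
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the logical AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # start with no knowledge
rate = 1.0                    # how big each corrective nudge is

for _ in range(5000):         # repeated exposure to the examples
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        error = target - out
        # Nudge each weight in the direction that reduces the error.
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + bias))
               for (x1, x2), _ in examples]
print(predictions)  # [0, 0, 0, 1] -- the neuron has learned AND
```

Deep learning stacks many such units into successive layers, which is where the ‘deep’ comes from.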

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

Breathing nanoparticles into your brain

Thanks to Dexter Johnson and his Sept. 8, 2016 posting (on the Nanoclast blog on the IEEE [Institute for Electrical and Electronics Engineers]) for bringing this news about nanoparticles in the brain to my attention (Note: Links have been removed),

An international team of researchers, led by Barbara Maher, a professor at Lancaster University, in England, has found evidence suggesting that the nanoparticles first detected in the human brain over 20 years ago may have an external rather than an internal source.

These magnetite nanoparticles are airborne particulates, abundant in urban environments and formed by combustion or friction-derived heating. In other words, they have been part of the pollution in the air of our cities since the dawn of the Industrial Revolution.

However, according to Andrew Maynard, a professor at Arizona State University and a noted expert on the risks associated with nanomaterials, the research indicates that this finding extends beyond magnetite to any airborne nanoscale particles—including those deliberately manufactured.

“The findings further support the possibility of these particles entering the brain via the olfactory nerve if inhaled.  In this respect, they are certainly relevant to our understanding of the possible risks presented by engineered nanomaterials—especially those that are iron-based and have magnetic properties,” said Maynard in an e-mail interview with IEEE Spectrum. “However, ambient exposures to airborne nanoparticles will typically be much higher than those associated with engineered nanoparticles, simply because engineered nanoparticles will usually be manufactured and handled under conditions designed to avoid release and exposure.”

A Sept. 5, 2016 University of Lancaster press release made the research announcement,

Researchers at Lancaster University found abundant magnetite nanoparticles in brain tissue from 37 individuals, aged three to 92 years old, who lived in Mexico City and Manchester. This strongly magnetic mineral is toxic and has been implicated in the production of reactive oxygen species (free radicals) in the human brain, which are associated with neurodegenerative diseases including Alzheimer’s disease.

Professor Barbara Maher, from Lancaster Environment Centre, and colleagues (from Oxford, Glasgow, Manchester and Mexico City) used spectroscopic analysis to identify the particles as magnetite. Unlike angular magnetite particles that are believed to form naturally within the brain, most of the observed particles were spherical, with diameters up to 150 nm, some with fused surfaces, all characteristic of high-temperature formation – such as from vehicle (particularly diesel) engines or open fires.

The spherical particles are often accompanied by nanoparticles containing other metals, such as platinum, nickel, and cobalt.

Professor Maher said: “The particles we found are strikingly similar to the magnetite nanospheres that are abundant in the airborne pollution found in urban settings, especially next to busy roads, and which are formed by combustion or frictional heating from vehicle engines or brakes.”

Other sources of magnetite nanoparticles include open fires and poorly sealed stoves within homes. Particles smaller than 200 nm are small enough to enter the brain directly through the olfactory nerve after breathing air pollution through the nose.

“Our results indicate that magnetite nanoparticles in the atmosphere can enter the human brain, where they might pose a risk to human health, including conditions such as Alzheimer’s disease,” added Professor Maher.

Leading Alzheimer’s researcher Professor David Allsop, of Lancaster University’s Faculty of Health and Medicine, said: “This finding opens up a whole new avenue for research into a possible environmental risk factor for a range of different brain diseases.”

Damian Carrington’s Sept. 5, 2016 article for the Guardian provides a few more details,

“They [the troubling magnetite particles] are abundant,” she [Maher] said. “For every one of [the crystal shaped particles] we saw about 100 of the pollution particles. The thing about magnetite is it is everywhere.” An analysis of roadside air in Lancaster found 200m magnetite particles per cubic metre.

Other scientists told the Guardian the new work provided strong evidence that most of the magnetite in the brain samples comes from air pollution but that the link to Alzheimer’s disease remained speculative.
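To put the Guardian’s roadside figure in perspective, here’s a rough back-of-envelope calculation. The particle count is the Guardian’s; the daily breathing volume is my own assumption of a typical adult value (roughly 11 cubic metres a day), not a figure from the article, so treat the result as an order-of-magnitude sketch only,

```python
# Back-of-envelope estimate of daily magnetite nanoparticle intake
# from roadside air, using the Guardian's figure of 200 million
# particles per cubic metre of air in Lancaster.

PARTICLES_PER_M3 = 200e6   # roadside air (Guardian figure)
DAILY_BREATHING_M3 = 11    # assumed typical adult daily inhaled volume

daily_particles = PARTICLES_PER_M3 * DAILY_BREATHING_M3
print(f"Rough order-of-magnitude intake: {daily_particles:.1e} particles/day")
# → roughly 2.2e+09 particles per day
```

Of course, this says nothing about how many of those particles actually reach the brain via the olfactory nerve, which is precisely the open question in the research.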

For anyone who might be concerned about health risks, there’s this from Andrew Maynard’s comments in Dexter Johnson’s Sept. 8, 2016 posting,

“In most workplaces, exposure to intentionally made nanoparticles is likely to be small compared to ambient nanoparticles, and so it’s reasonable to assume—at least without further data—that this isn’t a priority concern for engineered nanomaterial production,” said Maynard.

While deliberate nanoscale manufacturing may not carry much risk, Maynard does believe that the research raises serious questions about other manufacturing processes where exposure to high concentrations of airborne nanoscale iron particles is common—such as welding, gouging, or working with molten ore and steel.

It seems everyone is agreed that the findings are concerning, but I think it might be good to remember that the percentage of people who develop Alzheimer’s Disease is much smaller than the percentage of people who have these crystals in their brains. In other words, the crystals might (the researchers don’t yet know) be a factor, and there would likely have to be one or more additional factors to create the conditions for developing Alzheimer’s.

Here’s a link to and a citation for the paper,

Magnetite pollution nanoparticles in the human brain by Barbara A. Maher, Imad A. M. Ahmed, Vassil Karloukovski, Donald A. MacLaren, Penelope G. Foulds, David Allsop, David M. A. Mann, Ricardo Torres-Jardón, and Lilian Calderon-Garciduenas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1605941113

This paper is behind a paywall but Dexter’s posting offers more detail for those who are still curious.

Drone fly larvae avoid bacterial contamination due to their nanopillars

This is some fascinating bug research. From an April 6, 2016 news item on phys.org,

The immature stage of the drone fly (Eristalis tenax) is known as a “rat-tailed maggot” because it resembles a hairless baby rodent with a “tail” that is actually used as a breathing tube. Rat-tailed maggots are known to live in stagnant, fetid water that is rich in bacteria, fungi, and algae. However, despite this dirty environment, they are able to avoid infection by these microorganisms.

An April 6, 2016 Entomological Society of America news release on EurekAlert, which originated the news item, describes the findings,

Recently, Matthew Hayes, a cell biologist at the Institute of Ophthalmology at University College London in England, discovered never-before-seen structures that appear to keep the maggot mostly free of bacteria, despite living where microorganisms flourish. …

With scanning and transmission electron microscopes, Hayes carefully examined the larva and saw that much of its body is covered with thin spines, or “nanopillars,” that narrow to sharp points. Once he confirmed the spiky structures were indeed part of the maggot, he noticed a direct relationship between the presence of the spines and the absence of bacteria on the surface of the larva. He speculated that the carpet of spines simply makes it impossible for the bacteria to find enough room to adhere to the larva’s body surface.

Here’s an image of the nanopillars,

Caption: This electron-microscope image exposes the spines, or “nanopillars,” that poke up from the body of the rat-tailed maggot. The length and density of the spines vary as shown in this cross-section image of the cuticle. Credit: Matthew Hayes

Back to the news release,

“They’re much like anti-pigeon spikes that keep the birds away because they can’t find a nice surface to land on,” he said.

Hayes also ventured that the spines could possibly have superoleophobic properties (the ability to repel oils), which would also impede the bacteria from colonizing and forming a biofilm that could ultimately harm or kill the maggot. The composition of the spines is as unique as the structures themselves, Hayes said. Each spine appears to consist of a stack of hollow-cored disks, the largest at the bottom and the smallest at the top.

“What I really think they look like is the baby’s toy with the stack of rings of decreasing size, but on a very small scale,” he said. “I’ve worked in many different fields and looked at lots of different things, and I’ve never seen anything that looks like it.”

This work with the rat-tailed maggot is leading him to examine other insects as well, including the ability of another aquatic invertebrate — the mosquito larva — to thwart bacteria. Such antibacterial properties have applications in many different fields, including ophthalmology and other medical fields where biofilms can foul surgical instruments or implanted devices.

For now, though, he’s thrilled about shedding light on the underappreciated rat-tailed maggot and revealing its spiny armor.

“I’ve loved insects since I was a child, when I would breed butterflies and moths,” he said. “I’m just so chuffed to have discovered something a bit new about insects!”

I am charmed by Hayes’s admission of being “chuffed.”

Here’s a link to and a citation for the paper,

Identification of Nanopillars on the Cuticle of the Aquatic Larvae of the Drone Fly (Diptera: Syrphidae) by Matthew J. Hayes, Timothy P. Levine, Roger H. Wilson. Journal of Insect Science 16(1): 36. DOI: http://dx.doi.org/10.1093/jisesa/iew019 First published online: 30 March 2016

This is an open access paper.

UK’s National Graphene Institute kerfuffle gets bigger

First mentioned here in a March 18, 2016 posting titled: Tempest in a teapot or a sign of things to come? UK’s National Graphene Institute kerfuffle, the ‘scandal’ seems to be getting bigger, from a March 29, 2016 posting on Dexter Johnson’s Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website (Note: A link has been removed),

Since that news story broke, damage control from the NGI [UK National Graphene Institute], the University of Manchester, and BGT Materials, the company identified in the Times article, has been coming fast and furious. Even this blog’s coverage of the story has gotten comments from representatives of BGT Materials and the University of Manchester.

There was perhaps no greater effort in this coordinated defense than getting Andre Geim, a University of Manchester researcher who was a co-discoverer of graphene, to weigh in. …

Despite Geim’s recent public defense, and a full-on PR campaign to turn around the perception that the UK government was investing millions into UK research only to have the fruits of that research sold off to foreign interests, there was news last week that the UK Parliament would be launching an inquiry into the “benefits and disbenefits of the way that graphene’s intellectual property and commercialisation has been managed, including through research and innovation collaborations.”

The timing for the inquiry is intriguing but there have been no public comments or hints that the NGI kerfuffle precipitated the Graphene Inquiry,

The Science and Technology Committee issues a call for written submissions for its inquiry on graphene.

Send written submissions

The inquiry explores the lessons from graphene for research and innovation in other areas, as well as the management and commercialisation of graphene’s intellectual property. Issues include:

  • The research obstacles that have had to be overcome for graphene, including identifying research priorities and securing research funding, and the lessons from this for other areas of research.
  • The factors that have contributed to the successful development of graphene and how these might be applied in other areas, including translating research into innovation, managing/sharing intellectual property, securing development funding, and bringing key stakeholders together.
  • The benefits and disbenefits of the way that graphene’s intellectual property and commercialisation has been managed, including through research and innovation collaborations, and the lessons from this for other areas.

The deadline for submissions is midday on Monday 18 April 2016.

The Committee expects to take oral evidence later in April 2016.

Getting back to the NGI, BGT Materials, and University of Manchester situation, there’s a forceful comment from Daniel Cochlin (identified as a graphene communications and marketing manager at the University of Manchester in an April 2, 2015 posting on Nanoclast) in Dexter’s latest posting about the NGI. From the comments section of a March 29, 2016 posting on the Nanoclast blog,

Maybe the best way to respond is to directly counter some of your assertions.

1. The NGI’s comments on this blog were to counter factual inaccuracies contained in your story. Your Editor-in-Chief and Editorial Director, Digital were also emailed to complain about the story, with not so much as an acknowledgement of the email.
2. There was categorically no ‘coaxing’ of Sir Andre to make comments. He was motivated to by the inaccuracies and insinuations of the Sunday Times article.
3. Members of the Science and Technology Select Committee visited the NGI about ten days before the Sunday Times article and this was followed by their desire to hold an evidence session to discuss graphene commercialisation.
4. The matter of how many researchers work in the NGI is not ‘hotly contested’. The NGI is 75% full with around 130 researchers regularly working there. We would expect this figure to grow by 10-15% within the next few days as other facilities are closed down.
5. Graphene Lighting PLC is the spin-out company set up to produce and market the lightbulb. To describe them as a ‘shadowy spin-out’ is unjustified and, I would suggest, libelous [emphasis mine].
6. Your question about why, if BGT Materials is a UK company, was it not mentioned [emphasis mine] in connection with the lightbulb is confusing – as stated earlier the company set up to manage the lightbulb was Graphene Lighting PLC.

Let’s hope it doesn’t take three days for this to be accepted by your moderators, as it did last time.

*ETA March 31, 2016 at 1530 hours PDT: Dexter has posted response comments in answer to Cochlin’s. You can read them for yourself here.* I have a couple of observations: (1) The use of the word ‘libelous’ seems a bit over the top. However, it should be noted that it’s much easier to sue someone for libel in England, where the University of Manchester is located, than it is in most jurisdictions. In fact, there’s an industry known as ‘libel tourism’ where litigious companies and individuals shop around for a jurisdiction such as England where they can easily file suit. (2) As for BGT Materials not being mentioned in the 2015 press release for the graphene lightbulb, I cannot emphasize enough how unusual that is. Generally speaking, everyone and every agency that had any involvement in developing and bringing to market a new product, especially one that was the ‘first consumer graphene-based product’, is mentioned. When you consider that BGT Materials is a newish company according to its About page,

BGT Materials Limited (BGT), established in 2013, is dedicated to the development of graphene technologies that utilize this “wonder material” to enhance our lives. BGT has pioneered the mass production of large-area, high-quality graphene rapidly achieving the first milestone required for the commercialization of graphene-enhanced applications.

the situation grows more peculiar. A new company wants and needs that kind of exposure to attract investment and/or keep current stakeholders happy. One last comment about BGT Materials and its public relations: Thanasis Georgiou, VP BGT Materials, Visiting scientist at the University of Manchester (more can be found on his website’s About page), waded into the comments section of Dexter’s March 15, 2016 posting, the first about the kerfuffle. Georgiou starts out in a relatively friendly fashion but his follow-up has a sharper tone,

I appreciate your position but a simple email to us and we would clarify most of the issues that you raised. Indeed your article carries the same inaccuracies that the initial Sunday Times article does, which is currently the subject of a legal claim by BGT Materials. [emphasis mine]

For example, BGT Materials is a UK registered company, not a Taiwanese one. A quick google search and you can confirm this. There was no “shadowy Canadian investor”, the company went through a round of financing, as most technology startups do, in order to reach the market quickly.

It’s hard to tell if Georgiou is trying to inform Dexter or threaten him in his comment to the March 15, 2016 posting, but taken together with Daniel Cochlin’s claim of libel in his comment to the March 29, 2016 posting, it suggests an attempt at intimidation.

These are understandable responses given the stakes involved, but moving to the most damaging munitions in your arsenal is usually not a good choice for your first or second response.

Speed of commercializing fashion technology in the 19th century

It took our 19th century ancestors four years to commercialize a new purple dye. While this is not a nanotechnology story as such, it’s a fascinating fashion story that also focuses on commercialization (a newly urgent aspect of the nanotechnology effort). From a Dec. 1, 2015 Elsevier press release on EurekAlert,

The dye industry of the 19th century was fast-moving and international, according to a state-of-the-art analysis of four purple dresses. The study, published in Spectrochimica Acta, Part A: Molecular and Biomolecular Spectroscopy, reveals that a brand new purple dye went from first synthesis to commercial use in just a few years.

Before the 1800s, purple dye came at a premium, so it was usually restricted to royalty — hence the connection between royals and purple. The 19th century saw the discovery of several synthetic purple dyes, making purple textiles more affordable and readily available. Understanding where these dyes came from and were used is therefore of historical interest.

In the new study, researchers from CSIRO Manufacturing and the National Gallery of Victoria in Australia show that the new purple dyes were part of a fast-moving industry, going from first synthesis to commercial use in as few as four years. This reflects how dynamic the fashion industry must have been at the time.

“Chemical analysis has given us a glimpse into the state of the dye industry in the 19th century, revealing the actual use of dyes around the world,” said Dr. Jeffrey Church, one of the authors of the study and principal research scientist at CSIRO Manufacturing.

The researchers took small samples of fibers from four dresses: three 19th century English dresses and one Australian wedding gown. They extracted the dyes from very small silk yarn samples and analyzed them using a combination of chemical techniques: thin layer chromatography and surface enhanced Raman spectroscopy, Fourier transform infrared spectroscopy and energy dispersive x-ray spectroscopy.

They found that the three English dresses were dyed using methyl violet; one of them was made only four years after the dye was first synthesized.

“The dress containing methyl violet was made shortly after the initial synthesis of the dye, indicating the rapidity with which the new synthetic dyes were embraced by the textile dye trade and the fashion world of the day,” commented Dr. Church.

However, the Australian wedding dress incorporated the use of a different dye — Perkin’s mauve — which was very historically significant, as it was the first synthetic purple dye and was only produced for 10 years. This was a surprise to the researchers, as the dress was made 20 years after the dye had been replaced on the market.

“The dress in question was made in Australia,” explained Dr. Church. “Does the presence of Perkin’s mauve relate to trade delays between Europe and Australia? Or was this precious fabric woven decades earlier and kept for the special purpose of a wedding? This is an excellent example of how state-of-the-art science and technology can shed light on the lives and times of previous generations. In doing so, as is common in science, one often raises more questions.”

The researchers have provided an image of the dresses,

Fig. 1. Dress 1 circa 1865, dress 2 circa 1898, dress 3 circa 1878 and dress 4 circa 1885 (clock-wise from left top). Details of these dresses are presented in the Experimental section. [downloaded from http://www.sciencedirect.com/science/article/pii/S1386142515302742]

Can you guess which one is the wedding dress? I was wrong. To find out more about the research and the dresses, here’s a link and a citation,

The purple coloration of four late 19th century silk dresses: A spectroscopic investigation by Andrea L. Woodhead, Bronwyn Cosgrove, Jeffrey S. Church. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, Volume 154, 5 February 2016, Pages 185–192. doi:10.1016/j.saa.2015.10.024

This paper appears to be open access. It’s quite interesting as they trace the history of purple dyes back to ancient times before fast forwarding to the 19th Century.

What is the effect of nanoscale plastic on marine life?

A Nov. 27, 2015 news item on Nanowerk announces a new UK (United Kingdom) research project designed to answer the question: what impact could nanoscale plastic particles have on the marine environment?

As England brings in pricing on plastic carrier bags, and Scotland reveals that similar changes a little over a year ago have reduced the use of such bags by 80%, new research led by Heriot-Watt University in conjunction with Plymouth University will look at the effect which even the most microscopic plastic particles can have on the marine environment.

While images of large ‘islands’ of plastic rubbish or of large marine animals killed or injured by the effects of such discards have brought home some of the obvious negative effects of plastics in the marine environment, it is known that there is more discarded plastic out there than we can account for, and much of it will have degraded into small or even microscopic particles.

It is the effect of the latter, known as nano-plastics, that will be studied under a £1.1m research project, largely funded by NERC [UK Natural Environment Research Council] and run by Heriot-Watt and Plymouth Universities.

A Nov. 25, 2015 Heriot-Watt University press release, which originated the news item, provides more details,

The project, RealRiskNano, will look at the risks these tiny plastic particles pose to the food web including filter-feeding organisms like mussels, clams and sediment dwelling organisms. It will focus on providing information to improve environmental risk assessment for nanoplastics, based on real-world exposure scenarios replicated in the laboratory.

Team leader Dr Theodore Henry, Associate Professor of Toxicology at Heriot-Watt’s School of Life Sciences, said that the study will build on previous research on nano-material toxicology, but will provide information which the earlier studies did not include.

“Pieces of plastic of all sizes have been found in even the most remote marine environments. It’s relatively easy to see some of the results: turtles killed by eating plastic bags, which they take for jellyfish, or large marine mammals drowned when caught in discarded ropes and netting.

“But when plastics fragment into microscopic particles, what then? It’s easy to imagine that they simply disappear, but we know that nano-particles pose their own distinct threats purely because of their size. They’re small enough to be transported throughout the environment with unknown effects on organisms, including toxicity and interference with processes of the digestive system.”

An important component of the project, to be investigated by Dr Tony Gutierrez at Heriot-Watt, will be the study of interactions between microorganisms and the nanoplastics to reveal how these interactions affect their fate and toxicology.

The aim, said Dr Henry, is to provide the information which is needed to effect real change. “We simply don’t know what effects these nano-plastic particles may pose to the marine environment, to filter-feeders and on to fish, and through the RealRiskNano project we aim to provide this urgently needed information to the people whose job it is to assess risk to the marine ecosystem and decide what steps need to be taken to mitigate it.”

You can find the RealRiskNano website here.

New ways to think about water

This post features two items about water, both of which suggest we should reconsider our ideas about it. The first item concerns hydrogen bonds and coordinated vibrations. From a July 16, 2014 news item on Azonano,

Using a newly developed, ultrafast femtosecond infrared light source, chemists at the University of Chicago have been able to directly visualize the coordinated vibrations between hydrogen-bonded molecules — the first time this sort of chemical interaction, which is found in nature everywhere at the molecular level, has been directly visualized. They describe their experimental techniques and observations in The Journal of Chemical Physics, from AIP [American Institute of Physics] Publishing.

“These two-dimensional infrared spectroscopy techniques provide a new avenue to directly visualize both hydrogen bond partners,” said Andrei Tokmakoff, the lab’s primary investigator. “They have the spectral content and bandwidth to really interrogate huge parts of the vibrational spectrum of molecules. It’s opened up the ability to look at how very different types of vibrations on different molecules interact with one another.”

A July 15, 2014 AIP news release by John Arnst (also on EurekAlert), which originated the news item, provides more detail,

Tokmakoff and his colleagues sought to use two-dimensional infrared spectroscopy to directly characterize structural parameters such as intermolecular distances and hydrogen-bonding configurations, as this information can be encoded in intermolecular cross-peaks that spectroscopy detects between solute-solvent vibrations.

“You pluck on the bonds of one molecule and watch how it influences the other,” Tokmakoff said. “In our experiment, you’re basically plucking on both because they’re so strongly bound.”

Hydrogen bonds are typically perceived as the attractive force between the slightly negative and slightly positive ends of neutrally-charged molecules, such as water. While water stands apart with its unique polar properties, hydrogen bonds can form between a wide range of molecules containing electronegative atoms and range from weakly polar to nearly covalent in strength. Hydrogen bonding plays a key role in the action of large, biologically-relevant molecules and is often an important element in the discovery of new pharmaceuticals.

For their initial visualizations, Tokmakoff’s group used N-methylacetamide, a molecule called a peptide that forms medium-strength hydrogen-bonded dimers in organic solution due to its polar nitrogen-hydrogen and carbon-oxygen tails. By using a targeted three-pulse sequence of mid-infrared light and apparatus described in their article, Tokmakoff’s group was able to render the vibrational patterns of the two peptide units.

“All of the internal vibrations of hydrogen bonded molecules that we look at become intertwined, inextricably; you can’t think of them as just a simple sum of two parts,” Tokmakoff said.

More research is being planned while Tokmakoff suggests that water must be rethought from an atomistic perspective (from the news release),

Future work in Tokmakoff’s group involves visualizing the dynamics and structure of water around biological molecules such as proteins and DNA.

“You can’t just think of the water as sort of an amorphous solvent, you really have to at least on some level think of it atomistically and treat it that way,” Tokmakoff said. “And if you believe that, it has huge consequences all over the place, particularly in biology, where so much computational biology ignores the fact that water has real structure and real quantum mechanical properties of its own.”

The researchers have provided an illustration of hydrogen’s vibrating bonds,

The hydrogen-bonding interaction causes the atoms on each individual N-methylacetamide molecule to vibrate in unison.
CREDIT: L. De Marco/UChicago

Here’s a link to and a citation for the paper,

Direct observation of intermolecular interactions mediated by hydrogen bonding by Luigi De Marco, Martin Thämer, Mike Reppert, and Andrei Tokmakoff. J. Chem. Phys. 141, 034502 (2014); http://dx.doi.org/10.1063/1.4885145

This paper is open access. (I was able to view the entire HTML version.)

A July 15, 2014 University of Southampton press release on EurekAlert offers another surprise about water,

University of Southampton researchers have found that rainwater can penetrate below the Earth’s fractured upper crust, which could have major implications for our understanding of earthquakes and the generation of valuable mineral deposits.

Here’s why water’s ability to penetrate below the Earth’s upper crust comes as a surprise (from the news release),

It had been thought that surface water could not penetrate the ductile crust – where temperatures of more than 300°C and high pressures cause rocks to flex and flow rather than fracture – but researchers, led by Southampton’s Dr Catriona Menzies, have now found fluids derived from rainwater at these levels.

The news release also covers the implications of this finding,

Fluids in the Earth’s crust can weaken rocks and may help to initiate earthquakes along locked fault lines. They also concentrate valuable metals such as gold. The new findings suggest that rainwater may be responsible for controlling these important processes, even deep in the Earth.

Researchers from the University of Southampton, GNS Science (New Zealand), the University of Otago, and the Scottish Universities Environmental Research Centre studied geothermal fluids and mineral veins from the Southern Alps of New Zealand, where the collision of two tectonic plates forces deeper layers of the earth closer to the surface.

The team looked into the origin of the fluids, how hot they were and to what extent they had reacted with rocks deep within the mountain belt.

“When fluids flow through the crust they leave behind deposits of minerals that contain a small amount of water trapped within them,” says Postdoctoral Researcher Catriona, who is based at the National Oceanography Centre. “We have analysed these waters and minerals to identify where the fluids deep in the crust came from.

“Fluids may come from a variety of sources in the crust. In the Southern Alps fluids may flow upwards from deep in the crust, where they are released from hot rocks by metamorphic reactions, or rainwater may flow down from the surface, forced by the high mountains above. We wanted to test the limits of where rainwater may flow in the crust. Although it has been suggested before, our data shows for the first time that rainwater does penetrate into rocks that are too deep and hot to fracture.”

Surface-derived waters reaching such depths are heated to over 400°C and significantly react with crustal rocks. However, through testing the researchers were able to establish the water’s meteoric origin.

Funding for this research, which has been published in Earth and Planetary Science Letters, was provided by the Natural Environmental Research Council (NERC). Catriona and her team are now looking further at the implications of their findings in relation to earthquake cycles as part of the international Deep Fault Drilling Project [DFDP], which aims to drill a hole through the Alpine Fault at a depth of about 1km later this year.

Here’s a link to and a citation for the paper,

Incursion of meteoric waters into the ductile regime in an active orogen by Catriona D. Menzies, Damon A.H. Teagle, Dave Craw, Simon C. Cox, Adrian J. Boyce, Craig D. Barrie, and Stephen Roberts. Earth and Planetary Science Letters Volume 399, 1 August 2014, Pages 1–13 DOI: 10.1016/j.epsl.2014.04.046

Open Access funded by Natural Environment Research Council

This is the first time I’ve seen the funding agency which made the paper’s open access status possible cited.