
In scientific race US sees China coming up from the rear

Sometimes it seems as if scientific research is like a race with everyone competing for first place. As in most sports, there are multiple competitions for various sub-groups but only one important race. The US has held the lead position for decades, although always with some anxiety. These days the anxiety is focused on China. A June 15, 2017 news item on ScienceDaily suggests that US dominance is threatened in at least one area of research—the biomedical sector,

American scientific teams still publish significantly more biomedical research discoveries than teams from any other country, a new study shows, and the U.S. still leads the world in research and development expenditures.

But American dominance is slowly shrinking, the analysis finds, as China’s skyrocketing investment in science over the last two decades begins to pay off. Chinese biomedical research teams now rank fourth in the world for total number of new discoveries published in six top-tier journals, and the country spent three-quarters of what the U.S. spent on research and development during 2015.

Meanwhile, the analysis shows, scientists from the U.S. and other countries increasingly make discoveries and advancements as part of teams that involve researchers from around the world.

A June 15, 2017 Michigan Medicine University of Michigan news release (also on EurekAlert), which originated the news item, details the research team’s insights,

The last 15 years have ushered in an era of “team science” as research funding in the U.S., Great Britain and other European countries, as well as Canada and Australia, stagnated. The number of authors has also grown over time. For example, in 2000 only two percent of the research papers the new study looked at included 21 or more authors — a proportion that increased to 12.5 percent in 2015.

The new findings, published in JCI Insight by a team of University of Michigan researchers, come at a critical time for the debate over the future of U.S. federal research funding. The study is based on a careful analysis of original research papers published in six top-tier and four mid-tier journals from 2000 to 2015, in addition to data on R&D investment from those same years.

The study builds on other work that has also warned of America’s slipping status in the world of science and medical research, and the resulting impact on the next generation of aspiring scientists.

“It’s time for U.S. policy-makers to reflect and decide whether the year-to-year uncertainty in the National Institutes of Health budget and the proposed cuts are in our societal and national best interest,” says Bishr Omary, M.D., Ph.D., senior author of the new data-supported opinion piece and chief scientific officer of Michigan Medicine, U-M’s academic medical center. “If we continue on the path we’re on, it will be harder to maintain our lead and, even more importantly, we could be disenchanting the next generation of bright and passionate biomedical scientists who see a limited future in pursuing a scientist or physician-investigator career.”

The analysis charts South Korea’s entry into the top 10 countries for publications, as well as China’s leap from outside the top 10 in 2000 to fourth place in 2015. The authors also track the major increases in support for research in South Korea and Singapore since the start of the 21st century.

Meticulous tracking

First author of the study, U-M informationist Marisa Conte, and Omary co-led a team that looked carefully at the currency of modern science: peer-reviewed basic science and clinical research papers describing new findings, published in journals with long histories of publishing some of the world’s most significant discoveries.

They reviewed every issue of six top-tier international journals (JAMA, Lancet, the New England Journal of Medicine, Cell, Nature and Science), and four mid-ranking journals (British Medical Journal, JAMA Internal Medicine, Journal of Cell Science, FASEB Journal), chosen to represent the clinical and basic science aspects of research.

The analysis included only papers that reported new results from basic research experiments, translational studies, clinical trials, meta-analyses, and studies of disease outcomes. Author affiliations for corresponding authors and all other authors were recorded by country.

The rise in global cooperation is striking. In 2000, 25 percent of papers in the six top-tier journals were by teams that included researchers from at least two countries. In 2015, that figure was closer to 50 percent. The increasing need for multidisciplinary approaches to make major advances, coupled with advances in Internet-based collaboration tools, likely have something to do with this, Omary says.

The authors, who also include Santiago Schnell, Ph.D., and Jing Liu, Ph.D., note that part of their group’s interest in doing the study sprang from their hypothesis that a flat NIH budget is likely to have negative consequences, but they wanted to gather data to test their hypothesis.

They also observed what appears to be an increasing number of Chinese-born scientists who trained in the U.S. going back to China afterwards, where once most of them would have sought to stay in the U.S. In addition, Singapore has been able to recruit several top-notch U.S. and other international scientists due to its marked increase in R&D investment.

The same trends appear to be happening in Great Britain, Australia, Canada, France, Germany and other countries the authors studied – where research investment has stayed consistent when measured as a percentage of the U.S. total over the last 15 years.

The authors note that their study is based on data up to 2015, and that in the current 2017 federal fiscal year, funding for NIH has increased thanks to bipartisan Congressional appropriations. The NIH contributes most of the federal support for medical and basic biomedical research in the U.S. But discussion of cuts to research funding that would hinder many federal agencies is in the air during the current debates over the 2018 budget. Meanwhile, Chinese R&D spending is projected to surpass the U.S. total by 2022.

“Our analysis, albeit limited to a small number of representative journals, supports the importance of financial investment in research,” Omary says. “I would still strongly encourage any child interested in science to pursue their dream and passion, but I hope that our current and future investment in NIH and other federal research support agencies will rise above any branch of government to help our next generation reach their potential and dreams.”

Here’s a link to and a citation for the paper,

Globalization and changing trends of biomedical research output by Marisa L. Conte, Jing Liu, Santiago Schnell, and M. Bishr Omary. JCI Insight. 2017;2(12):e95206. doi:10.1172/jci.insight.95206 Published June 15, 2017.

Copyright © 2017, American Society for Clinical Investigation

This paper is open access.

The notion of a race and looking back to see who, if anyone, is gaining on you reminded me of a local piece of sports lore, the Roger Bannister-John Landy ‘Miracle Mile’. In the run-up to the 1954 Commonwealth Games held in Vancouver, Canada, two runners were known to have broken the 4-minute mile (previously thought to be impossible), and their first meeting over that distance was considered historic. Here’s more from the miraclemile1954.com website,

On August 7, 1954 during the British Empire and Commonwealth Games in Vancouver, B.C., England’s Roger Bannister and Australian John Landy met for the first time in the one mile run at the newly constructed Empire Stadium.

Both men had broken the four minute barrier previously that year. Bannister was the first to break the mark with a time of 3:59.4 on May 6th in Oxford, England. Subsequently, on June 21st in Turku, Finland, John Landy became the new record holder with an official time of 3:58.

The world watched eagerly as both men approached the starting blocks. As 35,000 enthusiastic fans looked on, no one knew what would take place on that historic day.

Promoted as “The Mile of the Century”, it would later be known as the “Miracle Mile”.

With only 90 yards to go in one of the world’s most memorable races, John Landy glanced over his left shoulder to check his opponent’s position. At that instant Bannister streaked by him to victory in a Commonwealth record time of 3:58.8. Landy’s second place finish in 3:59.6 marked the first time the four minute mile had been broken by two men in the same race.

The website hosts an image of the moment memorialized in bronze when Landy looks to his left as Bannister passes him on his right,

Statue: Jack Harman. Photo: Paul Joseph from Vancouver, BC, Canada – roger bannister running the four minute mile. Uploaded by Skeezix1000. CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=9801121

Getting back to science, I wonder if some day we’ll stop thinking of it as a race where, inevitably, there’s one winner and everyone else loses and find a new metaphor.

Patent Politics: a June 23, 2017 book launch at the Wilson Center (Washington, DC)

I received a June 12, 2017 notice (via email) from the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) about a book examining patents and policies in the United States and in Europe, and about its upcoming launch,

Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe

Over the past thirty years, the world’s patent systems have experienced pressure from civil society like never before. From farmers to patient advocates, new voices are arguing that patents impact public health, economic inequality, morality—and democracy. These challenges, to domains that we usually consider technical and legal, may seem surprising. But in Patent Politics, Shobita Parthasarathy argues that patent systems have always been deeply political and social.

To demonstrate this, Parthasarathy takes readers through a particularly fierce and prolonged set of controversies over patents on life forms linked to important advances in biology and agriculture and potentially life-saving medicines. Comparing battles over patents on animals, human embryonic stem cells, human genes, and plants in the United States and Europe, she shows how political culture, ideology, and history shape patent system politics. Clashes over whose voices and which values matter in the patent system, as well as what counts as knowledge and whose expertise is important, look quite different in these two places. And through these debates, the United States and Europe are developing very different approaches to patent and innovation governance. Not just the first comprehensive look at the controversies swirling around biotechnology patents, Patent Politics is also the first in-depth analysis of the political underpinnings and implications of modern patent systems, and provides a timely analysis of how we can reform these systems around the world to maximize the public interest.

Join us on June 23 [2017] from 4-6 pm [elsewhere the time is listed as 4-7 pm] for a discussion on the role of the patent system in governing emerging technologies, marking the launch of Shobita Parthasarathy’s Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017).

You can find more information such as this on the Patent Politics event page,

Speakers

Keynote


  • Shobita Parthasarathy

    Fellow
    Associate Professor of Public Policy and Women’s Studies, and Director of the Science, Technology, and Public Policy Program, at the University of Michigan

Moderator


  • Eleonore Pauwels

    Senior Program Associate and Director of Biology Collectives, Science and Technology Innovation Program
    Formerly European Commission, Directorate-General for Research and Technological Development, Directorate on Science, Economy and Society

Panelists


  • Daniel Sarewitz

    Co-Director, Consortium for Science, Policy & Outcomes
    Professor of Science and Society, School for the Future of Innovation in Society

  • Richard Harris

    Award-Winning Journalist, National Public Radio
    Author of “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions”

For those who cannot attend in person, there will be a live webcast. If you can be there in person, you can RSVP here (Note: The time frame for the event is listed in some places as 4-7 pm.) I cannot find any reason for the time frame disparity. My best guess is that the discussion is scheduled for two hours with a one hour reception afterwards for those who can attend in person.

After the April 22, 2017 US March for Science

Since last Saturday’s (April 22, 2017) US March for Science, I’ve stumbled across three interesting perspectives on the ‘movement’. As I noted in my April 14, 2017 posting, the ‘march’ has reached out beyond US borders to become international in scope. (On the day, at least 18 marches were held in Canada alone.)

Canada

John Dupuis wrote about his experience as a featured speaker at the Toronto (Ontario) march in an April 24, 2017 posting on his Confessions of a Science Librarian blog (Note: Links have been removed),

My fellow presenters were Master of Ceremonies Rupinder Brar and speakers Dawn Martin-Hill, Josh Matlow, Tanya Harrison, Chelsea Rochman, Aadita Chaudhury, Eden Hennessey and Cody Looking Horse.

Here’s what I had to say:

Hi, my name is John and I’m a librarian. My librarian superpower is making lists, checking them twice and seeing who’s been naughty and who’s been nice. The nice ones are all of you out here marching for science. And the naughty ones are the ones out there that are attacking science and the environment.

Now I’ve been in the list-making business for quite a few years, making an awful lot of lists of how governments have attacked or ignored science. I did a lot of work making lists about the Harper government and their war on science. The nicest thing I’ve ever seen written about my strange little obsession was in The Guardian.

Here’s what they said, in an article titled, How science helped to swing the Canadian election.

“Things got so bad that scientists and their supporters took to the streets. They demonstrated in Ottawa. They formed an organization, Evidence for Democracy, to push back on political interference in science. Awareness-raising forums were held at campuses throughout Canada. And the onslaught on science was painstakingly documented, which tends to happen when you go after librarians.”

Yeah, watch out. Don’t go after libraries and librarians. The Harper govt learned its lesson. And we learned a lesson too. And that lesson was that keeping track of things, that painstakingly documenting all the apparently disconnected little bits and pieces of policies here, regulations changed there and a budget snipped somewhere else, it all adds up.

What before had seemed random and disconnected is suddenly a coherent story. All the dots are connected and everybody can see what’s happened. By telling the whole story, by laying it all out there for everyone to see, it’s suddenly easier for all of us to point to the list and to hold the government of the day accountable. That’s the lesson learned from making lists.

But back in 2013 what I saw the government doing wasn’t the run of the mill anti-science that we’d seen before. Prime Minister Harper’s long standing stated desire to make Canada a global energy superpower revealed the underlying motivation but it was the endless litany of program cuts, census cancellation, science library closures, regulatory changes and muzzling of government scientists that made up the action plan. But was it really a concerted action plan or was it a disconnected series of small changes that were really no big deal or just a little different from normal?

That’s where making lists comes in handy. If you’re keeping track, then, yeah, you see the plan. You see the mission, you see the goals, you see the strategy, you see the tactics. You see that the government was trying to be sneaky and stealthy and incremental and “normal” but that there was a revolution in the making. An anti-science revolution.

Fast forward to now, April 2017, and what do we see? The same game plan repeated, the same anti-science revolution under way [in the US]. Only this time not so stealthy. Instead of a steady drip, it’s a fire hose. Message control at the National Parks Service, climate change denial, slashing budgets and shutting down programs at the EPA and other vital agencies. Incompetent agency directors that don’t understand the mission of their agencies or who even want to destroy them completely.

Once again, we are called to document, document, document. Tell the stories, mobilize science supporters and hold the governments accountable at the ballot box. Hey, like the Guardian said, if we did it in Canada, maybe that game plan can be repeated too.

I invited my three government reps here to the march today, Rob Oliphant, Josh Matlow and Eric Hoskins and I invited them to march with me so we could talk about how evidence should inform public policy. Josh, of course, is up here on the podium with me. As for Rob Oliphant from the Federal Liberals and Eric Hoskins from the Ontario Liberals, well, let’s just say they never answered my tweets.

Keep track, tell the story, hold all of them from every party accountable. The lesson we learned here in Canada was that science can be a decisive issue. Real facts can mobilise people to vote against alternative facts.

Thank you.

I’m not as sure as Dupuis that science was a decisive issue in our 2015 federal election; I’d say it was a factor. More importantly, I think the 2015 election showed Canadian scientists and those who feel science is important that it is possible to give it a voice and more prominence in the political discourse.

Rwanda

Eric Leeuwerck, in an April 24, 2017 posting on one of the Agence Science-Presse blogs, describes his participation from Rwanda (I have provided a very rough translation afterwards),

Un peu partout dans le monde, samedi 22 avril 2017, des milliers de personnes se sont mobilisées pour la « march for science », #sciencemarch, « une marche citoyenne pour les sciences, contre l’obscurantisme ». Et chez moi, au Rwanda ?

J’aurais bien voulu y aller moi à une « march for science », j’aurais bien voulu me joindre aux autres voix, me réconforter dans un esprit de franche camaraderie, à marcher comme un seul homme dans les rues, à dire que oui, nous sommes là ! La science vaincra, « No science, no futur ! » En Arctique, en Antarctique, en Amérique latine, en Asie, en Europe, sur la terre, sous l’eau…. Partout, des centaines de milliers de personnes ont marché ensemble. L’Afrique s’est mobilisée aussi, il y a eu des “march for science” au Kenya, Nigeria, Ouganda…

Et au Rwanda ? Eh bien, rien… Pourquoi suivre la masse, hein ? Pourquoi est-ce que je ne me suis pas bougé le cul pour faire une « march for science » au Rwanda ? Euh… et bien… Je vous avoue que je me vois mal organiser une manif au Rwanda en fait… Une collègue m’a même suggéré l’idée mais voilà, j’ai laissé tomber au moment même où l’idée m’a traversé l’esprit… Cependant, j’avais quand même cette envie d’exprimer ma sympathie et mon appartenance à ce mouvement mondial, à titre personnel, sans vouloir parler pour les autres, avec un GIF tout simple.

“March for science” Rwanda

Je dois dire que je me sens bien souvent seul ici… Les cours de biologie de beaucoup d’écoles sont créationnistes, même au KICS (pour Kigali International Community School), une école internationale américaine (je tiens ça d’amis qui ont eu leurs enfants dans cette école). Sur son site, cette école de grande renommée ici ne cache pas ses penchants chrétiens : “KICS is a fully accredited member of the Association of Christian Schools International (ACSI) (…)” et, de plus, est reconnue par le ministère de l’éducation rwandais : “(KICS) is endorsed by the Rwandan Ministry of Education as a sound educational institution“. Et puis, il y a cette phrase sur leur page d’accueil : « Join the KICS family and impact the world for christ ».

Je réalise régulièrement des formations en pédagogie des sciences pour des profs locaux du primaire et du secondaire. Lors de ma formation sur la théorie de l’Evolution, qui a eu pas mal de succès, les enseignants de biologie m’ont confié que c’était la première fois, avec moi, qu’ils avaient eu de vrais cours sur la théorie de l’Evolution… (Je passe les débats sur l’athéisme, sur la « création » qui n’est pas un fait, sur ce qu’est un fait, qu’il ne faut pas faire « acte de foi » pour faire de la science et que donc on ne peut pas « croire » en la science, mais la comprendre…). Un thème délicat à aborder a été celui de la « construction des identités meurtrières » pour reprendre le titre du livre d’Amin Maalouf, au Rwanda comment est-ce qu’une pseudoscience, subjective, orientée politiquement et religieusement a pu mener au racisme et au génocide. On m’avait aussi formellement interdit d’en parler à l’époque, ma directrice de l’époque disait « ne te mêle pas de ça, ce n’est pas notre histoire », mais voilà, maintenant, ce thème est devenu un thème incontournable, même à l’Ecole Belge de Kigali !

Une autre formation sur l’éducation sexuelle a été très bien reçue aussi ! J’ai mis en place cette formation, aussi contre l’avis de ma directrice de l’époque (une autre) : des thèmes comme le planning familial, la contraception, l’homosexualité, gérer un débat houleux, les hormones… ont été abordées ! Première fois aussi, m’ont confié les enseignants, qu’ils ont reçu une formation objective sur ces sujets tabous.

Chaque année, je réunis un peu d’argent avec l’aide de l’École Belge de Kigali pour faire ces formations (même si mes directions ne sont pas toujours d’accord avec les thèmes ), je suis totalement indépendant et à part l’École Belge de Kigali, aucune autre institution dont j’ai sollicité le soutien n’a voulu me répondre. Mais je continue, ça relève parfois du militantisme, je l’avoue.

C’est comme mon blog, un des seuls blogs francophones de sciences en Afrique (en fait, je n’en ai jamais trouvé aucun en cherchant sur le net) dans un pays à la connexion Internet catastrophique, je me demande parfois pourquoi je continue… Je perds tellement de temps à attendre que mes pages chargent, à me reconnecter je ne sais pas combien de fois toutes les 5 minutes … En particulier lors de la saison des pluies ! Heureusement que je peux compter sur le soutien inconditionnel de mes communautés de blogueurs : le café des sciences , les Mondoblogueurs de RFI , l’Agence Science-Presse. Sans eux, j’aurais arrêté depuis longtemps ! Six ans de blogging scientifique quand même…

Alors, ce n’est pas que virtuel, vous savez ! Chaque jour, quand je vais au boulot pour donner mes cours de bio et chimie, quand j’organise mes formations, quand j’arrive à me connecter à mon blog, je « marche pour la science ».

Yeah. (De la route, de la science et du rock’n’roll : Rock’n’Science !)

(Un commentaire de soutien ça fait toujours plaisir !)

As I noted, this will be a very rough translation and anything in square brackets [] means that I’m even less sure about the translation in that bit,

Pretty much everywhere around the world on Saturday, April 22, 2017, thousands of people mobilized for the “march for science”, #sciencemarch, “a citizens’ march for science, against obscurantism”. And what about here in Rwanda?

I would have liked to go to a “march for science” myself; I would have liked to join the other voices, take comfort in a spirit of honest camaraderie, march as one through the streets, say that yes, we are here! Science will triumph! “No science, no future!” In the Arctic, in the Antarctic, in Latin America, in Asia, in Europe, on land, under water… Everywhere, hundreds of thousands of people marched together. Africa mobilized too; there were marches for science in Kenya, Nigeria, Uganda…

And in Rwanda? Well, nothing… Why follow the crowd, eh? Why didn’t I get my butt in gear and organize a “march for science” in Rwanda? Um… well… I confess I can’t really see myself organizing a demonstration in Rwanda, in fact… A colleague even suggested the idea, but I dropped it the moment it crossed my mind… Still, I wanted to express my sympathy for, and my sense of belonging to, this global movement, in a personal capacity, without presuming to speak for anyone else, with a simple GIF.

I have to say I often feel alone here… The biology courses in many schools are creationist, even at KICS (Kigali International Community School), an American international school (I have this from friends whose children attended it). On its website, this school, highly renowned here, makes no secret of its Christian leanings: “KICS is a fully accredited member of the Association of Christian Schools International (ACSI) (…)” and, moreover, it is recognized by the Rwandan Ministry of Education: “(KICS) is endorsed by the Rwandan Ministry of Education as a sound educational institution”. And then there’s this sentence on their home page: “Join the KICS family and impact the world for christ”.

I regularly run science-education training sessions for local primary and secondary teachers. During my session on the theory of evolution, which was quite successful, the biology teachers confided that it was the first time they had had real instruction in the theory of evolution… (I’ll skip the debates about atheism, about “creation” not being a fact, about what a fact is, and about how science requires no “act of faith”, so one cannot “believe” in science, only understand it…). One delicate theme was the “construction of murderous identities”, to borrow the title of Amin Maalouf’s book (in English: In the Name of Identity: Violence and the Need to Belong): how, in Rwanda, a subjective, politically and religiously driven pseudoscience could lead to racism and genocide. [For anyone not familiar with the Rwandan genocide of 1994, see this Wikipedia entry.] At the time I was formally forbidden to discuss it; my then director said, “Don’t meddle in this, it’s not our history.” But now this theme has become unavoidable, even at the Ecole Belge de Kigali (Belgian School of Kigali)!

Another training session, on sex education, was also very well received! I set that one up too, against the advice of my then director (a different one): topics such as family planning, contraception, homosexuality, managing a heated debate, hormones… were all covered! The teachers confided that it was also the first time they had received objective training on these taboo subjects.

Each year, with help from the Belgian School of Kigali, I raise a little money to run these sessions (even if my directors don’t always agree with the topics). I am totally independent and, apart from the Belgian School of Kigali, no other institution I’ve approached has been willing to respond. But I keep going; sometimes it amounts to activism, I admit.

It’s the same with my blog, one of the only French-language science blogs in Africa (in fact, I’ve never found another one when searching the net), in a country with a catastrophically poor internet connection. I sometimes ask myself why I go on… I lose so much time waiting for pages to load and reconnecting I don’t know how many times every five minutes, especially in the rainy season! Happily, I can count on the unconditional support of my blogging communities: le café des sciences, les Mondoblogueurs de RFI, l’Agence Science-Presse. Without them I would have stopped long ago! Six years of science blogging, after all…

So it’s not just virtual, you know! Each day, when I go to work to teach my biology and chemistry classes, when I organize my training sessions, when I manage to connect to my blog, I “march for science”.

(A supportive comment is always appreciated!) [http://www.sciencepresse.qc.ca/blogue/2017/04/24/march-science-rwanda]

All mistakes are mine.

US

My last bit is from an April 24, 2017 article by Jeremy Samuel Faust for Slate.com (Note: Links have been removed),

Hundreds of thousands of self-professed science supporters turned out to over 600 iterations of the March for Science around the world this weekend. Thanks to the app Periscope, I attended half a dozen of them from the comfort of my apartment, thereby assiduously minimizing my carbon footprint.

Mainly, these marches appeared to be a pleasant excuse for liberals to write some really bad (and, OK, some truly superb) puns, and put them on cardboard signs. There were also some nicely stated slogans that roused support for important concepts such as reason and data and many that decried the defunding of scientific research and ignorance-driven policy.

But here’s the problem: Little of what I observed dissuades me from my baseline belief that, even among the sanctimonious elite who want to own science (and pwn [sic] anyone who questions it), most people have no idea how science actually works. The scientific method itself is already under constant attack from within the scientific community itself and is ceaselessly undermined by its so-called supporters, including during marches like those on Saturday. [April 22, 2017] In the long run, such demonstrations will do little to resolve the myriad problems science faces and instead could continue to undermine our efforts to use science accurately and productively.

Indeed much of the sentiment of the March for Science seemed to fall firmly in the camp of people espousing a gee-whiz attitude in which science is just great and beyond reproach. They feel that way because, so often, the science they’re exposed to feels that way—it’s cherry-picked. Cherry-picking scientific findings that support an already cherished and firmly held belief (while often ignoring equally if not more compelling data that contradicts it) is epidemic—in scientific journals and in the media.

Let’s face it: People like science when it supports their views. I see this every day. When patients ask me for antibiotics to treat their common colds, I tell them that decades of science and research, let alone a basic understanding of microbiology, shows that antibiotics don’t work for cold viruses. Trust me, people don’t care. They have gotten antibiotics for their colds in the past, and, lo, they got better. (The human immune system, while a bit slower and clunkier than we’d like it to be, never seems to get the credit it deserves in these little anecdotal stories.) Who needs science when you have something mightier—personal experience?

Another example is the vocal wing of environmentalists who got up one day and decided that genetically modified organisms were bad for you. They had not one shred of evidence for this, but it just kind of felt true. As a result, responsible scientists will be fighting against these zealots for years to come. While the leaders of March for Science events are on the right side of this issue, many of its supporters are not. I’m looking at you, Bernie Sanders; the intellectual rigor behind your stance requiring GMO labelling reflects a level of scientific understanding that would likely lead to calls for self-defenestration from your own supporters if it were applied to, say, something like climate change.

But it does not stop there. Perhaps as irritating as people who know nothing about science are those who know just a little bit—just enough to think they have any idea as to what is going on. Take for example the clever cheer (and unparalleled public declaration of nerdiness):

What do we want?

Science!

When do we want it?

After peer review!

Of course, the quality of most peer-review research is somewhere between bad and unfair to the pixels that gave their lives to display it. Just this past week, a study published by the world’s most prestigious stroke research journal (Stroke), made headlines and achieved media virality by claiming a correlation between increased diet soda consumption and strokes and dementia. Oh, by the way, the authors didn’t control for body mass index [*], even though, unsurprisingly, people who have the highest BMIs had the most strokes. An earlier study that no one seems to remember showed a correlation of around the same magnitude between obesity and strokes alone. But, who cares, right? Ban diet sodas now! Science says they’re linked to strokes and dementia! By the way, Science used to say that diet sodas cause cancer. But Science was, perish the thought, wrong.

If you can get past the writer’s great disdain for just about everyone, he makes very good points.

To add some clarity with regard to “controlling for body mass index,” there’s a concept in research known as a confounding variable. Previous research shows that people with a higher body mass index (that is, people who are more obese) tend to have more strokes, so BMI is a confounding variable when studying the effect of diet soda on strokes. To control for it, you set up the research project so you can compare like with like (oranges to oranges): the stroke rates of obese people who drink x amount of diet soda against obese people who don’t, and the stroke rates of standard-weight people who drink x amount of diet soda against standard-weight people who don’t. Other aspects of the research would also have to be considered, but to control for body mass index, that’s the way I’d set it up.
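
To make that concrete, here’s a minimal sketch in Python. The cohort, the probabilities and the effect sizes are entirely made up for illustration (this is not the Stroke study’s data): obesity alone drives stroke risk, but obese people also drink more diet soda, so the naive comparison makes soda look harmful while the BMI-stratified comparison does not.

```python
import random

random.seed(1)

# Hypothetical, simulated cohort. Obesity raises stroke risk; diet soda has
# no real effect here, but obese people drink more of it (the confounding
# link), so an unstratified comparison makes soda look harmful.
def make_person():
    obese = random.random() < 0.3
    drinks_soda = random.random() < (0.6 if obese else 0.2)  # confounding link
    stroke = random.random() < (0.10 if obese else 0.02)     # risk from BMI only
    return obese, drinks_soda, stroke

cohort = [make_person() for _ in range(100_000)]

def stroke_rate(people):
    return sum(stroke for _, _, stroke in people) / len(people)

# Naive comparison (no control for BMI): appears to show a soda effect.
soda = [p for p in cohort if p[1]]
no_soda = [p for p in cohort if not p[1]]
print(f"naive: soda {stroke_rate(soda):.3f} vs no soda {stroke_rate(no_soda):.3f}")

# Stratified comparison (controlling for BMI): within each stratum the
# apparent soda effect vanishes, exposing obesity as the confounder.
for label, obese_flag in (("obese", True), ("non-obese", False)):
    stratum = [p for p in cohort if p[0] == obese_flag]
    s = [p for p in stratum if p[1]]
    ns = [p for p in stratum if not p[1]]
    print(f"{label}: soda {stroke_rate(s):.3f} vs no soda {stroke_rate(ns):.3f}")
```

Run it and the naive comparison shows a higher stroke rate among soda drinkers, while the two within-stratum comparisons show essentially no difference; that gap is exactly what “controlling for” the confounder removes.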

One point that Faust makes that isn’t made often enough and certainly not within the context of the ‘evidence-based policy movement’ and ‘marches for science’ is the great upheaval taking place within the scientific endeavour (Note: Links have been removed),

… . There are a dozen other statistical games that researchers can play to get statistical significance. Such ruses do not rise to anything approaching clinical relevance. Nevertheless, fun truthy ones like the diet soda study grab headlines and often end up changing human behaviors.

The reason this problem, what one of my friends delightfully calls statistical chicanery, is so rampant is twofold. First, academics need to “publish or perish.” If researchers don’t publish in peer-reviewed journals, their careers will be short and undistinguished. Second, large pharmaceutical companies have learned how to game the science system so that their patented designer molecules can earn them billions of dollars, often treating made-up diseases (I won’t risk public opprobrium naming those) as well as others that we, the medical establishment, literally helped create (opioid-induced constipation being a recent flagrancy).

Of course, the journals themselves have suffered because their contributors know the game. There are now dozens of stories of phony research passing muster in peer-review journals, despite being intentionally badly written. These somewhat cynical, though hilarious, exposés have largely focused on outing predatory journals that charge authors money in exchange for publication (assuming the article is “accepted” by the rigorous peer-review process; the word rigorous, by the way, now means “the credit card payment went through and your email address didn’t bounce”). But even prestigious journals have been bamboozled. The Lancet famously published fabrications linking vaccines and autism in 1998, and it took the journal 12 years to retract the study. Meanwhile, the United States Congress took only three years for its own inquiry to debunk any link. You know it’s bad when the U.S. Congress is running circles around the editorial board of one of the world’s most illustrious medical journals. Over the last couple of decades, multiple attempts to improve the quality of peer-review adjudication have disappointingly and largely failed to improve the situation.

While the scientific research community is in desperate need of an overhaul, the mainstream media (and social media influencers) could in the meantime play a tremendously helpful role in alleviating the situation. Rather than indiscriminately repeating the results of the latest headline-grabbing scientific journal article and quoting the authors who wrote the paper, journalists should also reach out to skeptics and use their comments not just to provide (false) balance in their articles but to assess whether the finding really warrants an entire article of coverage in the first place. Headlines should be vetted not for impact and virality but for honesty. As a reader, be wary of any headline that includes the phrase “Science says,” as well as anything that states that a particular study “proves” that a particular exposure “causes” a particular disease. Smoking causes cancer, heart disease, and emphysema, and that’s about as close to a causal statement as actual scientists will make, when it comes to health. Most of what you read and hear about turns out to be mere associations, and mostly fairly weak ones, at that.

Faust refers mostly to medical research but many of his comments are applicable to other science research as well. By the way, Faust has written an excellent description of p-values for which, if for no other reason, you should read his piece in its entirety.

One last comment about Faust’s piece: he exhorts journalists to take more care in their writing but fails to recognize the pressures on journalists and those who participate in social media. Briefly, journalists are under pressure to produce. Many of the journalists who write about science don’t know much about it, and even the ones who have a science background may be quite ignorant about the particular piece of science they are covering, i.e., a physicist might have some problems covering medical research and vice versa. Also, mainstream media are in trouble as they struggle to find revenue models.

As for those of us who blog and others in the social media environment, we are a mixed bag in much the same way that mainstream media are. If you get your science from gossip rags such as the National Enquirer, it’s not likely to be as reliable as what you’d expect from The Guardian or The New York Times. Still, even those prestigious publications have gotten it quite wrong on occasion.

In the end, readers (scientists, journalists, bloggers, etc.) need to be skeptical. It’s also helpful to be humble, or at least willing to admit you’ve made a mistake (confession: I have my share on this blog; they’re noted when I’ve found them or when they’ve been pointed out to me).

Final comments

Hopefully, this has given you a taste of the wide range of experiences and perspectives on the April 22, 2017 March for Science.

European Commission has issued an evaluation of nanomaterial risk frameworks and tools

Despite complaints that there should be more of it, there has been some research into the risks associated with nanomaterials. While additional research would be welcome, it’s perhaps more imperative that standardized testing and risk frameworks be developed so that, for example, carbon nanotube safety research in Japan can be compared with similar research in the Netherlands, the US, and elsewhere. This March 15, 2017 news item on Nanowerk features some research analyzing risk assessment frameworks and tools in Europe,

A recent study has evaluated frameworks and tools used in Europe to assess the potential health and environmental risks of manufactured nanomaterials. The study identifies a trend towards tools that provide protocols for conducting experiments, which enable more flexible and efficient hazard testing. Among its conclusions, however, it notes that no existing frameworks meet all the study’s evaluation criteria and calls for a new, more comprehensive framework.

A March 9, 2017 news alert in the European Commission’s Science for Environment Policy series, which originated the news item, provides more detail (Note: Links have been removed),

Nanotechnology is identified as a key emerging technology in the EU’s growth strategy, Europe 2020. It has great potential to contribute to innovation and economic growth and many of its applications have already received large investments. However, there are some uncertainties surrounding the environmental, health and safety risks of manufactured nanomaterials. For effective regulation, careful scientific analysis of their potential impacts is needed, as conducted through risk assessment exercises.

This study, conducted under the EU-funded MARINA project, reviewed existing frameworks and tools for risk assessing manufactured nanomaterials. The researchers define a framework as a ‘conceptual paradigm’ of how a risk assessment should be conducted and understood, and give the REACH chemical safety assessment as an example. Tools are defined as implements used to carry out a specific task or function, such as experimental protocols, computer models or databases.

In all, 12 frameworks and 48 tools were evaluated. These were identified from other studies and projects. The frameworks were assessed against eight criteria which represent different strengths, such as whether they consider properties specific to nanomaterials, whether they consider the entire life cycle of a nanomaterial and whether they include careful planning and prioritise objectives before the risk assessment is conducted.

The tools were assessed against seven criteria, such as ease of use, whether they provide quantitative information and if they clearly communicate uncertainty in their results. The researchers defined the criteria for both frameworks and tools by reviewing other studies and by interviewing staff at organisations who develop tools.

The evaluation was thus able to produce a list of strengths and areas for improvement for the frameworks and tools, based on whether they meet each of the criteria. Among its many findings, the evaluation showed that most of the frameworks stress that ‘problem formulation’, which sets the goals and scope of an assessment during the planning process, is essential to avoid unnecessary testing. In addition, most frameworks consider routes of exposure in the initial stages of assessment, which is beneficial as it can exclude irrelevant exposure routes and avoid unnecessary tests.

However, none of the frameworks met all eight of the criteria. The study therefore recommends that a new, comprehensive framework is developed that meets all criteria. Such a framework is needed to inform regulation, the researchers say, and should integrate human health and environmental factors, and cover all stages of the life cycle of a product containing nanomaterials.

The evaluation of the tools suggested that many of them are designed to screen risks, and not necessarily to support regulatory risk assessment. However, their strengths include a growing trend in quantitative models, which can assess uncertainty; for example, one tool analysed can identify uncertainties in its results that are due to gaps in knowledge about a material’s origin, characteristics and use.

The researchers also identified a growing trend in tools that provide protocols for experiments, such as identifying materials and test hazards, which are reproducible across laboratories. These tools could lead to a shift from expensive case-by-case testing for risk assessment of manufactured nanomaterials towards a more efficient process based on groupings of nanomaterials; and ‘read-across’ methods, where the properties of one material can be inferred without testing, based on the known properties of a similar material. The researchers do note, however, that although read-across methods are well established for chemical substances, they are still being developed for nanomaterials. To improve nanomaterial read-across methods, they suggest that more data are needed on the links between nanomaterials’ specific properties and their biological effects.
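
To give a sense of what ‘read-across’ means in practice, here is a toy sketch of the idea; it is my own illustration, not one of the MARINA project’s tools, and every material name, descriptor and number below is invented. The property of an untested nanomaterial is inferred from the most similar material for which measured data exist.

```python
import math

# Invented descriptor data: each material is described by
# (size in nm, surface charge in mV, solubility index), paired with a
# measured toxicity score where one exists.
measured = {
    "MWCNT-A": ((30.0, -25.0, 0.10), 0.80),
    "MWCNT-B": ((45.0, -20.0, 0.20), 0.70),
    "ZnO-A":   ((20.0,  15.0, 0.90), 0.40),
}

def read_across(query):
    """Infer a property from the most similar measured material
    (nearest neighbour in descriptor space, Euclidean distance)."""
    name, (_, toxicity) = min(
        measured.items(),
        key=lambda item: math.dist(item[1][0], query),
    )
    return name, toxicity

# Untested material: read its toxicity across from the nearest analogue.
analogue, inferred = read_across((40.0, -22.0, 0.15))
print(f"nearest analogue: {analogue}, inferred toxicity score: {inferred}")
```

In real read-across the hard part is choosing descriptors that actually predict biological effects, which is why the researchers call for more data linking nanomaterials’ specific properties to those effects.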

That’s all, folks.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between the US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as standard. Television is a classic example but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“It’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now we are seeing the prospect of dark factories, which can run without lights because there are no people in them, starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
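
As a quick back-of-the-envelope check on the scale those coefficients imply, here’s the arithmetic applied to a hypothetical labour market. The 500,000-worker size and the robot count below are my own invented inputs, not figures from the study, and the two printed estimates are different quantities in the paper (an aggregate jobs-per-robot figure versus a local employment-rate effect), so they need not agree.

```python
# Invented inputs for illustration; the coefficients are the ones quoted
# from the NBER working paper above.
workers = 500_000
robots_added = 500  # i.e., one robot per thousand workers

# Headline estimate: roughly 6.2 jobs lost per robot added.
print(f"~{robots_added * 6.2:,.0f} jobs lost at 6.2 workers per robot")

# Employment-rate estimate: a 0.18-0.34 percentage-point drop per robot
# per thousand workers.
low, high = workers * 0.0018, workers * 0.0034
print(f"~{low:,.0f} to {high:,.0f} jobs via the employment-rate effect")
```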

I can’t help wondering whether the US Secretary of the Treasury’s obliviousness to what is going on in the workplace is representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc. What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research that contradicts a top-tier Trump government official or makes one look foolish? And will it be suppressed?

You can find the report “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper Series, Working Paper 23285, released March 2017) here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way), where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

‘Smart’ fabric that’s bony

Researchers at Australia’s University of New South Wales (UNSW) have devised a means of ‘weaving’ a material that mimics the bone tissue periosteum, according to a Jan. 11, 2017 news item on ScienceDaily,

For the first time, UNSW [University of New South Wales] biomedical engineers have woven a ‘smart’ fabric that mimics the sophisticated and complex properties of one of nature’s ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to ‘intelligent’ compression bandages for deep-vein thrombosis that respond to the wearer’s movement and safer steel-belt radial tyres.

A Jan. 11, 2017 UNSW press release on EurekAlert, which originated the news item, expands on the theme,

Many animal and plant tissues exhibit ‘smart’ and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable ‘bottom-up’ approaches by researchers has stymied their ability to use smart tissues to create advanced functional materials.

UNSW’s Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

“The result is a series of textile swatch prototypes that mimic periosteum’s smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles,” Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high fidelity imaging system to investigate and map its architecture.

“We then tested the feasibility of rendering periosteum’s natural tissue weaves using computer-aided design software,” Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature’s architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

“The challenge with using collagen and elastin is that their fibres are too small to fit into the loom. So we used an elastic material that mimics elastin and silk that mimics collagen,” Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes was woven, using specific combinations of these elastin- and collagen-mimicking fibres in a twill pattern designed to mirror periosteum’s weave. Mechanical testing of the swatches showed they exhibited properties similar to those found in periosteum’s natural collagen and elastin weave.

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications – one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials – the UNSW team is ultimately focused on the machine’s human potential.

“Our longer term goal is to weave biological tissues – essentially human body parts – in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum,” Ms Ng said.

An NHMRC [National Health and Medical Research Council] development grant received in November [2016] will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney’s Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the ‘smart’ technology, within three years.
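For readers curious about what ‘weaving a pattern of mechanical properties’ looks like in computing terms: a jacquard loom raises or lowers each warp thread individually according to a binary lift plan (historically, the punch cards). Here is a minimal Python sketch of my own, not the UNSW team’s actual software, that generates the lift plan for a simple 2/2 twill, the weave family the swatches reportedly mirror,

# Lift plan for a 2/2 twill: on each pick (row), a warp thread (column)
# is raised if it falls in a 2-up/2-down cycle that shifts by one thread
# per pick, producing the twill diagonal. A jacquard lift plan works the
# same way, just with thousands of independently controlled threads.

def twill_lift_plan(picks, warps, up=2, down=2):
    cycle = up + down
    return [[1 if (w - p) % cycle < up else 0 for w in range(warps)]
            for p in range(picks)]

for row in twill_lift_plan(picks=8, warps=16):
    print("".join("X" if lifted else "." for lifted in row))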

In searching for more information about this work, I found a Winter 2015 article (PDF; pp. 8-11) by Amy Coopes and Steve Offner for UNSW Magazine about Knothe Tate and her work (Note: In Australia, winter would be what we in the Northern Hemisphere consider summer),

Tucked away in a small room in UNSW’s Graduate School of Biomedical Engineering sits a 19th century–era weaver’s wooden loom. Operated by punch cards and hooks, the machine was the first rudimentary computer when it was unveiled in 1801. While on the surface it looks like a standard Jacquard loom, it has been enhanced with motherboards integrated into each of the loom’s five hook modules and connected to a computer. This state-of-the-art technology means complex algorithms control each of the 5,000 feed-in fibres with incredible precision.

That capacity means the loom can weave with an extraordinary variety of substances, from glass and titanium to rayon and silk, a development that has attracted industry attention around the world.

The interest lies in the natural advantage woven materials have over other manufactured substances. Instead of manipulating material to create new shades or hues as in traditional weaving, the fabrics’ mechanical properties can be modulated, to be stiff at one end, for example, and more flexible at the other.

“Instead of a pattern of colours we get a pattern of mechanical properties,” says Melissa Knothe Tate, UNSW’s Paul Trainor Chair of Biomedical Engineering. “Think of a rope; it’s uniquely good in tension and in bending. Weaving is naturally strong in that way.”


The interface of mechanics and physiology is the focus of Knothe Tate’s work. In March [2015], she travelled to the United States to present another aspect of her work at a meeting of the international Orthopaedic Research Society in Las Vegas. That project – which has been dubbed “Google Maps for the body” – explores the interaction between cells and their environment in osteoporosis and other degenerative musculoskeletal conditions such as osteoarthritis.

Using previously top-secret semiconductor technology developed by optics giant Zeiss, and the same approach used by Google Maps to locate users with pinpoint accuracy, Knothe Tate and her team have created “zoomable” anatomical maps from the scale of a human joint down to a single cell.

She has also spearheaded a groundbreaking partnership that includes the Cleveland Clinic, and Brown and Stanford universities to help crunch terabytes of data gathered from human hip studies – all processed with the Google technology. Analysis that once took 25 years can now be done in a matter of weeks, bringing researchers ever closer to a set of laws that govern biological behaviour. [p. 9]
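The Google Maps comparison is apt: web maps cope with enormous images by pre-cutting them into a pyramid of fixed-size tiles, one set per zoom level, so a viewer only ever loads the handful of tiles in view. Here’s a rough Python sketch of the arithmetic; the field-of-view and resolution figures are my own round numbers, not the specifications of the Zeiss/Knothe Tate pipeline,

import math

# How many zoom levels would a web-map-style tile pyramid need to span
# a human hip joint down to single cells? Illustrative numbers only.
field_of_view_m = 0.05   # ~5 cm across a hip joint
pixel_size_m = 0.5e-6    # ~0.5 micrometre per pixel, enough to resolve a cell
tile_px = 256            # standard web-map tile width

full_width_px = field_of_view_m / pixel_size_m          # 100,000 px across
levels = math.ceil(math.log2(full_width_px / tile_px))  # doublings needed

print(f"Full-resolution image: {full_width_px:,.0f} px across")
print(f"Zoom levels needed: {levels}")
print(f"Tiles at the deepest level: {math.ceil(full_width_px / tile_px) ** 2:,}")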

I gather she was recruited from the US to work at the University of New South Wales, and this article was meant to highlight why they recruited her and to promote the university’s Graduate School of Biomedical Engineering, where she holds the Paul Trainor Chair of Biomedical Engineering.

Getting back to 2017, here’s a link to and citation for the paper,

Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials by Joanna L. Ng, Lillian E. Knothe, Renee M. Whan, Ulf Knothe & Melissa L. Knothe Tate. Scientific Reports 7, Article number: 40396 (2017). DOI: 10.1038/srep40396. Published online 11 January 2017.

This paper is open access.

One final comment: that’s a lot of people (three out of five) with the last name Knothe in the paper’s author list.

*’the bone tissue’ changed to ‘bone tissue’ on July 17,2017.

Robots judge a beauty contest

I have a lot of respect for a good public relations (PR) gimmick, and a beauty contest judged by robots (or, more accurately, artificial intelligence) is a provocative idea wrapped up in a good one. A July 12, 2016 In Silico Medicine press release on EurekAlert reveals more,

Beauty.AI 2.0, a platform where human beauty is evaluated by a jury of robots and where algorithm developers compete on novel applications of machine intelligence to perception, is supported by Ernst and Young.

“We were very impressed by E&Y’s recent advertising campaign with a robot hand holding a beautiful butterfly and a slogan “How human is your algorithm?” and immediately invited them to participate. This slogan captures the very essence of our contest, which is constantly exploring new ideas in machine perception of humans”, said Anastasia Georgievskaya, Managing Scientist at Youth Laboratories, the organizer of Beauty.AI.

The Beauty.AI contest is supported by many innovative companies from the US, Europe, and Asia, with some of the top cosmetics companies participating in collaborative research projects. Imagene Labs, a Singapore company operating across Asia and one of the leaders in linking facial and biological information, is a gold sponsor and research partner of the contest.

There are many approaches to evaluating human beauty. Features like symmetry, pigmentation, pimples, wrinkles may play a role and similarity to actors, models and celebrities may be used in the calculation of the overall score. However, other innovative approaches have been proposed. A robot developed by Insilico Medicine compares the chronological age with the age predicted by a deep neural network. Another team is training an artificially-intelligent system to identify features that contribute to the popularity of the people on dating sites.

“We look forward to collaborating with the Youth Laboratories team to create new AI algorithms. These will eventually allow consumers to objectively evaluate how well their wellness interventions – such as diet, exercise, skincare and supplements – are working. Based on the results they can then fine tune their approach to further improve their well-being and age better”, said Jia-Yi Har, Vice President of Imagene Labs.

The contest is open to anyone with a modern smartphone running either the Android or iOS operating system, and the Beauty.AI 2.0 app can be downloaded for free from either the Google or Apple app markets. Programmers and companies can participate by submitting their algorithms to the organizers through the Beauty.AI website.

“The beauty of Beauty.AI pageants is that algorithms are much more impartial than humans, and we are trying to prevent any racial bias and run the contest in multiple age categories. Most of the popular beauty contests discriminate by age, gender, marital status, body weight and race. Algorithms are much less partial”, said Alex Shevtsov, CEO of Youth Laboratories.
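The Insilico Medicine approach mentioned in the release – scoring the gap between chronological age and an age predicted by a deep neural network – reduces to very simple logic once a trained model exists. Here’s a hedged Python sketch; predict_age is a stand-in for a real model, and the scoring formula is my guess, not Insilico Medicine’s published method,

def predict_age(photo):
    # Stand-in for a trained deep neural network age estimator;
    # returns a fixed value so the sketch runs. Substitute a real
    # trained model here.
    return 27.4

def youthfulness_score(photo, chronological_age):
    apparent_age = predict_age(photo)
    gap = chronological_age - apparent_age    # positive = looks younger
    return max(0.0, gap) / chronological_age  # normalised to the real age

print(youthfulness_score(photo=None, chronological_age=33))  # ~0.17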

Very interesting take on beauty and bias. I wonder if they’re building change into their algorithms. After all, standards for beauty don’t remain static; they change over time.

Unfortunately, that question isn’t asked in Wency Leung’s July 4, 2016 article on the robot beauty contest for the Globe and Mail, but she does provide more details about the contest and insight into the world of international cosmetics companies and their use of technology,

Teaching computers about aesthetics involves designing sophisticated algorithms to recognize and measure features like wrinkles, face proportions, blemishes and skin colour. And the beauty industry is rapidly embracing these high-tech tools to respond to consumers’ demand for products that suit their individual tastes and attributes.

Companies like Sephora and Avon, for instance, are using face simulation technology to provide apps that allow customers to virtually try on and shop for lipsticks and eye shadows using their mobile devices. Skincare producers are using similar technologies to track and predict the effects of serums and creams on various skin types. And brands like L’Oréal’s Lancôme are using facial analysis to read consumers’ skin tones to create personalized foundations.

“The more we’re able to use these tools like augmented reality [and] artificial intelligence to provide new consumer experiences, the more we can move to customizing and personalizing products for every consumer around the world, no matter what their skin tone is, no matter where they live, no matter who they are,” says Guive Balooch, global vice-president of L’Oréal’s technology incubator.

Balooch was tasked with starting up the company’s tech research hub four years ago, with a mandate to predict and invent solutions to how consumers would choose and use products in the future. Among its innovations, his team has come up with the Makeup Genius app, a virtual mirror that allows customers to try on products on a mobile screen, and a device called My UV Patch, a sticker sensor that users wear on their skin, which informs them through an app how much UV exposure they get.

These tools may seem easy enough to use, but their simplicity belies the work that goes on behind the scenes. To create the Makeup Genius app, for example, Balooch says the developers sought expertise from the animation industry to enable users to see themselves move onscreen in real time. The developers also brought in hundreds of consumers with different skin tones to test real products in the lab, and they tested the app on some 100,000 images in more than 40 lighting conditions, to ensure the colours of makeup products appeared the same in real life as they did onscreen, Balooch says.
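Measuring ‘face proportions’ of the kind Leung describes typically starts from facial landmarks – reference points such as eye corners, nose tip and chin located by a detector – after which the aesthetic features are plain coordinate arithmetic. A toy Python sketch with hand-typed landmark coordinates (the points and ratios are illustrative, not any company’s actual metric),

import math

landmarks = {  # (x, y) in pixels; invented values a detector would supply
    "left_eye": (120, 140), "right_eye": (200, 140),
    "nose_tip": (160, 190), "chin": (160, 260),
    "left_cheek": (95, 190), "right_cheek": (225, 190),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
face_width = dist(landmarks["left_cheek"], landmarks["right_cheek"])
face_length = dist(landmarks["left_eye"], landmarks["chin"])  # rough proxy

print(f"eye span / face width: {eye_span / face_width:.2f}")
print(f"face length / face width: {face_length / face_width:.2f}")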

The article is well worth reading in its entirety.

For the seriously curious, you can find Beauty AI here, In Silico Medicine here, and Imagene Labs here. I cannot find a website for Youth Laboratories featuring Anastasia Georgievskaya.

I last wrote about In Silico Medicine in a May 31, 2016 post about deep learning, wrinkles, and aging.

Nanoparticles in baby formula

Needle-like particles of hydroxyapatite found in infant formula by ASU [Arizona State University] researchers. Westerhoff and Schoepf/ASU, CC BY-ND

Nanowerk is featuring an essay about hydroxyapatite nanoparticles in baby formula written by Dr. Andrew Maynard in a May 17, 2016 news item (Note: A link has been removed),

There’s a lot of stuff you’d expect to find in baby formula: proteins, carbs, vitamins, essential minerals. But parents probably wouldn’t anticipate finding extremely small, needle-like particles. Yet this is exactly what a team of scientists here at Arizona State University [ASU] recently discovered.

The research, commissioned and published by Friends of the Earth (FoE) – an environmental advocacy group – analyzed six commonly available off-the-shelf baby formulas (liquid and powder) and found nanometer-scale needle-like particles in three of them. The particles were made of hydroxyapatite – a poorly soluble calcium-rich mineral. Manufacturers use it to regulate acidity in some foods, and it’s also available as a dietary supplement.

Andrew’s May 17, 2016 essay first appeared on The Conversation website,

Looking at these particles at super-high magnification, it’s hard not to feel a little anxious about feeding them to a baby. They appear sharp and dangerous – not the sort of thing that has any place around infants. …

… questions like “should infants be ingesting them?” make a lot of sense. However, as is so often the case, the answers are not quite so straightforward.

Andrew begins by explaining about calcium and hydroxyapatite (from The Conversation),

Calcium is an essential part of a growing infant’s diet, and is a legally required component in formula. But not necessarily in the form of hydroxyapatite nanoparticles.

Hydroxyapatite is a tough, durable mineral. It’s naturally made in our bodies as an essential part of bones and teeth – it’s what makes them so strong. So it’s tempting to assume the substance is safe to eat. But just because our bones and teeth are made of the mineral doesn’t automatically make it safe to ingest outright.

The issue here is what the hydroxyapatite in formula might do before it’s digested, dissolved and reconstituted inside babies’ bodies. The size and shape of the particles ingested has a lot to do with how they behave within a living system.

He then discusses size and shape, which are important at the nanoscale,

Size and shape can make a difference between safe and unsafe when it comes to particles in our food. Small particles aren’t necessarily bad. But they can potentially get to parts of our body that larger ones can’t reach. Think through the gut wall, into the bloodstream, and into organs and cells. Ingested nanoscale particles may be able to interfere with cells – even beneficial gut microbes – in ways that larger particles don’t.

These possibilities don’t necessarily make nanoparticles harmful. Our bodies are pretty well adapted to handling naturally occurring nanoscale particles – you probably ate some last time you had burnt toast (carbon nanoparticles), or poorly washed vegetables (clay nanoparticles from the soil). And of course, how much of a material we’re exposed to is at least as important as how potentially hazardous it is.

Yet there’s a lot we still don’t know about the safety of intentionally engineered nanoparticles in food. Toxicologists have started paying close attention to such particles, just in case their tiny size makes them more harmful than otherwise expected.

Currently, hydroxyapatite is considered safe at the macroscale by the US Food and Drug Administration (FDA). However, the agency has indicated that nanoscale versions of safe materials such as hydroxyapatite may not be safe food additives. From Andrew’s May 17, 2016 essay,

Putting particle size to one side for a moment, hydroxyapatite is classified by the US Food and Drug Administration (FDA) as “Generally Regarded As Safe.” That means it considers the material safe for use in food products – at least in a non-nano form. However, the agency has raised concerns that nanoscale versions of food ingredients may not be as safe as their larger counterparts.

Some manufacturers may be interested in the potential benefits of “nanosizing” – such as increasing the uptake of vitamins and minerals, or altering the physical, textural and sensory properties of foods. But because decreasing particle size may also affect product safety, the FDA indicates that intentionally nanosizing already regulated food ingredients could require regulatory reevaluation.

In other words, even though non-nanoscale hydroxyapatite is “Generally Regarded As Safe,” according to the FDA, the safety of any nanoscale form of the substance would need to be reevaluated before being added to food products.

Despite this size-safety relationship, the FDA confirmed to me that the agency is unaware of any food substance intentionally engineered at the nanoscale that has enough generally available safety data to determine it should be “Generally Regarded As Safe.”

Casting further uncertainty on the use of nanoscale hydroxyapatite in food, a 2015 report from the European Scientific Committee on Consumer Safety (SCCS) suggests there may be some cause for concern when it comes to this particular nanomaterial.

Prompted by the use of nanoscale hydroxyapatite in dental products to strengthen teeth (which they consider “cosmetic products”), the SCCS reviewed published research on the material’s potential to cause harm. Their conclusion?

The available information indicates that nano-hydroxyapatite in needle-shaped form is of concern in relation to potential toxicity. Therefore, needle-shaped nano-hydroxyapatite should not be used in cosmetic products.

This recommendation was based on a handful of studies, none of which involved exposing people to the substance. Researchers injected hydroxyapatite needles directly into the bloodstream of rats. Others exposed cells outside the body to the material and observed the effects. In each case, there were tantalizing hints that the small particles interfered in some way with normal biological functions. But the results were insufficient to indicate whether the effects were meaningful in people.

As Andrew also notes in his essay, none of the studies examined by the SCCS (European Scientific Committee on Consumer Safety) looked at what happens to nano-hydroxyapatite once it enters your gut, and that is what the researchers at Arizona State University were considering (from the May 17, 2016 essay),

The good news is that, according to preliminary studies from ASU researchers, hydroxyapatite needles don’t last long in the digestive system.

This research is still being reviewed for publication. But early indications are that as soon as the needle-like nanoparticles hit the highly acidic fluid in the stomach, they begin to dissolve. So fast in fact, that by the time they leave the stomach – an exceedingly hostile environment – they are no longer the nanoparticles they started out as.

These findings make sense since we know hydroxyapatite dissolves in acids, and small particles typically dissolve faster than larger ones. So maybe nanoscale hydroxyapatite needles in food are safer than they sound.

This doesn’t mean that the nano-needles are completely off the hook, as some of them may get past the stomach intact and reach more vulnerable parts of the gut. But the findings do suggest these ultra-small needle-like particles could be an effective source of dietary calcium – possibly more so than larger or less needle-like particles that may not dissolve as quickly.

Intriguingly, recent research has indicated that calcium phosphate nanoparticles form naturally in our stomachs and go on to be an important part of our immune system. It’s possible that rapidly dissolving hydroxyapatite nano-needles are actually a boon, providing raw material for these natural and essential nanoparticles.
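The claim that small particles dissolve faster than large ones follows straight from geometry: in the classic Noyes-Whitney picture, dissolution rate scales with the surface area exposed to the solvent, and surface area per unit mass grows as particles shrink. A quick illustrative Python calculation of my own, idealizing the particles as spheres,

DENSITY = 3.16e3  # approximate density of hydroxyapatite, kg/m^3

def specific_surface_area(radius_m):
    # Sphere: (4*pi*r^2) / (rho * 4/3*pi*r^3) = 3 / (rho * r), in m^2/kg
    return 3.0 / (DENSITY * radius_m)

for label, radius in [("100 nm nanoparticle", 50e-9),
                      ("10 micrometre grain", 5e-6)]:
    print(f"{label}: {specific_surface_area(radius):,.0f} m^2 per kg")

# The 100 nm particle exposes ~100x more surface per gram, so, all else
# being equal, it dissolves that much faster in stomach acid -- consistent
# with the ASU team's preliminary finding. Real formula needles are
# elongated, which raises the ratio further.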

While it’s comforting to know that preliminary research suggests the hydroxyapatite nanoparticles are likely safe for use in food products, Andrew points out that more needs to be done to ensure safety (from the May 17, 2016 essay),

And yet, even if these needle-like hydroxyapatite nanoparticles in infant formula are ultimately a good thing, the FoE report raises a number of unresolved questions. Did the manufacturers knowingly add the nanoparticles to their products? How are they and the FDA ensuring the products’ safety? Do consumers have a right to know when they’re feeding their babies nanoparticles?

Whether the manufacturers knowingly added these particles to their formula is not clear. At this point, it’s not even clear why they might have been added, as hydroxyapatite does not appear to be a substantial source of calcium in most formula. …

And regardless of the benefits and risks of nanoparticles in infant formula, parents have a right to know what’s in the products they’re feeding their children. In Europe, food ingredients must be legally labeled if they are nanoscale. In the U.S., there is no such requirement, leaving American parents to feel somewhat left in the dark by producers, the FDA and policy makers.

As far as I’m aware, the Canadian situation is much the same as in the US. If a material is considered safe at the macroscale, there is no requirement to indicate that a nanoscale version of the material is in the product.

I encourage you to read Andrew’s essay in its entirety. As for the FoE report (Nanoparticles in baby formula: Tiny new ingredients are a big concern), that is here.

Cities as incubators of technological and economic growth: from the rustbelt to the brainbelt

An April 10, 2016 news article by Xumei Dong on the timesunion website casts light on what some feel is an emerging ‘brainbelt’ (Note: Links have been removed),

Albany [New York state, US], in the forefront of nanotechnology research, is one of the fastest-growing cities for tech jobs, according to a new book exploring hot spots of innovation across the globe.

“You have GlobalFoundries, which has thousands of employees working in one of the most modern plants in the world,” says Antoine van Agtmael, the Dutch-born investor who wrote “The Smartest Places on Earth: Why Rustbelts Are the Emerging Hotspots of Global Innovation” with Dutch journalist Fred Bakker.

Their book, mentioned in a Brookings Institution panel discussion last week [April 6, 2016], lists Albany as a leading innovation hub — part of an emerging “brainbelt” in the United States.

The Brookings Institution’s The smartest places on Earth: Why rustbelts are the emerging hotspots of global innovation event page provides more details and includes an embedded video of the event (running time: roughly 1 hour 17 mins.) (Note: A link has been removed),

The conventional wisdom in manufacturing has long held that the key to maintaining a competitive edge lies in making things as cheaply as possible, which saw production outsourced to the developing world in pursuit of ever-lower costs. In contradiction to that prevailing wisdom, authors Antoine van Agtmael, a Brookings trustee, and Fred Bakker crisscrossed the globe and found that the economic tide is beginning to shift from its obsession with cheap goods to the production of smart ones.

Their new book, “The Smartest Places on Earth” (PublicAffairs, 2016), examines this changing dynamic and the transformation of “rustbelt” cities, the former industrial centers of the U.S. and Europe, into a “brainbelt” of design and innovation.

On Wednesday, April 6 [2016], Centennial Scholar Bruce Katz and the Metropolitan Policy Program hosted an event discussing these emerging hotspots and how cities such as Akron, Albany, Raleigh-Durham, Minneapolis-St. Paul, and Portland in the United States, and Eindhoven, Malmo, Dresden, and Oulu in Europe are seizing the initiative and recovering their economic strength.

You can find the book here or if a summary and biographies of the authors will suffice, there’s this,

The remarkable story of how rustbelt cities such as Akron and Albany in the United States and Eindhoven in Europe are becoming the unlikely hotspots of global innovation, where sharing brainpower and making things smarter—not cheaper—is creating a new economy that is turning globalization on its head

Antoine van Agtmael and Fred Bakker counter recent conventional wisdom that the American and northern European economies have lost their initiative in innovation and their competitive edge by focusing on an unexpected and hopeful trend: the emerging sources of economic strength coming from areas once known as “rustbelts” that had been written off as yesterday’s story.

In these communities, a combination of forces—visionary thinkers, local universities, regional government initiatives, start-ups, and big corporations—have created “brainbelts.” Based on trust, a collaborative style of working, and freedom of thinking prevalent in America and Europe, these brainbelts are producing smart products that are transforming industries by integrating IT, sensors, big data, new materials, new discoveries, and automation. From polymers to medical devices, the brainbelts have turned the tide from cheap, outsourced production to making things smart right in our own backyard. The next emerging market may, in fact, be the West.

About Antoine van Agtmael and Fred Bakker

Antoine van Agtmael is senior adviser at Garten Rothkopf, a public policy advisory firm in Washington, DC. He was a founder, CEO, and CIO of Emerging Markets Management LLC; previously he was deputy director of the capital markets department of the International Finance Corporation (“IFC”), the private sector oriented affiliate of the World Bank, and a division chief in the World Bank’s borrowing operations. He was an adjunct professor at Georgetown Law Center and taught at the Harvard Institute of Politics. Mr. van Agtmael is chairman of the NPR Foundation, a member of the board of NPR, and chairman of its Investment Committee. He is also a trustee of The Brookings Institution and cochairman of its International Advisory Council. He is on the President’s Council on International Activities at Yale University, the Advisory Council of Johns Hopkins University’s Paul H. Nitze School of Advanced International Studies (SAIS), and is a member of the Council on Foreign Relations.

Alfred Bakker, until his recent retirement, was a journalist specializing in monetary and financial affairs with Het Financieele Dagblad, the “Financial Times of Holland,” serving as deputy editor, editor-in-chief and CEO. In addition to his writing and editing duties, he helped develop the company from a newspaper publisher into a multimedia company, developing several websites, a business news radio channel, and a quarterly business magazine, FD Outlook, and was responsible for the establishment of FD Intelligence.

A hardcover copy of the book is $25.99, presumably in US currency.

A 2nd European roadmap for graphene

About 2.5 years ago, Nature magazine published an article titled “A roadmap for graphene” (behind a paywall) online in Oct. 2012. I see that at least two of the 2012 authors, Konstantin (Kostya) Novoselov and Vladimir Fal’ko, are party to this second, more comprehensive roadmap featured in a Feb. 24, 2015 news item on Nanowerk,

In October 2013, academia and industry came together to form the Graphene Flagship. Now with 142 partners in 23 countries, and a growing number of associate members, the Graphene Flagship was established following a call from the European Commission to address big science and technology challenges of the day through long-term, multidisciplinary R&D efforts.

A Feb. 24, 2015 University of Cambridge news release, which originated the news item, describes the roadmap in more detail,

In an open-access paper published in the Royal Society of Chemistry journal Nanoscale, more than 60 academics and industrialists lay out a science and technology roadmap for graphene, related two-dimensional crystals, other 2D materials, and hybrid systems based on a combination of different 2D crystals and other nanomaterials. The roadmap covers the next ten years and beyond, and its objective is to guide the research community and industry toward the development of products based on graphene and related materials.

The roadmap highlights three broad areas of activity. The first task is to identify new layered materials, assess their potential, and develop reliable, reproducible and safe means of producing them on an industrial scale. Identification of new device concepts enabled by 2D materials is also called for, along with the development of component technologies. The ultimate goal is to integrate components and structures based on 2D materials into systems capable of providing new functionalities and application areas.

Eleven science and technology themes are identified in the roadmap. These are: fundamental science, health and environment, production, electronic devices, spintronics, photonics and optoelectronics, sensors, flexible electronics, energy conversion and storage, composite materials, and biomedical devices. The roadmap addresses each of these areas in turn, with timelines.

Research areas outlined in the roadmap correspond broadly with current flagship work packages, with the addition of a work package devoted to the growing area of biomedical applications, to be included in the next phase of the flagship. A recent independent assessment has confirmed that the Graphene Flagship is firmly on course, with hundreds of research papers, numerous patents and marketable products to its name.

Roadmap timelines predict that, before the end of the ten-year period of the flagship, products will be close to market in the areas of flexible electronics, composites, and energy, as well as advanced prototypes of silicon-integrated photonic devices, sensors, high-speed electronics, and biomedical devices.

“This publication concludes a four-year effort to collect and coordinate state-of-the-art science and technology of graphene and related materials,” says Andrea Ferrari, director of the Cambridge Graphene Centre, and chairman of the Executive Board of the Graphene Flagship. “We hope that this open-access roadmap will serve as the starting point for academia and industry in their efforts to take layered materials and composites from laboratory to market.” Ferrari led the roadmap effort with Italian Institute of Technology physicist Francesco Bonaccorso, who is a Royal Society Newton Fellow of the University of Cambridge, and a Fellow of Hughes Hall.

“We are very proud of the joint effort of the many authors who have produced this roadmap,” says Jari Kinaret, director of the Graphene Flagship. “The roadmap forms a solid foundation for the graphene community in Europe to plan its activities for the coming years. It is not a static document, but will evolve to reflect progress in the field, and new applications identified and pursued by industry.”

I have skimmed through the report briefly (wish I had more time) and have a couple of comments. First, there’s an excellent glossary of terms for anyone who might stumble over chemical abbreviations and/or more technical terminology. Second, they present a very interesting analysis of the intellectual property (patents) landscape (Note: Links have been removed. Incidental numbers are footnote references),

In the graphene area, there has been a particularly rapid increase in patent activity from around 2007.45 Much of this is driven by patent applications made by major corporations and universities in South Korea and USA.53 Additionally, a high level of graphene patent activity in China is also observed.54 These features have led some commentators to conclude that graphene innovations arising in Europe are being mainly exploited elsewhere.55 Nonetheless, an analysis of the Intellectual Property (IP) provides evidence that Europe already has a significant foothold in the graphene patent landscape and significant opportunities to secure future value. As the underlying graphene technology space develops, and the GRM [graphene and related materials] patent landscape matures, re-distribution of the patent landscape seems inevitable and Europe is well positioned to benefit from patent-based commercialisation of GRM research.

Overall, the graphene patent landscape is growing rapidly and already resembles that of sub-segments of the semiconductor and biotechnology industries,56 which experience high levels of patent activity. The patent strategies of the businesses active in such sub-sectors frequently include ‘portfolio maximization’56 and ‘portfolio optimization’56 strategies, and the sub-sectors experience the development of what commentators term ‘patent thickets’56, or multiple overlapping granted patent rights.56 A range of policies, regulatory and business strategies have been developed to limit such patent practices.57 In such circumstances, accurate patent landscaping may provide critical information to policy-makers, investors and individual industry participants, underpinning the development of sound policies, business strategies and research commercialisation plans.

It sounds like a patent thicket is developing (Note: Links have been removed. Incidental numbers are footnote references),

Fig. 13 provides evidence of a relative increase in graphene patent filings in South Korea from 2007 to 2009 compared to 2004–2006. This could indicate increased commercial interest in graphene technology from around 2007. The period 2010 to 2012 shows a marked relative increase in graphene patent filings in China. It should be noted that a general increase in Chinese patent filings across many ST domains in this period is observed.76 Notwithstanding this general increase in Chinese patent activity, there does appear to be increased commercial interest in graphene in China. It is notable that the European Patent Office contribution as a percentage of all graphene patent filings globally falls from a 8% in the period 2007 to 2009 to 4% in the period 2010 to 2012.

The importance of the US, China and South Korea is emphasised by the top assignees, shown in Fig. 14. The corporation with most graphene patent applications is the Korean multinational Samsung, with over three times as many filings as its nearest rival. It has also patented an unrivalled range of graphene-technology applications, including synthesis procedures,77 transparent display devices,78 composite materials,79 transistors,80 batteries and solar cells.81 Samsung’s patent applications indicate a sustained and heavy investment in graphene R&D, as well as collaboration (co-assignment of patents) with a wide range of academic institutions.82,83

 

Fig. 14 Top 10 graphene patent assignees by number and cumulative over all time as of end-July 2014. Number of patents are indicated in the red histograms referred to the left Y axis, while the cumulative percentage is the blue line, referred to the right Y axis.

It is also interesting to note that patent filings by universities and research institutions make up a significant proportion (~50%) of total patent filings: the other half comprises contributions from small and medium-sized enterprises (SMEs) and multinationals.

Europe’s position is shown in Fig. 10, 12 and 14. While Europe makes a good showing in the geographical distribution of publications, it lags behind in patent applications, with only 7% of patent filings as compared to 30% in the US, 25% in China, and 13% in South Korea (Fig. 13) and only 9% of filings by academic institutions assigned in Europe (Fig. 15).

 

Fig. 15 Geographical breakdown of academic patent holders as of July 2014.

While Europe is trailing other regions in terms of number of patent filings, it nevertheless has a significant foothold in the patent landscape. Currently, the top European patent holder is Finland’s Nokia, primarily around incorporation of graphene into electrical devices, including resonators and electrodes.72,84,85

This may sound like Europe is trailing behind but that’s not the case according to the roadmap (Note: Links have been removed. Incidental numbers are footnote references),

European Universities also show promise in the graphene patent landscape. We also find evidence of corporate-academic collaborations in Europe, including e.g. co-assignments filed with European research institutions and Germany’s AMO GmbH,86 and chemical giant BASF.87,88 Finally, Europe sees significant patent filings from a number of international corporate and university players including Samsung,77 Vorbeck Materials,89 Princeton University,90–92 and Rice University,93–95 perhaps reflecting the quality of the European ST base around graphene, and its importance as a market for graphene technologies.

There are a number of features in the graphene patent landscape which may lead to a risk of patent thickets96 or ‘multiple overlapping granted patents’ existing around aspects of graphene technology systems. [emphasis mine] There is a relatively high volume of patent activity around graphene, which is an early stage technology space, with applications in patent intensive industry sectors. Often patents claim carbon nano structures other than graphene in graphene patent landscapes, illustrating difficulties around defining ‘graphene’ and mapping the graphene patent landscape. Additionally, the graphene patent nomenclature is not entirely settled. Different patent examiners might grant patents over the same components which the different experts and industry players call by different names.

For anyone new to this blog, I am not a big fan of current patent regimes as they seem to be stifling rather than encouraging innovation. Patents and copyright were originally developed to encourage creativity and innovation by allowing creators to profit from their ideas. Sadly, over time a system designed to encourage innovation has devolved into one that does the opposite. (My Oct. 31, 2011 post, Patents as weapons and obstacles, details my take on this matter.) I’m not arguing against patents and copyright but suggesting that the system be fixed or replaced with something that delivers on the original intention.

Getting back to the matter at hand, here’s a link to and a citation for the 200-page 2015 European graphene roadmap,

Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems by Andrea C. Ferrari, Francesco Bonaccorso, Vladimir Fal’ko, Konstantin S. Novoselov, Stephan Roche, Peter Bøggild, Stefano Borini, Frank H. L. Koppens, Vincenzo Palermo, Nicola Pugno, José A. Garrido, Roman Sordan, Alberto Bianco, Laura Ballerini, Maurizio Prato, Elefterios Lidorikis, Jani Kivioja, Claudio Marinelli, Tapani Ryhänen, Alberto Morpurgo, Jonathan N. Coleman, Valeria Nicolosi, Luigi Colombo, Albert Fert, Mar Garcia-Hernandez, Adrian Bachtold, Grégory F. Schneider, Francisco Guinea, Cees Dekker, Matteo Barbone, Zhipei Sun, Costas Galiotis,  Alexander N. Grigorenko, Gerasimos Konstantatos, Andras Kis, Mikhail Katsnelson, Lieven Vandersypen, Annick Loiseau, Vittorio Morandi, Daniel Neumaier, Emanuele Treossi, Vittorio Pellegrini, Marco Polini, Alessandro Tredicucci, Gareth M. Williams, Byung Hee Hong, Jong-Hyun Ahn, Jong Min Kim, Herbert Zirath, Bart J. van Wees, Herre van der Zant, Luigi Occhipinti, Andrea Di Matteo, Ian A. Kinloch, Thomas Seyller, Etienne Quesnel, Xinliang Feng,  Ken Teo, Nalin Rupesinghe, Pertti Hakonen, Simon R. T. Neil, Quentin Tannock, Tomas Löfwander and Jari Kinaret. Nanoscale, 2015, Advance Article DOI: 10.1039/C4NR01600A First published online 22 Sep 2014

Here’s a diagram illustrating the roadmap process,

Fig. 122 The STRs [science and technology roadmaps] follow a hierarchical structure where the strategic level in a) is connected to the more detailed roadmap shown in b). These general roadmaps are the condensed form of the topical roadmaps presented in the previous sections, and give technological targets for key applications to become commercially competitive and the forecasts for when the targets are predicted to be met.
Courtesy: Researchers and the Royal Society’s journal, Nanoscale

The image here is not the best quality; the one embedded in the relevant Nanowerk news item is better.

As for the earlier roadmap, here’s my Oct. 11, 2012 post on the topic.