Monthly Archives: May 2017

Explaining the link between air pollution and heart disease?

An April 26, 2017 news item on Nanowerk announces research that may explain the link between heart disease and air pollution (Note: A link has been removed),

Tiny particles in air pollution have been associated with cardiovascular disease, which can lead to premature death. But how particles inhaled into the lungs can affect blood vessels and the heart has remained a mystery.

Now, scientists have found evidence in human and animal studies that inhaled nanoparticles can travel from the lungs into the bloodstream, potentially explaining the link between air pollution and cardiovascular disease. Their results appear in the journal ACS Nano (“Inhaled Nanoparticles Accumulate at Sites of Vascular Disease”).

An April 26, 2017 American Chemical Society news release on EurekAlert, which originated the news item, expands on the theme,

The World Health Organization estimates that in 2012, about 72 percent of premature deaths related to outdoor air pollution were due to ischemic heart disease and strokes. Pulmonary disease, respiratory infections and lung cancer were linked to the other 28 percent. Many scientists have suspected that fine particles travel from the lungs into the bloodstream, but evidence supporting this assumption in humans has been challenging to collect. So Mark Miller and colleagues at the University of Edinburgh in the United Kingdom and the National Institute for Public Health and the Environment in the Netherlands used a selection of specialized techniques to track the fate of inhaled gold nanoparticles.

In the new study, 14 healthy volunteers, 12 surgical patients and several mouse models inhaled gold nanoparticles, which have been safely used in medical imaging and drug delivery. Soon after exposure, the nanoparticles were detected in blood and urine. Importantly, the nanoparticles appeared to preferentially accumulate at inflamed vascular sites, including carotid plaques in patients at risk of a stroke. The findings suggest that nanoparticles can travel from the lungs into the bloodstream and reach susceptible areas of the cardiovascular system where they could possibly increase the likelihood of a heart attack or stroke, the researchers say.

Here’s a link to and a citation for the paper,

Inhaled Nanoparticles Accumulate at Sites of Vascular Disease by Mark R. Miller, Jennifer B. Raftis, Jeremy P. Langrish, Steven G. McLean, Pawitrabhorn Samutrtai, Shea P. Connell, Simon Wilson, Alex T. Vesey, Paul H. B. Fokkens, A. John F. Boere, Petra Krystek, Colin J. Campbell, Patrick W. F. Hadoke, Ken Donaldson, Flemming R. Cassee, David E. Newby, Rodger Duffin, and Nicholas L. Mills. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b08551 Publication Date (Web): April 26, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Is there a risk of resistance to nanosilver?

Anyone who’s noticed how popular silver has become as an antibacterial, antifungal, or antiviral agent may have wondered if resistance might occur as its use becomes more common. I have two bits on the topic, one from Australia and the other from Canada.


Researchers in Australia don’t have a definitive statement on the issue but are suggesting more caution (from a March 31, 2017 news item on Nanowerk),

Researchers at the University of Technology Sydney [UTS] warn that the broad-spectrum antimicrobial effectiveness of silver is being put at risk by the widespread and inappropriate expansion of nanosilver use in medical and consumer goods.

As well as their use in medical items such as wound dressings and catheters, silver nanoparticles are becoming ubiquitous in everyday items, including toothbrushes and toothpaste, baby bottles and teats, bedding, clothing and household appliances, because of their antibacterial potency and the incorrect assumption that ordinary items should be kept “clean” of microbes.

Nanobiologist Dr Cindy Gunawan, from the ithree institute at UTS and lead researcher on the investigation, said alarm bells should be ringing at the commercialisation of nanosilver use because of a “real threat” that resistance to nanosilver will develop and spread through microorganisms in the human body and the environment.

A March 31 (?), 2017 University of Technology Sydney press release by Fiona McGill, which originated the news item, expands on the theme,

Dr Gunawan and ithree institute director Professor Liz Harry, in collaboration with researchers at UNSW [University of New South Wales] and abroad, investigated more than 140 commercially available medical devices, including wound dressings and tracheal and urinary catheters, and dietary supplements, which are promoted as immunity boosters and consumed by throat or nasal spray.

Their perspective article in the journal ACS Nano concluded that the use of nanosilver in these items could lead to prolonged exposure to bioactive silver in the human body. Such exposure creates the conditions for microbial resistance to develop.

E. coli bacteria. Photo: Flickr/NIAID


The use of silver as an antimicrobial agent dates back centuries. Its ability to destroy pathogens while seemingly having low toxicity on human cells has seen it widely employed, for example in treating burns or purifying water. More recently, ultra-small silver nanoparticles (less than one ten-thousandth of a millimetre across) have been engineered for antimicrobial purposes. Their commercial appeal lies in superior potency at lower concentrations than “bulk” silver.

“Nanosilver is a proven antimicrobial agent whose reliability is being jeopardised by the commercialisation of people’s fear of bacteria,” Dr Gunawan said.

“Our use of it needs to be far more judicious, in the same way we need to approach antibiotic usage. Nanosilver is a useful tool but we need to be careful, use it wisely and only when the benefit outweighs the risk.

“People need to be made aware of just how widely it is used, but more importantly they need to be made aware that the presence of nanosilver has been shown to cause antimicrobial resistance.”

What is also needed, Dr Gunawan said, is a targeted surveillance strategy to monitor for any occurrence of resistance.

Professor Harry said the findings were a significant contribution to addressing the global antimicrobial resistance crisis.

“This research emphasises the threat posed to our health and that of the environment by the inappropriate use of nanosilver as an antibacterial, particularly in ordinary household and consumer items,” she said.

Here’s a link to and a citation for the paper,

Widespread and Indiscriminate Nanosilver Use: Genuine Potential for Microbial Resistance by Cindy Gunawan, Christopher P. Marquis, Rose Amal, Georgios A. Sotiriou, Scott A. Rice, and Elizabeth J. Harry. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b01166 Publication Date (Web): March 24, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Meanwhile, researchers at the University of Calgary (Alberta, Canada) may have discovered what could cause resistance to silver.


This April 25, 2017 news release on EurekAlert is from the Experimental Biology Annual Meeting 2017,

Silver and other metals have been used to fight infections since ancient times. Today, researchers are using sophisticated techniques such as the gene-editing platform CRISPR-Cas9 to take a closer look at precisely how silver poisons pathogenic microbes, and when it fails. The work is yielding new insights on how to create effective antimicrobials and avoid the pitfalls of antimicrobial resistance.

Joe Lemire, a postdoctoral fellow at the University of Calgary, will present his work in this area at the American Society for Biochemistry and Molecular Biology annual meeting during the Experimental Biology 2017 meeting, to be held April 22-26 in Chicago.

“Our overarching goal is to deliver the relevant scientific evidence that would aid policymakers in developing guidelines for when and how silver could be used in the clinic to combat and control infectious pathogens,” said Lemire. “With our enhanced mechanistic understanding of silver toxicity, we also aim to develop novel silver-based antimicrobial therapies, and potentially rejuvenate other antibiotic therapies that bacteria have come to resist, via silver-based co-treatment strategies.”

Lemire and his colleagues are using CRISPR-Cas9 genome editing to screen for and delete genes that allow certain bacterial species to resist silver’s antimicrobial properties. [emphasis mine] Although previous methods allowed researchers to identify genes that confer antibiotic resistance or tolerance, CRISPR-Cas9 is the first technology to allow researchers to cleanly delete these genes from the genome without leaving behind any biochemical markers or “scars.”

The team has discovered many biological pathways involved in silver toxicity and some surprising ways that bacteria avoid succumbing to silver poisoning, Lemire said. While silver is used to control bacteria in many clinical settings and has been incorporated into hundreds of commercial products, gaining a more complete understanding of silver’s antimicrobial properties is necessary if we are to make the most of this ancient remedy for years to come.


Joe Lemire will present this research at 12-2:30 p.m. Tuesday, April 25, [2017] in Hall F, McCormick Place Convention Center (poster B379 939.2) (abstract). Contact the media team for more information or to obtain a free press pass to attend the meeting.

About Experimental Biology 2017

Experimental Biology is an annual meeting comprised of more than 14,000 scientists and exhibitors from six host societies and multiple guest societies. With a mission to share the newest scientific concepts and research findings shaping clinical advances, the meeting offers an unparalleled opportunity for exchange among scientists from across the U.S. and the world who represent dozens of scientific areas, from laboratory to translational to clinical research. #expbio

About the American Society for Biochemistry and Molecular Biology (ASBMB)

ASBMB is a nonprofit scientific and educational organization with more than 12,000 members worldwide. Founded in 1906 to advance the science of biochemistry and molecular biology, the society publishes three peer-reviewed journals, advocates for funding of basic research and education, supports science education at all levels, and promotes the diversity of individuals entering the scientific workforce.

Lemire’s co-authors for the work presented at the 2017 annual meeting are: Kate Chatfield-Reed (The University of Calgary), Lindsay Kalan (Perelman School of Medicine), Natalie Gugala (The University of Calgary), Connor Westersund (The University of Calgary), Henrik Almblad (The University of Calgary), Gordon Chua (The University of Calgary), Raymond Turner (The University of Calgary).

For anyone who wants to pursue this research a little further, the most recent paper I can find is this one from 2015,

Silver oxynitrate: An Unexplored Silver Compound with Antimicrobial and Antibiofilm Activity by Joe A. Lemire, Lindsay Kalan, Alexandru Bradu, and Raymond J. Turner. Antimicrobial Agents and Chemotherapy 05177-14, doi: 10.1128/AAC.05177-14 Accepted manuscript posted online 27 April 2015

This paper appears to be open access.

May/June 2017 scienceish events in Canada (mostly in Vancouver)

I have five* events for this posting

(1) Science and You (Montréal)

The latest iteration of the Science and You conference took place May 4 – 6, 2017 at McGill University (Montréal, Québec). That’s the sad news; the good news is that the sessions have been recorded and released onto YouTube. (This is the first time the conference has been held outside of Europe; in fact, it’s usually held in France.) Here’s why you might be interested (from the 2017 conference page),

The animator of the conference will be Véronique Morin:

Véronique Morin is a science journalist and communicator, first president of the World Federation of Science Journalists (WFSJ), and serves as a judge for science communication awards. She worked for a science program on Quebec’s public TV network, CBC/Radio-Canada, and TVOntario, and as a freelancer contributes to, among others, The Canadian Medical Journal, University Affairs magazine, and NewsDeeply, while pursuing documentary projects.

Let’s talk about S …

Holding the attention of an audience full of teenagers may seem impossible… particularly on topics that might be seen as boring, like science! Yet, it’s essential to demystify science in order to make it accessible, even appealing, in the eyes of future citizens.
How can we encourage young adults to ask themselves questions about the surrounding world, nature and science? How can we make them discover science with and without digital tools?

Find out the tips and tricks used by our speakers Kristin Alford and Amanda Tyndall.

Kristin Alford
Dr Kristin Alford is a futurist and the inaugural Director of MOD., a futuristic museum of discovery at the University of South Australia. Her mind is presently occupied by the future of work and provoking young adults to ask questions about the role of science at the intersection of art and innovation.


Amanda Tyndall
Amanda has over 20 years of science communication experience with organisations such as Café Scientifique, The Royal Institution of Great Britain (and Australia’s Science Exchange), the Science Museum in London and now the Edinburgh International Science Festival. She is particularly interested in engaging new audiences through linkages with the arts and digital/creative industries.


A troll in the room

Increasingly used by politicians, social media can reach thousands of people in a few seconds. Endlessly relayed, the message seems truthful, but is it really? At a time of fake news and alternative facts, how can we, as communicators or journalists, take up the challenge of disinformation?
Discover the traps and tricks of disinformation in the age of digital technologies with our two fact-checking experts, Shawn Otto and Vanessa Schipani, who will offer concrete solutions to unravel the true from the false.


Shawn Otto
Shawn Otto was awarded the IEEE-USA (“I-Triple-E”) National Distinguished Public Service Award for his work elevating science in America’s national public dialogue. He is cofounder and producer of the US presidential science debates. He is also an award-winning screenwriter and novelist, best known for writing and co-producing the Academy Award-nominated movie House of Sand and Fog.

Vanessa Schipani
Vanessa is a science journalist at a fact-checking outlet that monitors U.S. politicians’ claims for accuracy. Previously, she wrote for outlets in the U.S., Europe and Japan, covering topics from quantum mechanics to neuroscience. She has bachelor’s degrees in zoology and philosophy and a master’s in the history and philosophy of science.

At 20,000 clicks from the extreme

Sharing daily life from a space station, ship or submarine: examples of social media use in extreme conditions are multiplying, and the public is asking for more. How can public tools be used to highlight practices and discoveries? How should a large organisation manage its use of social networks? What pitfalls should be avoided? What does this mean for citizens and researchers?
Find out with Philippe Archambault and Leslie Elliott, experts in extreme conditions.

Philippe Archambault

Professor Philippe Archambault is a marine ecologist at Laval University, the director of the Notre Golfe network and president of the 4th World Conference on Marine Biodiversity. His research on the influence of global change on biodiversity and the functioning of ecosystems has taken him to all four corners of our oceans, from the Arctic to the Antarctic, through Papua New Guinea and French Polynesia.


Leslie Elliott

Leslie Elliott leads a team of communicators at Ocean Networks Canada in Victoria, British Columbia, home to Canada’s world-leading ocean observatories in the Pacific and Arctic Oceans. Audiences can join robots equipped with high definition cameras via #livedive to discover more about our ocean.


Science is not a joke!

Science and humour are two disciplines that might seem incompatible… and yet, as the Ig Nobels show, humour can prove to be an excellent way to communicate a scientific message. This can, however, be quite challenging, since one needs the right tone and language to captivate the audience while communicating complex topics.

Patrick Baud and Brian Malow, both well-known science communicators, will give you the tools you need to capture your audience and convey a proper scientific message. You will be surprised how, even in science, a good dose of humour can make you laugh and think.

Patrick Baud
Patrick Baud is a French author who was born on June 30, 1979, in Avignon. For many years he has been sharing his passion for tales of fantasy and the marvels and curiosities of the world through different media: radio, web, novels, comic strips, conferences, and videos. His YouTube channel, “Axolot”, created in 2013, now has over 420,000 followers.


Brian Malow
Brian Malow is Earth’s Premier Science Comedian (self-proclaimed). Brian has made science videos for Time Magazine and contributed to Neil deGrasse Tyson’s radio show. He worked in science communications at a museum, blogged for Scientific American, and trains scientists to be better communicators.


I don’t think they’ve managed to get everything up on YouTube yet but the material I’ve found has been subtitled (into French or English, depending on which language the speaker used).

Here are the opening day’s talks on YouTube with English subtitles or French subtitles when appropriate. You can also find some abstracts for the panel presentations here. I was particularly interested in this panel (S3 – The Importance of Reaching Out to Adults in Scientific Culture). Note: I have searched out the French language descriptions for those unavailable in English,

Organized by Coeur des sciences, Université du Québec à Montréal (UQAM)
Animator: Valérie Borde, Freelance Science Journalist

Anouk Gingras, Musée de la civilisation, Québec
Text not available in English; translated from the French,

[Science at the Musée de la civilisation means:
• Some fifty exhibitions and discovery spaces
• Topical themes, tied to social issues, for exhibitions often aimed at adults
• Potential new audiences drawn in by the Museum’s other (often non-scientific) themes
The exhibition Nanotechnologies : l’invisible révolution [Nanotechnologies: the invisible revolution]:
• A topical theme prompting reflection
• A sensitive subject leading to a polarized exhibition path: a choice between “yes” or “no” to the future development of nanotechnologies
• A variety of elements used to bring the subject closer to the visitor

  • Nanotechnologies in science fiction
  • Everyday objects containing nanoparticles
  • Historical objects that use nanotechnologies
  • Various microscopes retracing the history of nanotechnologies

• A form of interaction prompting the visitor’s reflection via a friendly object: a yellow plastic duck fitted with an RFID chip

  • Seven consultation stations inviting visitors to take a position and reflect on ethical questions raised by the development of nanotechnologies
  • Real-time compilation of the data
  • Personalized delivery of the results
  • A measure of how many visitors changed their opinion after visiting the exhibition

Attendance results:
• A young adult audience was reached (51%)
• More men than women visited the exhibition
• The duck-based path prompts reflection and increases attention
• 3 out of 4 visitors take the duck; 92% complete the entire activity]

Marie Lambert-Chan, Québec Science
Capturing the attention of an adult readership: challenging mission, possible mission
Since 1962, Québec Science Magazine has been the only science magazine aimed at an adult readership in Québec. Our mission: covering topical subjects related to science and technology, as well as social issues from a scientific point of view. Each year, we print eight issues, with a circulation of 22,000 copies. Furthermore, the magazine has received several awards and accolades. In 2017, Québec Science Magazine was honored by the Canadian Magazine Awards/Grands Prix du Magazine and was named Best Magazine in the Science, Business and Politics category.
Although we have maintained a solid reputation among scientists and the media industry, our magazine is still relatively unknown to the general public. Why is that? How is it that, through all those years, we haven’t found the right angle to engage a broader readership?
We are still searching for definitive answers, but here are our observations:
Speaking science to adults is much more challenging than it is with children, who can marvel endlessly at the smallest things. Unfortunately, adults lose this capacity to marvel and wonder for various reasons: they have specific interests, they failed high-school science, they don’t feel competent enough to understand scientific phenomena. How do we bring the wonder back? This is our mission. Not impossible, and hopefully soon to be accomplished. One noticeable example is the number of renowned scientists interviewed on the popular talk show Tout le monde en parle, leading us to believe the general public may have an interest in science.
However, to accomplish our mission, we have to recount science. According to the Bulgarian writer and blogger Maria Popova, great science writing should explain, elucidate and enchant. To explain: to make the information clear and comprehensible. To elucidate: to reveal all the interconnections between the pieces of information. To enchant: to go beyond the scientific terms and information and tell a story, thus giving a kaleidoscopic vision of the subject. This is how we intend to capture our readership’s attention.
Our team aims to meet this challenge. Although, to be perfectly honest, it would be much easier if we had more resources, whether financial or human. However, we don’t lack ideas. We dream of major scientific investigations, conferences organized around themes from the magazine’s issues, web documentaries, podcasts… Such initiatives would give us the visibility we desperately crave.
That said, even in the best conditions, would we have more subscribers? Perhaps. But it isn’t assured. Even if our magazine is aimed at an adult readership, we are convinced that childhood and science go hand in hand, and that early exposure is even decisive for children’s future. At the moment, school programs are not in place for continuous scientific development. It is possible to develop an interest in scientific culture as an adult, but it is much easier to achieve this level of curiosity if it was previously fostered.

Robert Lamontagne, Université de Montréal
Since the beginning of my career as an astrophysicist, I have been interested in science communication to non-specialist audiences. I have presented hundreds of lectures describing the phenomena of the cosmos. Initially, these were mainly offered in amateur astronomers’ clubs or in high schools and Cégeps. Over the last few years, I have migrated to more general adult audiences in the context of cultural activities such as the Festival des Laurentides, the Arts, Culture and Society activities in Repentigny, and the Université du troisième âge (UTA), or Seniors’ University.
The Quebec branch of the UTA, sponsored by the Université de Sherbrooke (UdeS), has existed since 1976. Seniors’ universities, first created in Toulouse, France, are part of a worldwide movement. The UdeS and its seniors’ university branches are members of the International Association of Universities of the Third Age (AIUTA). The UTA is made up of 28 branches located in 10 regions and reaches more than 10,000 people per year. Branch volunteers prepare educational programming by drawing on a catalog of courses, seminars and lectures covering a diverse range of subjects, from history and politics to health, science, and the environment.
The UTA is aimed at people aged 50 and over who wish to continue their training and learn throughout their lives. It is an attentive, inquisitive, educated public and, given the demographics in Canada, its numbers are growing rapidly. This segment of the population is often well off and very involved in society.
I usually use a two-pronged approach:
• While remaining rigorous, the content is articulated around a few ideas, avoiding analytical expressions in favor of a qualitative description.
• A narrative framework, the story, allows me to contextualize the scientific content and forge links with the audience.

Sophie Malavoy, Coeur des sciences – UQAM

Many obstacles need to be overcome in order to reach out to adults, especially those who aren’t, in principle, interested in science:
• Competition from cultural activities such as theater, movies, etc.
• The idea that science is complex and dull
• A feeling of incompetence: “I’ve always been bad at math and physics”
• A funding shortfall for activities that target adults
How to reach out to those adults?
• To put science into perspective. To bring out its relevance by making links with current events and big issues (economic, health, environmental, political). To promote a transdisciplinary approach that includes the humanities and social sciences.
• To bet on originality by offering uncommon and playful experiences (scientific walks in the city, street performances, etc.)
• To build bridges between science and activities popular with the public (science/music; science/dance; science/theater; science/sports; science/gastronomy; science/literature)
• To reach people through emotion, without sensationalism. To boost their curiosity and capacity for wonder.
• To put a human face on science by insisting not only on the results of research but on its process. To share the adventure lived by researchers.
• To strengthen people’s sense of competence. To insist on the scientific method.
• To invite non-scientists (citizens’ groups, communities, consumers, etc.) into reflections on science issues (debates, etc.). To move from the dissemination of science to dialog.

Didier Pourquery, The Conversation France
Text not available in English; translated from the French,

[Since its launch in September 2015, The Conversation France (2 million page views per month) has steadily grown its audience. According to a study conducted one year after the launch, the readership structure was as follows
To hook adults and seniors, two approaches are worthwhile; we use them on our site as well as in our daily newsletter (26,000 subscribers) and on our Facebook page (11,500 followers):
1/ Explain the news: give people the keys to understand the scientific debates animating society; put science into the discussion (the site’s mission is to “feed the citizen debate with academic expertise and research”). The idea is to pose simple comprehension questions at the moment they appear in the debate (during an election period, for example: what is populism? Explained by unimpeachable researchers from Sciences Po.)
Examples: understanding the climate conferences (COP21, COP22); understanding social debates (surrogacy); understanding the economy (universal basic income); understanding neurodegenerative diseases (Alzheimer’s), etc.
2/ Pique curiosity: use the classic formulas (did you know?) applied to surprising subjects (for example, “What does a dog see when it watches TV?” drew 96,000 page views); then play with these articles on social networks. Ask simple, surprising questions. For example: do you look like your first name? This very serious academic article tallied 95,000 page views in French and 171,000 in English.
3/ Spur engagement: do simple, useful citizen science. For example: ask readers to monitor the tiger mosquito invasion across the country. That article drew 112,000 page views and was widely republished on other sites. Another example: ask readers to photograph the bugs (punaises) in their environment.]

Here are my very brief and very rough translations. (1) Anouk Gingras is focused largely on a nanotechnology exhibit and whether or not visitors went through it and participated in various activities. She doesn’t seem specifically focused on science communication for adults but they are doing some very interesting and related work at Québec’s Museum of Civilization. (2) Didier Pourquery is describing an online initiative known as ‘The Conversation France’ (strange—why not La conversation France?). Moving on, there’s a website with a daily newsletter (blog?) and a Facebook page. They have two main projects, one is a discussion of current science issues in society, which is informed with and by experts but is not exclusive to experts, and more curiosity-based science questions and discussion such as What does a dog see when it watches television?

Serendipity! I hadn’t stumbled across this conference when I posted my May 12, 2017 piece on the ‘insanity’ of science outreach in Canada. It’s good to see I’m not the only one focused on science outreach for adults and that there is some action, although it seems to be a Québec-only effort.

(2) Ingenious—a book launch in Vancouver

The book will be launched on Thursday, June 1, 2017 at the Vancouver Public Library’s Central Branch (from the Ingenious: An Evening of Canadian Innovation event page)

Ingenious: An Evening of Canadian Innovation
Thursday, June 1, 2017 (6:30 pm – 8:00 pm)
Central Branch

Gov. Gen. David Johnston and OpenText Corp. chair Tom Jenkins discuss Canadian innovation and their book Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier.

Books will be available for purchase and signing.

Doors open at 6 p.m.



350 West Georgia St.
Vancouver, BC V6B 6B1


Location Details:

Alice MacKay Room, Lower Level

I do have a few more details about the authors and their book. First, there’s this from the Ottawa Writers Festival March 28, 2017 event page,

To celebrate Canada’s 150th birthday, Governor General David Johnston and Tom Jenkins have crafted a richly illustrated volume of brilliant Canadian innovations whose widespread adoption has made the world a better place. From Bovril to BlackBerrys, lightbulbs to liquid helium, peanut butter to Pablum, this is a surprising and incredibly varied collection to make Canadians proud, and a testament to our unique entrepreneurial spirit.

Successful innovation is always inspired by at least one of three forces — insight, necessity, and simple luck. Ingenious moves through history to explore what circumstances, incidents, coincidences, and collaborations motivated each great Canadian idea, and what twist of fate then brought that idea into public acceptance. Above all, the book explores what goes on in the mind of an innovator, and maps the incredible spectrum of personalities that have struggled to improve the lot of their neighbours, their fellow citizens, and their species.

From the marvels of aboriginal invention such as the canoe, snowshoe, igloo, dogsled, lifejacket, and bunk bed to the latest pioneering advances in medicine, education, philanthropy, science, engineering, community development, business, the arts, and the media, Canadians have improvised and collaborated their way to international admiration. …

Then, there’s this April 5, 2017 item on Canadian Broadcasting Corporation’s (CBC) news online,

From peanut butter to the electric wheelchair, the stories behind numerous life-changing Canadian innovations are detailed in a new book.

Gov. Gen. David Johnston and Tom Jenkins, chair of the National Research Council and former CEO of OpenText, are the authors of Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier. The authors hope their book reinforces and extends the culture of innovation in Canada.

“We started wanting to tell 50 stories of Canadian innovators, and what has amazed Tom and myself is how many there are,” Johnston told The Homestretch on Wednesday. The duo ultimately chronicled 297 innovations in the book, including the pacemaker, life jacket and chocolate bars.

“Innovations are not just technological, not just business, but they’re social innovations as well,” Johnston said.

Many of those innovations, and the stories behind them, are not well known.

“We’re sort of a humble people,” Jenkins said. “We’re pretty quiet. We don’t brag, we don’t talk about ourselves very much, and so we then lead ourselves to believe as a culture that we’re not really good inventors, the Americans are. And yet we knew that Canadians were actually great inventors and innovators.”

‘Opportunities and challenges’

For Johnston, his favourite story in the book is on the light bulb.

“It’s such a symbol of both our opportunities and challenges,” he said. “The light bulb was invented in Canada, not the United States. It was two inventors back in the 1870s that realized that if you passed an electric current through a resistant metal it would glow, and they patented that, but then they didn’t have the money to commercialize it.”

American inventor Thomas Edison went on to purchase that patent and made changes to the original design.

Johnston and Jenkins are also inviting readers to share their own innovation stories, on the book’s website.

I’m looking forward to the talk and wondering if they’ve included the botox and cellulose nanocrystal (CNC) stories in the book. BTW, Tom Jenkins was the chair of a panel examining Canadian research and development and lead author of the panel’s report (Innovation Canada: A Call to Action) for the then Conservative government (it’s also known as the Jenkins report). You can find out more about it in my Oct. 21, 2011 posting.

(3) Made in Canada (Vancouver)

This is either fortuitous or there’s some very high level planning involved in the ‘Made in Canada; Inspiring Creativity and Innovation’ show which runs from April 21 – Sept. 4, 2017 at Vancouver’s Science World (also known as the Telus World of Science). From the Made in Canada; Inspiring Creativity and Innovation exhibition page,

Celebrate Canadian creativity and innovation, with Science World’s original exhibition, Made in Canada, presented by YVR [Vancouver International Airport] — where you drive the creative process! Get hands-on and build the fastest bobsled, construct a stunning piece of Vancouver architecture and create your own Canadian sound mashup, to share with friends.

Vote for your favourite Canadian inventions and test fly a plane of your design. Discover famous (and not-so-famous, but super neat) Canadian inventions. Learn about amazing, local innovations like robots that teach themselves, one-person electric cars and a computer that uses parallel universes.

Imagine what you can create here, eh!!

You can find more information here.

One quick question, why would Vancouver International Airport be presenting this show? I asked that question of Science World’s Communications Coordinator, Jason Bosher, and received this response,

 YVR is the presenting sponsor. They donated money to the exhibition and they also contributed an exhibit for the “We Move” themed zone in the Made in Canada exhibition. The YVR exhibit details the history of the YVR airport, its geographic advantage and some of the planes they have seen there.

I also asked if there was any connection between this show and the ‘Ingenious’ book launch,

Some folks here are aware of the book launch. It has to do with the Canada 150 initiative and nothing to do with the Made in Canada exhibition, which was developed here at Science World. It is our own original exhibition.

So there you have it.

(4) Robotics, AI, and the future of work (Ottawa)

I’m glad to finally stumble across a Canadian event focusing on the topic of artificial intelligence (AI), robotics and the future of work. Sadly (for me), this is taking place in Ottawa. Here are more details  from the May 25, 2017 notice (received via email) from the Canadian Science Policy Centre (CSPC),

CSPC is Partnering with CIFAR [Canadian Institute for Advanced Research]
The Second Annual David Dodge Lecture

Join CIFAR and Senior Fellow Daron Acemoglu for
the Second Annual David Dodge CIFAR Lecture in Ottawa on June 13.
June 13, 2017 | 12 – 2 PM [emphasis mine]
Fairmont Château Laurier, Drawing Room | 1 Rideau St, Ottawa, ON
Along with the backlash against globalization and the outsourcing of jobs, concern is also growing about the effect that robotics and artificial intelligence will have on the labour force in advanced industrial nations. World-renowned economist Acemoglu, author of the best-selling book Why Nations Fail, will discuss how technology is changing the face of work and the composition of labour markets. Drawing on decades of data, Acemoglu explores the effects of widespread automation on manufacturing jobs, the changes we can expect from artificial intelligence technologies, and what responses to these changes might look like. This timely discussion will provide valuable insights for current and future leaders across government, civil society, and the private sector.

Daron Acemoglu is a Senior Fellow in CIFAR’s Institutions, Organizations & Growth program, and the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology.

Tickets: $15 (A light lunch will be served.)

You can find a registration link here. Also, if you’re interested in the Canadian efforts in the field of artificial intelligence you can find more in my March 24, 2017 posting (scroll down about 25% of the way and then about 40% of the way) on the 2017 Canadian federal budget and science where I first noted the $93.7M allocated to CIFAR for launching a Pan-Canadian Artificial Intelligence Strategy.

(5) June 2017 edition of the Curiosity Collider Café (Vancouver)

This is an art/science (also known as art/sci and SciArt) event that has taken place in Vancouver every few months since April 2015. Here’s more about the June 2017 edition (from the Curiosity Collider events page),

Collider Cafe

8:00pm on Wednesday, June 21st, 2017. Door opens at 7:30pm.

Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).

$5.00-10.00 cover at the door (sliding scale). Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization.


#ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science. Meet, discover, connect, create. How do you explore curiosity in your life? Join us and discover how our speakers explore their own curiosity at the intersection of art & science.



*I changed ‘three’ events to ‘five’ events and added a number to each event for greater reading ease on May 31, 2017.

Making sense of the world with data visualization

A March 30, 2017 item features an essay about data visualization,

The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Matt Escobar, postdoctoral researcher on machine learning applied to chemical engineering at the University of Tokyo, wrote this March 30, 2017 essay originally for The Conversation,

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

Hans Rosling paved the way for effectively communicating global health data. Vimeo

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Escobar goes on to explore a number of approaches to data visualization including this one,

Finding similarity between samples is another good starting point. Network analysis is a well-known technique that relies on establishing connections between samples (also called nodes). Strong connections between samples indicate a high level of similarity between features.

Once these connections are established, the network rearranges itself so that samples with like characteristics stick together. While before we were considering only the most relevant features of each live show and using that as reference, now all features are assessed simultaneously – similarity is more broadly defined.

Networks show a highly connected yet well-defined world.

The amount of information that can be visualised with networks is akin to dimensionality reduction, but the feature assessment aspect is now different. Whereas previously samples would be grouped based on a few specific marking features, in this tool samples that share many features stick together. That leaves it up to users to choose their approach based on their goals.
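The network idea Escobar describes can be sketched in a few lines of Python. The following is a minimal sketch, assuming Jaccard similarity over sets of features and a cutoff of 0.5; the essay names neither the similarity metric nor the threshold, so both are illustrative:

```python
# Sketch of the network-analysis idea: treat each sample as a node and
# connect pairs whose feature overlap (Jaccard similarity) exceeds a
# threshold, so that similar samples "stick together".
samples = {
    "show_a": {"rock", "outdoor", "evening"},
    "show_b": {"rock", "outdoor", "daytime"},
    "show_c": {"jazz", "indoor", "evening"},
}

def jaccard(a, b):
    # shared features divided by total distinct features
    return len(a & b) / len(a | b)

threshold = 0.5
edges = []
names = sorted(samples)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        s = jaccard(samples[u], samples[v])
        if s >= threshold:  # only strong connections become edges
            edges.append((u, v, s))

print(edges)  # [('show_a', 'show_b', 0.5)]
```

A real dataset would produce a much denser graph; force-directed layout algorithms then pull strongly connected nodes together, which is the “rearranging” the essay refers to.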

He finishes by noting that his essay is an introduction to a complex topic.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
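The node computation described above fits in a few lines. Here is a minimal sketch with made-up inputs, weights, and threshold (the article gives no numbers):

```python
# One node: multiply each input by its connection weight, sum the
# products, and "fire" (pass the sum along) only above the threshold.
def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0  # silent below threshold

fired = node_output([1.0, 0.5, -0.2], [0.4, 0.3, 0.9], threshold=0.3)
print(round(fired, 2))  # 0.37, above the 0.3 threshold, so the node fires
```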

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.

Minds and machines

The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
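Rosenblatt’s design can be sketched as a single layer of adjustable weights. The update rule below is the classic perceptron learning rule, an assumption on my part since the article does not spell out the training algorithm:

```python
# A minimal Perceptron in the spirit of Rosenblatt's design: a single
# layer of adjustable weights (plus a bias acting as the threshold).
def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias acts as an adjustable threshold
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred     # -1, 0, or +1
            w[0] += lr * err * x1  # nudge weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single layer can learn it
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

A single layer like this cannot learn tasks that aren’t linearly separable, such as XOR, which was among the limitations Minsky and Papert made famous.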

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”


By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
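The “depth” described above can be made concrete with a toy forward pass: each layer’s outputs become the next layer’s inputs. The weights here are purely illustrative:

```python
# Stacked layers: each layer is a list of nodes, each node a list of
# weights. Outputs of one layer feed the next (a feed-forward network).
def layer(inputs, node_weights, threshold=0.0):
    outputs = []
    for weights in node_weights:
        total = sum(x * w for x, w in zip(inputs, weights))
        outputs.append(total if total > threshold else 0.0)
    return outputs

x = [1.0, -1.0]                                  # input layer
hidden = layer(x, [[0.5, 0.25], [0.75, -0.25]])  # hidden layer, 2 nodes
out = layer(hidden, [[0.5, 0.5]])                # output layer, 1 node
print(hidden, out)  # [0.25, 1.0] [0.625]
```

A 50-layer network is this same pattern repeated, with millions of weights set by training rather than by hand.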

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t April 17, 2017

One final note, I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Biodegradable nanoparticles to program immune cells for cancer treatments

The Fred Hutchinson Cancer Research Centre in Seattle, Washington has announced a proposed cancer treatment using nanoparticle-programmed T cells according to an April 12, 2017 news release (received via email; also on EurekAlert), Note: A link has been removed,

Researchers at Fred Hutchinson Cancer Research Center have developed biodegradable nanoparticles that can be used to genetically program immune cells to recognize and destroy cancer cells — while the immune cells are still inside the body.

In a proof-of-principle study to be published April 17 [2017] in Nature Nanotechnology, the team showed that nanoparticle-programmed immune cells, known as T cells, can rapidly clear or slow the progression of leukemia in a mouse model.

“Our technology is the first that we know of to quickly program tumor-recognizing capabilities into T cells without extracting them for laboratory manipulation,” said Fred Hutch’s Dr. Matthias Stephan, the study’s senior author. “The reprogrammed cells begin to work within 24 to 48 hours and continue to produce these receptors for weeks. This suggests that our technology has the potential to allow the immune system to quickly mount a strong enough response to destroy cancerous cells before the disease becomes fatal.”

Cellular immunotherapies have shown promise in clinical trials, but challenges remain to making them more widely available and to being able to deploy them quickly. At present, it typically takes a couple of weeks to prepare these treatments: the T cells must be removed from the patient and genetically engineered and grown in special cell processing facilities before they are infused back into the patient. These new nanoparticles could eliminate the need for such expensive and time consuming steps.

Although his T-cell programming method is still several steps away from the clinic, Stephan imagines a future in which nanoparticles transform cell-based immunotherapies — whether for cancer or infectious disease — into an easily administered, off-the-shelf treatment that’s available anywhere.

“I’ve never had cancer, but if I did get a cancer diagnosis I would want to start treatment right away,” Stephan said. “I want to make cellular immunotherapy a treatment option the day of diagnosis and have it able to be done in an outpatient setting near where people live.”

The body as a genetic engineering lab

Stephan created his T-cell homing nanoparticles as a way to bring the power of cellular cancer immunotherapy to more people.

In his method, the laborious, time-consuming T-cell programming steps all take place within the body, creating a potential army of “serial killers” within days.

As reported in the new study, Stephan and his team developed biodegradable nanoparticles that turned T cells into CAR T cells, a particular type of cellular immunotherapy that has delivered promising results against leukemia in clinical trials.

The researchers designed the nanoparticles to carry genes that encode for chimeric antigen receptors, or CARs, that target and eliminate cancer. They also tagged the nanoparticles with molecules that make them stick like burrs to T cells, which engulf the nanoparticles. The cell’s internal traffic system then directs the nanoparticle to the nucleus, and it dissolves.

The study provides proof-of-principle that the nanoparticles can educate the immune system to target cancer cells. Stephan and his team designed the new CAR genes to integrate into chromosomes housed in the nucleus, making it possible for T cells to begin decoding the new genes and producing CARs within just one or two days.

Once the team determined that their CAR-carrying nanoparticles reprogrammed a noticeable percent of T cells, they tested their efficacy. Using a preclinical mouse model of leukemia, Stephan and his colleagues compared their nanoparticle-programming strategy against chemotherapy followed by an infusion of T cells programmed in the lab to express CARs, which mimics current CAR-T-cell therapy.

The nanoparticle-programmed CAR-T cells held their own against the infused CAR-T cells. Treatment with nanoparticles or infused CAR-T cells improved survival to an average of 58 days, up from a median survival of about two weeks.

The study was funded by Fred Hutch’s Immunotherapy Initiative, the Leukemia & Lymphoma Society, the Phi Beta Psi Sorority, the National Science Foundation and the National Cancer Institute.

Next steps and other applications

Stephan’s nanoparticles still have to clear several hurdles before they get close to human trials. He’s pursuing new strategies to make the gene-delivery-and-expression system safe in people and working with companies that have the capacity to produce clinical-grade nanoparticles. Additionally, Stephan has turned his sights to treating solid tumors and is collaborating to this end with several research groups at Fred Hutch.

And, he said, immunotherapy may be just the beginning. In theory, nanoparticles could be modified to serve the needs of patients whose immune systems need a boost, but who cannot wait for several months for a conventional vaccine to kick in.

“We hope that this can be used for infectious diseases like hepatitis or HIV,” Stephan said. This method may be a way to “provide patients with receptors they don’t have in their own body,” he explained. “You just need a tiny number of programmed T cells to protect against a virus.”

Here’s a link to and a citation for the paper,

In situ programming of leukaemia-specific T cells using synthetic DNA nanocarriers by Tyrel T. Smith, Sirkka B. Stephan, Howell F. Moffett, Laura E. McKnight, Weihang Ji, Diana Reiman, Emmy Bonagofski, Martin E. Wohlfahrt, Smitha P. S. Pillai, & Matthias T. Stephan. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.57 Published online 17 April 2017

This paper is behind a paywall.

Health technology and the Canadian Broadcasting Corporation’s (CBC) two-tier health system ‘Viewpoint’

There’s a lot of talk and handwringing about Canada’s health care system, which ebbs and flows in almost predictable cycles. Jesse Hirsh, in a May 16, 2017 ‘Viewpoints’ segment (an occasional series run as part of the CBC’s [Canadian Broadcasting Corporation] flagship daily news programme, The National), dared to reframe the discussion as one about technology and ‘those who get it’ [the technologically literate] and ‘those who don’t’, a state Hirsh described as being illiterate, as you can see and hear in the following video.

I don’t know about you but I’m getting tired of being called illiterate when I don’t know something. To be illiterate means you can’t read and write and, as it turns out, I do both of those things on a daily basis (sometimes even in two languages). Despite my efforts, I’m ignorant about any number of things and those numbers keep increasing day by day. BTW, is there anyone who isn’t having trouble keeping up?

Moving on from my rhetorical question, Hirsh has a point about the tech divide and about the need for discussion. It’s a point that hadn’t occurred to me (although I think he’s taking it in the wrong direction). In fact, this business of a tech divide already exists if you consider that people who live in rural environments and need the latest lifesaving techniques or complex procedures or access to highly specialized experts have to travel to urban centres. I gather that Hirsh feels that this divide isn’t necessarily going to be an urban/rural split so much as an issue of how technically literate you and your doctor are.  That’s intriguing but then his argumentation gets muddled. Confusingly, he seems to be suggesting that the key to the split is your access (not your technical literacy) to artificial intelligence (AI) and algorithms (presumably he’s referring to big data and data analytics). I expect access will come down more to money than technological literacy.

For example, money is likely to be a key issue when you consider his big pitch is for access to IBM’s Watson computer. (My Feb. 28, 2011 posting titled: Engineering, entertainment, IBM’s Watson, and product placement focuses largely on Watson, its winning appearances on the US television game show, Jeopardy, and its subsequent adoption into the University of Maryland’s School of Medicine in a project to bring Watson into the examining room with patients.)

Hirsh’s choice of IBM’s Watson is particularly interesting for a number of reasons. (1) Presumably there are companies other than IBM in this sector. Why do they not rate a mention?  (2) Given the current situation with IBM and the Canadian federal government’s introduction of the Phoenix payroll system (a PeopleSoft product customized by IBM), which is  a failure of monumental proportions (a Feb. 23, 2017 article by David Reevely for the Ottawa Citizen and a May 25, 2017 article by Jordan Press for the National Post), there may be a little hesitation, if not downright resistance, to a large scale implementation of any IBM product or service, regardless of where the blame lies. (3) Hirsh notes on the home page for his eponymous website,

I’m presently spending time at the IBM Innovation Space in Toronto Canada, investigating the impact of artificial intelligence and cognitive computing on all sectors and industries.

Yes, it would seem he has some sort of relationship with IBM not referenced in his Viewpoints segment on The National. Also, his description of the relationship isn’t especially illuminating, but perhaps it’s this (from the IBM Innovation Space – Toronto Incubator Application webpage),

Our incubator

The IBM Innovation Space is a Toronto-based incubator that provides startups with a collaborative space to innovate and disrupt the market. Our goal is to provide you with the tools needed to take your idea to the next level, introduce you to the right networks and help you acquire new clients. Our unique approach, specifically around client engagement, positions your company for optimal growth and revenue at an accelerated pace.


IBM Bluemix
IBM Global Entrepreneur
Softlayer – an IBM Company

Startups partnered with the IBM Innovation Space can receive up to $120,000 in IBM credits at no charge for up to 12 months through the Global Entrepreneurship Program (GEP). These credits can be used in our products such as our IBM Bluemix developer platform, Softlayer cloud services, and our world-renowned IBM Watson ‘cognitive thinking’ APIs. We provide you with enterprise grade technology to meet your clients’ needs, large or small.

Collaborative workspace in the heart of Downtown Toronto
Mentorship opportunities available with leading experts
Access to large clients to scale your startup quickly and effectively
Weekly programming ranging from guest speakers to collaborative activities
Help with funding and access to local VCs and investors​

Final comments

While I have some issues with Hirsh’s presentation, I agree that we should be discussing the issues around increased automation of our health care system. A friend of mine’s husband is a doctor and according to him those prescriptions and orders you get when leaving the hospital? They are not made up by a doctor so much as they are spit up by a computer based on the data that the doctors and nurses have supplied.

GIGO, bias, and de-skilling

Leaving aside the wonders that Hirsh describes, there’s an oldish saying in the computer business, garbage in/garbage out (GIGO). At its simplest, who’s going to catch a mistake? (There are lots of mistakes made in hospitals and other health care settings.)

There are also issues around the quality of research. Are all the research papers included in the data used by the algorithms going to be considered equal? There’s more than one case where a piece of problematic research has been accepted uncritically, even though it got through peer review, and subsequently cited many times over. One of the ways to measure impact, i.e., importance, is to track the number of citations. There’s also the matter of where the research is published. A ‘high impact’ journal, such as Nature, Science, or Cell, automatically gives a piece of research a boost.

There are other kinds of bias as well. Increasingly, there’s discussion about algorithms being biased and about how machine learning (AI) can become biased. (See my May 24, 2017 posting: Machine learning programs learn bias, which highlights the issues and cites other FrogHeart posts on that and other related topics.)

These problems are to a large extent already present. Doctors have biases and research can be wrong and it can take a long time before there are corrections. However, the advent of an automated health diagnosis and treatment system is likely to exacerbate the problems. For example, if you don’t agree with your doctor’s diagnosis or treatment, you can seek out other opinions. What happens when your diagnosis and treatment have become data? Will the system give you another opinion? Who will you talk to? The doctor who got an answer from ‘Watson’? Is she or he going to debate Watson? Are you?

This leads to another issue and that’s automated systems getting more credit than they deserve. Futurists such as Hirsh tend to underestimate people and overestimate the positive impact that automation will have. A computer, data analytics, or an AI system is a tool, not a god. You’ll have as much luck petitioning one of those tools as you would Zeus.

The unasked question is how your doctor or other health professional will gain experience and skills if they never have to practice the basic, boring aspects of health care (asking questions for a history, reading medical journals to keep up with the research, etc.) and leave them to the computers. There had to be a reason for calling it a medical ‘practice’.

There are definitely going to be advantages to these technological innovations but thoughtful adoption of these practices (pun intended) should be our goal.

Who owns your data?

Another issue which is increasingly making itself felt is ownership of data. Jacob Brogan has written a provocative May 23, 2017 piece asking that question about the data gathered for DNA testing (Note: Links have been removed),

AncestryDNA’s pitch to consumers is simple enough. For $99 (US), the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.

After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting the broadest possible rights to own and exploit their genetic information.”

Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” [emphasis mine] Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.

Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.

Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.

I’m glad to see the company’s representatives are open to discussion and, later in the article, you’ll see there’ve already been some changes made. Still, there is no guarantee that the situation won’t again change, for ill this time.

What data do they have and what can they do with it?

It’s not everybody who thinks data collection and data analytics constitute problems. While some people might balk at the thought of their genetic data being traded around and possibly used against them, e.g., while hunting for a job, or turned into a source of revenue, there tends to be a more laissez-faire attitude to other types of data. Andrew MacLeod’s May 24, 2017 article highlights political implications and privacy issues (Note: Links have been removed),

After a small Victoria [British Columbia, Canada] company played an outsized role in the Brexit vote, government information and privacy watchdogs in British Columbia and Britain have been consulting each other about the use of social media to target voters based on their personal data.

The U.K.’s information commissioner, Elizabeth Denham [Note: Denham was formerly B.C.’s Office of the Information and Privacy Commissioner], announced last week [May 17, 2017] that she is launching an investigation into “the use of data analytics for political purposes.”

The investigation will look at whether political parties or advocacy groups are gathering personal information from Facebook and other social media and using it to target individuals with messages, Denham said.

B.C.’s Office of the Information and Privacy Commissioner confirmed it has been contacted by Denham.

MacLeod’s March 6, 2017 article provides more details about the company’s role (Note: Links have been removed),

The “tiny” and “secretive” British Columbia technology company [AggregateIQ; AIQ] that played a key role in the Brexit referendum was until recently listed as the Canadian office of a much larger firm that has 25 years of experience using behavioural research to shape public opinion around the world.

The larger firm, SCL Group, says it has worked to influence election outcomes in 19 countries. Its associated company in the U.S., Cambridge Analytica, has worked on a wide range of campaigns, including Donald Trump’s presidential bid.

In late February [2017], the Telegraph reported that campaign disclosures showed that Vote Leave campaigners had spent £3.5 million — about C$5.75 million [emphasis mine] — with a company called AggregateIQ, run by CEO Zack Massingham in downtown Victoria.

That was more than the Leave side paid any other company or individual during the campaign and about 40 per cent of its spending ahead of the June referendum that saw Britons narrowly vote to exit the European Union.

According to media reports, Aggregate develops advertising to be used on sites including Facebook, Twitter and YouTube, then targets messages to audiences who are likely to be receptive.

The Telegraph story described Victoria as “provincial” and “picturesque” and AggregateIQ as “secretive” and “low-profile.”

Canadian media also expressed surprise at AggregateIQ’s outsized role in the Brexit vote.

The Globe and Mail’s Paul Waldie wrote “It’s quite a coup for Mr. Massingham, who has only been involved in politics for six years and started AggregateIQ in 2013.”

Victoria Times Colonist columnist Jack Knox wrote “If you have never heard of AIQ, join the club.”

The Victoria company, however, appears to be connected to the much larger SCL Group, which describes itself on its website as “the global leader in data-driven communications.”

In the United States it works through related company Cambridge Analytica and has been involved in elections since 2012. Politico reported in 2015 that the firm was working on Ted Cruz’s presidential primary campaign.

And NBC and other media outlets reported that the Trump campaign paid Cambridge Analytica millions to crunch data on 230 million U.S. adults, using information from loyalty cards, club and gym memberships and charity donations [emphasis mine] to predict how an individual might vote and to shape targeted political messages.

That’s quite a chunk of change and I don’t believe that gym memberships, charity donations, etc. were the only sources of information (in the US, there’s voter registration, credit card information, and more) but the list did raise my eyebrows. It would seem we are under surveillance at all times, even in the gym.

In any event, I hope that Hirsh’s call for discussion is successful and that the discussion includes more critical thinking about the implications of Hirsh’s ‘Brave New World’.

Café Scientifique (Vancouver, Canada) May 30, 2017 talk: Jerilyn Prior redux

I’m not sure ‘redux’ is exactly the right term but I’m going to declare it ‘close enough’. This upcoming talk was originally scheduled for March 2016 (my March 29, 2016 posting) but cancelled when the venerable The Railway Club abruptly closed its doors after 84 years of operation.

Our next café will happen on TUESDAY MAY 30TH, 7:30PM in the back room
at YAGGER'S DOWNTOWN (433 W Pender). Our speaker for the evening
will be DR. JERILYNN PRIOR, who is Professor of Endocrinology and
Metabolism at the University of British Columbia, founder and scientific
director of the Centre for Menstrual Cycle and Ovulation Research
(CeMCOR), director of the BC Center of the Canadian Multicenter
Osteoporosis Study (CaMOS), and a past president of the Society for
Menstrual Cycle Research.  The title of her talk is:


43 years old with teenagers a full-time executive director of a not for
profit is not sleeping, she wakes soaked a couple of times a night, not
every night but especially around the time her period comes. As it does
frequently—it is heavy, even flooding. Her sexual interest is
virtually gone and she feels dry when she tries.

Her family doctor offered her The Pill. When she took it she got very
sore breasts, ankle swelling and high blood pressure. Her brain feels
fuzzy, she’s getting migraines, gaining weight and just can’t cope.
. . .
What’s going on? Does she need estrogen “replacement”?  If yes,
why when she’s still getting flow? Does The Pill work for other women?

We hope to see you there!

As I noted in March 2016, this seems more like a description for a workshop on perimenopause and consequently of more interest for doctors and perimenopausal women than the audience that Café Scientifique usually draws. Of course, I could be completely wrong.

‘Mother of all bombs’ is a nanoweapon?

According to physicist Louis A. Del Monte, in an April 14, 2017 opinion piece for Huffington, the ‘mother of all bombs’ is a nanoweapon (Note: Links have been removed),

The United States military dropped its largest non-nuclear bomb, the GBU-43/B Massive Ordnance Air Blast Bomb (MOAB), nicknamed the “mother of all bombs,” on an ISIS cave and tunnel complex in the Achin District of the Nangarhar province, Afghanistan [on Thursday, April 13, 2017]. The Achin District is the center of ISIS activity in Afghanistan. This was the first use in combat of the GBU-43/B Massive Ordnance Air Blast (MOAB).

… Although it carries only about 8 tons of explosives, the explosive mixture delivers a destructive impact equivalent of 11 tons of TNT.

There is little doubt the United States Department of Defense is likely using nanometals, such as nanoaluminum (alternately spelled nano-aluminum) mixed with TNT, to enhance the detonation properties of the MOAB. The use of nanoaluminum mixed with TNT was known to boost the explosive power of the TNT since the early 2000s. If true, this means that the largest known United States non-nuclear bomb is a nanoweapon. When most of us think about nanoweapons, we think small, essentially invisible weapons, like nanobots (i.e., tiny robots made using nanotechnology). That can often be the case. But, as defined in my recent book, Nanoweapons: A Growing Threat to Humanity (Potomac 2017), “Nanoweapons are any military technology that exploits the power of nanotechnology.” This means even the largest munition, such as the MOAB, is a nanoweapon if it uses nanotechnology.

… The explosive is H6, which is a mixture of five ingredients (by weight):

  • 44.0% RDX & nitrocellulose (RDX is a well-known explosive, more powerful than TNT, often used with TNT and other explosives. Nitrocellulose is a propellant or low-order explosive, originally known as gun-cotton.)
  • 29.5% TNT
  • 21.0% powdered aluminum
  • 5.0% paraffin wax as a phlegmatizing (i.e., stabilizing) agent.
  • 0.5% calcium chloride (to absorb moisture and eliminate the production of gas)

Note, the TNT and powdered aluminum account for over half the explosive payload by weight. It is highly likely that the “powdered aluminum” is nanoaluminum, since nanoaluminum can enhance the destructive properties of TNT. This argues that H6 is a nano-enhanced explosive, making the MOAB a nanoweapon.
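As a quick arithmetic check on the quoted H6 recipe (assuming, as the piece states, that the percentages are by weight), the ingredients do sum to 100 percent, and TNT plus powdered aluminum come to 50.5 percent, just over half the payload. A minimal Python sketch:

```python
# Sanity-check the H6 composition figures quoted above (percentages by weight).
h6 = {
    "RDX & nitrocellulose": 44.0,
    "TNT": 29.5,
    "powdered aluminum": 21.0,
    "paraffin wax": 5.0,
    "calcium chloride": 0.5,
}

total = sum(h6.values())
print(f"Total: {total:.1f}%")  # 100.0%

# Del Monte's point: TNT plus powdered aluminum make up over half the mixture.
tnt_plus_al = h6["TNT"] + h6["powdered aluminum"]
print(f"TNT + powdered aluminum: {tnt_plus_al:.1f}%")  # 50.5%, just over half

# The MOAB reportedly carries ~8 tons of explosive yet delivers ~11 tons of
# TNT-equivalent impact, i.e. the mixture is roughly 1.4x as energetic as TNT.
print(f"TNT equivalence factor: {11 / 8:.3f}")  # 1.375
```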

The United States GBU-43/B Massive Ordnance Air Blast Bomb (MOAB) was the largest non-nuclear bomb known until Russia detonated the Aviation Thermobaric Bomb of Increased Power, termed the “father of all bombs” (FOAB), in 2007. It is reportedly four times more destructive than the MOAB, even though it carries only 7 tons of explosives versus the 8 tons of the MOAB. Interestingly, the Russians claim to achieve the more destructive punch using nanotechnology.
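Taking the reported figures at face value, the gap in energy density between the two weapons is striking. A back-of-the-envelope calculation in Python (note that “four times more destructive” is the article’s claim, not a measured value, so the FOAB numbers here are only as good as that claim):

```python
# Compare reported yield-to-payload ratios for the MOAB and FOAB,
# using only the figures quoted in the article.
moab_payload_tons = 8          # tons of H6 explosive carried
moab_yield_tons = 11           # reported TNT-equivalent yield

foab_payload_tons = 7          # tons of explosive carried
# Claimed to be four times more destructive than the MOAB:
foab_yield_tons = 4 * moab_yield_tons  # 44 tons TNT equivalent

print(f"MOAB: {moab_yield_tons / moab_payload_tons:.2f}x TNT by weight")  # 1.38x
print(f"FOAB: {foab_yield_tons / foab_payload_tons:.2f}x TNT by weight")  # 6.29x
```

If those numbers hold, the Russian claim amounts to an explosive mixture several times more energetic per ton than H6, which is why the nanotechnology claim draws attention.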

If you have the time, I encourage you to read the piece in its entirety.

Repairing a ‘broken’ heart with a 3D printed patch

The idea of using stem cells to help heal your heart so you don’t have scar tissue seems to be a step closer to reality. From an April 14, 2017 news item on ScienceDaily which announces the research and explains why scar tissue in your heart is a problem,

A team of biomedical engineering researchers, led by the University of Minnesota, has created a revolutionary 3D-bioprinted patch that can help heal scarred heart tissue after a heart attack. The discovery is a major step forward in treating patients with tissue damage after a heart attack.

According to the American Heart Association, heart disease is the No. 1 cause of death in the U.S., killing more than 360,000 people a year. During a heart attack, a person loses blood flow to the heart muscle and that causes cells to die. Our bodies can’t replace those heart muscle cells so the body forms scar tissue in that area of the heart, which puts the person at risk for compromised heart function and future heart failure.

An April 13, 2017 University of Minnesota news release (also on EurekAlert but dated April 14, 2017), which originated the news item, describes the work in more detail,

In this study, researchers from the University of Minnesota-Twin Cities, University of Wisconsin-Madison, and University of Alabama-Birmingham used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab.

When the cell patch was placed on a mouse following a simulated heart attack, the researchers saw significant increase in functional capacity after just four weeks. Since the patch was made from cells and structural proteins native to the heart, it became part of the heart and absorbed into the body, requiring no further surgeries.

“This is a significant step forward in treating the No. 1 cause of death in the U.S.,” said Brenda Ogle, an associate professor of biomedical engineering at the University of Minnesota. “We feel that we could scale this up to repair hearts of larger animals and possibly even humans within the next several years.”

Ogle said that this research is different from previous research in that the patch is modeled after a digital, three-dimensional scan of the structural proteins of native heart tissue.  The digital model is made into a physical structure by 3D printing with proteins native to the heart and further integrating cardiac cell types derived from stem cells.  Only with 3D printing of this type can we achieve one micron resolution needed to mimic structures of native heart tissue.

“We were quite surprised by how well it worked given the complexity of the heart,” Ogle said.  “We were encouraged to see that the cells had aligned in the scaffold and showed a continuous wave of electrical signal that moved across the patch.”

Ogle said they are already beginning the next step to develop a larger patch that they would test on a pig heart, which is similar in size to a human heart.

The researchers have made this video of beating heart cells in a petri dish available,

Date: Published on Apr 14, 2017

Caption: Researchers used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab. Credit: Brenda Ogle, University of Minnesota

Here’s a link to and a citation for the paper,

Myocardial Tissue Engineering With Cells Derived From Human-Induced Pluripotent Stem Cells and a Native-Like, High-Resolution, 3-Dimensionally Printed Scaffold by Ling Gao, Molly E. Kupfer, Jangwook P. Jung, Libang Yang, Patrick Zhang, Yong Da Sie, Quyen Tran, Visar Ajeti, Brian T. Freeman, Vladimir G. Fast, Paul J. Campagnola, Brenda M. Ogle, Jianyi Zhang. Circulation Research, Volume 120, Issue 8 (April 14, 2017), pages 1318-1325. Originally published online January 9, 2017.

This paper appears to be open access.