
Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
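To get a feel for why transmitting only a subset of signals matters so much for power, here is a rough back-of-the-envelope sketch in Python. The channel counts, sampling rates, and bit depths below are illustrative assumptions of mine, not figures from the paper, but they show how dramatically the data rate (and hence transmit power) drops when only per-channel spike counts are sent instead of full waveforms:

```python
# Back-of-the-envelope comparison of wireless data rates for a neural implant.
# All numbers are illustrative assumptions, not figures from the paper.

def broadband_rate_bps(channels, sample_rate_hz, bits_per_sample):
    """Raw data rate if every channel's full waveform is transmitted."""
    return channels * sample_rate_hz * bits_per_sample

def feature_rate_bps(channels, bins_per_second, bits_per_bin):
    """Data rate if only per-channel spike counts (threshold crossings) are sent."""
    return channels * bins_per_second * bits_per_bin

# A typical intracortical array has ~96 channels; broadband neural data is
# often sampled around 30 kHz at ~12 bits per sample (assumed values).
full = broadband_rate_bps(channels=96, sample_rate_hz=30_000, bits_per_sample=12)
subset = feature_rate_bps(channels=96, bins_per_second=50, bits_per_bin=8)

print(f"broadband: {full / 1e6:.1f} Mbit/s")   # 34.6 Mbit/s
print(f"features:  {subset / 1e3:.1f} kbit/s") # 38.4 kbit/s
print(f"reduction: {full / subset:.0f}x")      # 900x
```

Since radio transmit power scales roughly with data rate, a reduction of this magnitude is the kind of thing that makes a safely implantable wireless device plausible.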

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of technology and human enhancement, with gene-editing as the technology in question. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement). (Note: Links have been removed.)

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.


It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg’ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis being that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials or entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news that raises questions even as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI’s text generator be so good that it would fool you? And, following on that thought, is the concomitant reluctance to release the research genuine or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me but once the discussion turns to making paper products, then it becomes more convincing to me. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the ars Technica blog provides some insight that’s partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.
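As a point of comparison, here is a minimal sketch of the kind of Markov-chain text bot the article contrasts GPT-2 with. Because each word is chosen by looking only at the single word before it, the model has no memory of the topic and drifts almost immediately (the corpus below is a toy example of my own, not anything from OpenAI's work):

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the chain: each step depends only on the single previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("recycling is good for the world recycling is bad for the economy "
          "the economy is good for the world")
model = build_bigram_model(corpus)
print(generate(model, "recycling"))
```

GPT-2's approach, by contrast, conditions each prediction on hundreds of preceding tokens, which is why it can keep a generated story in context the way a one-word-of-memory chain like this cannot.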

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.
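That difference can be made concrete with a toy example. The “shallow” predictor below keys only on the single previous word, so it is forced into one context-free reading of “sunny” no matter what the rest of the sentence says (the mini-corpus is invented purely for illustration):

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = [
    "it was sunny at the beach",
    "it was sunny all week",
    "we met sunny at the party",
]

# A "shallow" predictor: count which word follows each word,
# ignoring everything else in the sentence.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the globally most frequent follower of `word`,
    no matter what the rest of the sentence says."""
    return follows[word].most_common(1)[0][0]

# The prediction after "sunny" is identical whether the sentence is about
# weather or about a person named Sunny -- the model has exactly one,
# context-free view of the word.
print(predict_next("sunny"))  # "at"
```

A deeper model like GPT-2 conditions on the whole preceding context, which is what lets it separate weather-sunny from a person named Sunny in the way the article describes.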

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.

Hern’s article for the Guardian (February 14, 2019 article) acts as a good overview, while Gallagher’s ars Technica posting (February 15, 2019 posting) and Vincent’s article (February 14, 2019 article) for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

2016 thoughts and 2017 hopes from FrogHeart

This is the 4900th post on this blog and as FrogHeart moves forward to 5000, I’m thinking there will be some changes although I’m not sure what they’ll be. In the meantime, here are some random thoughts on the year that was in Canadian science and on the FrogHeart blog.

Changeover to Liberal government: year one

Hopes were high after the Trudeau government was elected. Certainly, there seems to have been a loosening of science communication policies, although it may not have been quite the open and transparent process people dreamed of. On the plus side, it’s been easier to participate in public consultations, but there has been no move (perceptible to me) towards open government science or better access to government-funded science papers.

Open Science in Québec

As far as I know, la crème de la crème of open science (internationally) is the Montreal Neurological Institute (Montreal Neuro; affiliated with McGill University). They bookended the year with two announcements. In January 2016, Montreal Neuro announced it was going to be an “Open Science” institution (my Jan. 22, 2016 posting),

The Montreal Neurological Institute (MNI) in Québec, Canada, known informally and widely as Montreal Neuro, has ‘opened’ its science research to the world. David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute.

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

Then, there’s my Dec. 19, 2016 posting about this Montreal Neuro announcement,

It’s one heck of a Christmas present. Canadian businessman Larry Tanenbaum and his wife Judy have given the Montreal Neurological Institute (Montreal Neuro), which is affiliated with McGill University, a $20M donation. From a Dec. 16, 2016 McGill University news release,

The Prime Minister of Canada, Justin Trudeau, was present today at the Montreal Neurological Institute and Hospital (MNI) for the announcement of an important donation of $20 million by the Larry and Judy Tanenbaum family. This transformative gift will help to establish the Tanenbaum Open Science Institute, a bold initiative that will facilitate the sharing of neuroscience findings worldwide to accelerate the discovery of leading edge therapeutics to treat patients suffering from neurological diseases.

‟Today, we take an important step forward in opening up new horizons in neuroscience research and discovery,” said Mr. Larry Tanenbaum. ‟Our digital world provides for unprecedented opportunities to leverage advances in technology to the benefit of science.  That is what we are celebrating here today: the transformation of research, the removal of barriers, the breaking of silos and, most of all, the courage of researchers to put patients and progress ahead of all other considerations.”

Neuroscience has reached a new frontier, and advances in technology now allow scientists to better understand the brain and all its complexities in ways that were previously deemed impossible. The sharing of research findings amongst scientists is critical, not only due to the sheer scale of data involved, but also because diseases of the brain and the nervous system are amongst the most compelling unmet medical needs of our time.

Neurological diseases, mental illnesses, addictions, and brain and spinal cord injuries directly impact 1 in 3 Canadians, representing approximately 11 million people across the country.

“As internationally-recognized leaders in the field of brain research, we are uniquely placed to deliver on this ambitious initiative and reinforce our reputation as an institution that drives innovation, discovery and advanced patient care,” said Dr. Guy Rouleau, Director of the Montreal Neurological Institute and Hospital and Chair of McGill University’s Department of Neurology and Neurosurgery. “Part of the Tanenbaum family’s donation will be used to incentivize other Canadian researchers and institutions to adopt an Open Science model, thus strengthening the network of like-minded institutes working in this field.”

Chief Science Advisor

Getting back to the federal government, we’re still waiting for a Chief Science Advisor. Should you be interested in the job, apply here. The job search was launched in early Dec. 2016 (see my Dec. 7, 2016 posting for details), a little over a year after the Liberal government was elected. I’m not sure why the process is taking so long. It’s not as if the Canadian government is inventing a position or trailblazing in this regard; many, many countries and jurisdictions have chief science advisors. Heck, the European Union managed to find its first chief science advisor in considerably less time than we’ve spent on the project. My guess: it just wasn’t a priority.

Prime Minister Trudeau, quantum, nano, and Canada’s 150th birthday

In April 2016, Prime Minister Justin Trudeau stunned many when he was able to answer, in an articulate and informed manner, a question about quantum physics during a press conference at the Perimeter Institute in Waterloo, Ontario (my April 18, 2016 post discussing that incident and the so called ‘quantum valley’ in Ontario).

In Sept. 2016, the University of Waterloo publicized the world’s smallest Canadian flag to celebrate the country’s upcoming 150th birthday and to announce its presence in QUANTUM: The Exhibition (a show which will tour across Canada). Here’s more from my Sept. 20, 2016 posting,

The record-setting flag was unveiled at IQC’s [Institute of Quantum Computing at the University of Waterloo] open house on September 17 [2016], which attracted nearly 1,000 visitors. It will also be on display in QUANTUM: The Exhibition, a Canada 150 Fund Signature Initiative, and part of Innovation150, a consortium of five leading Canadian science-outreach organizations. QUANTUM: The Exhibition is a 4,000-square-foot, interactive, travelling exhibit IQC developed highlighting Canada’s leadership in quantum information science and technology.

“I’m delighted that IQC is celebrating Canadian innovation through QUANTUM: The Exhibition and Innovation150,” said Raymond Laflamme, executive director of IQC. “It’s an opportunity to share the transformative technologies resulting from Canadian research and bring quantum computing to fellow Canadians from coast to coast to coast.”

The first of its kind, the exhibition will open at THEMUSEUM in downtown Kitchener on October 14 [2016], and then travel to science centres across the country throughout 2017.

You can find the English language version of QUANTUM: The Exhibition website here and the French language version of QUANTUM: The Exhibition website here.

There are currently four other venues for the show once it finishes its run in Waterloo. From QUANTUM’S Join the Celebration webpage,


  • Science World at TELUS World of Science, Vancouver
  • TELUS Spark, Calgary
  • Discovery Centre, Halifax
  • Canada Science and Technology Museum, Ottawa

I gather they’re still looking for other venues to host the exhibition. If interested, there’s this: Contact us.

Other than the flag, which is both nanoscale and microscale, they haven’t revealed what else will be included in their 4,000-square-foot exhibit but it will be “bilingual, accessible, and interactive.” Also, there will be stories.

Hmm. The exhibition is opening in roughly three weeks and they have no details. Strategy or disorganization? Only time will tell.

Calgary and quantum teleportation

This is one of my favourite stories of the year. Scientists at the University of Calgary teleported a photon’s quantum state six kilometres from the university to city hall, breaking the teleportation distance record. What I found particularly interesting was the support for science from Calgary City Hall. Here’s more from my Sept. 21, 2016 post,

Through a collaboration between the University of Calgary, The City of Calgary and researchers in the United States, a group of physicists led by Wolfgang Tittel, professor in the Department of Physics and Astronomy at the University of Calgary have successfully demonstrated teleportation of a photon (an elementary particle of light) over a straight-line distance of six kilometres using The City of Calgary’s fibre optic cable infrastructure. The project began with an Urban Alliance seed grant in 2014.

This accomplishment, which set a new record for distance of transferring a quantum state by teleportation, has landed the researchers a spot in the prestigious Nature Photonics scientific journal. The finding was published back-to-back with a similar demonstration by a group of Chinese researchers.

The research would not have been possible without access to the proper technology. One of the critical pieces of infrastructure that support quantum networking is accessible dark fibre. Dark fibre, so named because of its composition — a single optical cable with no electronics or network equipment on the alignment — doesn’t interfere with quantum technology.

The City of Calgary is building and provisioning dark fibre to enable next-generation municipal services today and for the future.

“By opening The City’s dark fibre infrastructure to the private and public sector, non-profit companies, and academia, we help enable the development of projects like quantum encryption and create opportunities for further research, innovation and economic growth in Calgary,” said Tyler Andruschak, project manager with Innovation and Collaboration at The City of Calgary.

As for the science of it (also from my post),

A Sept. 20, 2016 article by Robson Fletcher for CBC (Canadian Broadcasting News) online provides a bit more insight from the lead researcher (Note: A link has been removed),

“What is remarkable about this is that this information transfer happens in what we call a disembodied manner,” said physics professor Wolfgang Tittel, whose team’s work was published this week in the journal Nature Photonics.

“Our transfer happens without any need for an object to move between these two particles.”

A Sept. 20, 2016 University of Calgary news release by Drew Scherban, which originated the news item, provides more insight into the research,

“Such a network will enable secure communication without having to worry about eavesdropping, and allow distant quantum computers to connect,” says Tittel.

Experiment draws on ‘spooky action at a distance’

The experiment is based on the entanglement property of quantum mechanics, also known as “spooky action at a distance” — a property so mysterious that not even Einstein could come to terms with it.

“Being entangled means that the two photons that form an entangled pair have properties that are linked regardless of how far the two are separated,” explains Tittel. “When one of the photons was sent over to City Hall, it remained entangled with the photon that stayed at the University of Calgary.”

Next, the photon whose state was teleported to the university was generated in a third location in Calgary and then also travelled to City Hall where it met the photon that was part of the entangled pair.

“What happened is the instantaneous and disembodied transfer of the photon’s quantum state onto the remaining photon of the entangled pair, which is the one that remained six kilometres away at the university,” says Tittel.
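The steps Tittel describes can be sketched as a small state-vector simulation — a toy illustration of the textbook teleportation protocol, not the Calgary group’s code, and all names below are mine. Qubit 0 holds the state to teleport, qubits 1 and 2 form the entangled pair, and qubit 2 stands in for the distant photon at City Hall.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # projectors |0><0|, |1><1|

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def teleport(psi):
    """Return qubit 2's corrected state for each measurement outcome."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # entangled pair on q1,q2
    state = np.kron(psi, bell)                       # full 3-qubit state
    cnot01 = kron3(P[0], I, I) + kron3(P[1], X, I)   # CNOT: q0 controls q1
    state = kron3(H, I, I) @ cnot01 @ state          # Bell-measurement basis
    results = {}
    for m0 in (0, 1):
        for m1 in (0, 1):
            # Collapse onto the measurement outcome (m0, m1) on qubits 0, 1.
            collapsed = kron3(P[m0], P[m1], I) @ state
            q2 = collapsed.reshape(2, 2, 2)[m0, m1]
            q2 = q2 / np.linalg.norm(q2)
            # Classical correction: apply X if m1 == 1, then Z if m0 == 1.
            fixed = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ q2
            results[(m0, m1)] = fixed
    return results

psi = np.array([0.6, 0.8])  # an arbitrary state to teleport
for outcome, out_state in teleport(psi).items():
    print(outcome, np.allclose(out_state, psi))  # True for all four outcomes
```

The “disembodied” quality Tittel mentions shows up here: qubit 2 ends up in the input state for every outcome, even though only two classical measurement bits ever travel between the sites.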

Council of Canadian Academies and The State of Science and Technology and Industrial Research and Development in Canada

Preliminary data was released by the CCA’s expert panel in mid-December 2016. I reviewed that material briefly in my Dec. 15, 2016 post. I’m eagerly awaiting the full report, due in late 2017, which I hope will offer more surprises and greater insights than the preliminary report did; hopefully, I’ll have the time to critique it then.


Thank you to my online colleagues. While we don’t interact much, it’s hard to overstate how encouraging it is to know that these people continually participate in and help create the nano and/or science blogosphere.

David Bruggeman at his Pasco Phronesis blog keeps me up-to-date on science policy in the US, Canada, and internationally, as well as keeping me abreast of the performing arts/science scene. Also, kudos to David for raising my (and his audience’s) awareness of just how much science is discussed on late night US television. I don’t know how he does it but he keeps scooping me on Canadian science policy matters. Thankfully, I’m not bitter and hope he continues to scoop me, which will mean that I will get the information from somewhere since it won’t be from the Canadian government.

Tim Harper of Cientifica Research keeps me on my toes as he keeps shifting his focus. Most lately, it’s been on smart textiles and wearables. You can download his latest White Paper titled, Fashion, Smart Textiles, Wearables and Disappearables, from his website. Tim consults on nanotechnology and other emerging technologies at the international level.

Dexter Johnson of the Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website consistently provides informed insight into how a particular piece of research fits into the nano scene and often provides historical details that you’re not likely to get from anyone else.

Dr. Andrew Maynard is currently the founding Director of the Risk Innovation Lab at Arizona State University. I know him through his 2020 Science blog where he posts text and videos on many topics including emerging technologies, nanotechnologies, risk, science communication, and much more. Do check out 2020 Science as it is a treasure trove.

2017 hopes and dreams

I hope Canada’s Chief Science Advisor brings some fresh thinking to science in government and that the Council of Canadian Academies’ upcoming assessment on The State of Science and Technology and Industrial Research and Development in Canada is visionary. Also, let’s send up some collective prayers for the Canada Science and Technology Museum which has been closed since 2014 (?) due to black mold (?). It would be lovely to see it open in time for Canada’s 150th anniversary.

I’d like to see the nanotechnology promise come closer to a reality, which benefits as many people as possible.

As for me and FrogHeart, I’m not sure about the future. I do know there’s one more Steep project (I’m working with Raewyn Turner on a multiple project endeavour known as Steep; this project will involve sound and gold nanoparticles).

Should anything sparkling occur to me, I will add it at a future date.

In the meantime, Happy New Year and thank you from the bottom of my heart for reading this blog!

Soft contact lenses key to supercapacitor breakthrough

It seems like pretty exciting news for anyone following the supercapacitor story but they are being awfully cagey about it all in a Dec. 6, 2016 news item on Nanowerk,

Ground-breaking research from the University of Surrey and Augmented Optics Ltd., in collaboration with the University of Bristol, has developed potentially transformational technology which could revolutionise the capabilities of appliances that have previously relied on battery power to work.

This development by Augmented Optics Ltd., could translate into very high energy density super-capacitors making it possible to recharge your mobile phone, laptop or other mobile devices in just a few seconds.

The technology could have a seismic impact across a number of industries, including transport, aerospace, energy generation, and household applications such as mobile phones, flat screen electronic devices, and biosensors. It could also revolutionise electric cars, allowing them to recharge as quickly as it takes a regular non-electric car to refuel with petrol – a process that currently takes approximately 6-8 hours. Imagine: instead of an electric car being limited to a drive from London to Brighton, the new technology could allow it to travel from London to Edinburgh without recharging – and when it did recharge, the operation would take just a few minutes to perform.

I imagine the reason for the caginess has to do with the efforts to commercialize the technology. In any event, here’s a little more from a Dec. 5, 2016 University of Surrey press release by Ashley Lovell,

Supercapacitor buses are already being used in China, but they have a very limited range whereas this technology could allow them to travel a lot further between recharges. Instead of recharging every 2-3 stops this technology could mean they only need to recharge every 20-30 stops and that will only take a few seconds.

Elon Musk, of Tesla and SpaceX, has previously stated his belief that supercapacitors are likely to be the technology for future electric air transportation. We believe that the present scientific advance could make that vision a reality.

The technology was adapted from the principles used to make soft contact lenses, which Dr Donald Highgate (of Augmented Optics, and an alumnus of the University of Surrey) developed following his postgraduate studies at Surrey 40 years ago. Supercapacitors, an alternative power source to batteries, store energy using electrodes and electrolytes and both charge and deliver energy quickly, unlike conventional batteries which do so in a much slower, more sustained way. Supercapacitors have the ability to charge and discharge rapidly over very large numbers of cycles. However, because of their poor energy density per kilogramme (approximately just one twentieth of existing battery technology), they have, until now, been unable to compete with conventional battery energy storage in many applications.
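That “one twentieth” figure is easy to sanity-check with a back-of-the-envelope calculation (the cell numbers below are typical commercial values I’ve chosen for illustration, not figures from the Surrey/Bristol work): a capacitor stores E = ½CV², which converts readily to watt-hours per kilogram.

```python
def wh_per_kg(capacitance_f, voltage_v, mass_kg):
    """Gravimetric energy density of a capacitor: E = 1/2 * C * V^2."""
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0 / mass_kg  # joules -> watt-hours, per kilogram

# A typical large commercial supercapacitor cell: 3000 F, 2.7 V, ~0.5 kg.
sc = wh_per_kg(3000, 2.7, 0.5)
print(f"supercapacitor: {sc:.1f} Wh/kg")  # about 6 Wh/kg

# Lithium-ion cells sit roughly in the 150-250 Wh/kg range, so the
# "one twentieth" claim is the right order of magnitude.
print(f"ratio vs. a 200 Wh/kg battery: ~1/{200 / sc:.0f}")
```

The arithmetic makes the trade-off vivid: the supercapacitor can dump or absorb its energy in seconds, but it simply holds far less of it per kilogram than a battery.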

Dr Brendan Howlin of the University of Surrey, explained: “There is a global search for new energy storage technology and this new ultra capacity supercapacitor has the potential to open the door to unimaginably exciting developments.”

The ground-breaking research programme was conducted by researchers at the University of Surrey’s Department of Chemistry where the project was initiated by Dr Donald Highgate of Augmented Optics Ltd. The research team was co-led by the Principal Investigators Dr Ian Hamerton and Dr Brendan Howlin. Dr Hamerton continues to collaborate on the project in his new post at the University of Bristol, where the electrochemical testing to trial the research findings was carried out by fellow University of Bristol academic – David Fermin, Professor of Electrochemistry in the School of Chemistry.

Dr Ian Hamerton, Reader in Polymers and Composite Materials from the Department of Aerospace Engineering, University of Bristol said: “While this research has potentially opened the route to very high density supercapacitors, these *polymers have many other possible uses in which tough, flexible conducting materials are desirable, including bioelectronics, sensors, wearable electronics, and advanced optics. We believe that this is an extremely exciting and potentially game changing development.”

*the materials are based on large organic molecules composed of many repeated sub-units and bonded together to form a 3-dimensional network.

Jim Heathcote, Chief Executive of both Augmented Optics Ltd and Supercapacitor Materials Ltd, said: “It is a privilege to work with the teams from the University of Surrey and the University of Bristol. The test results from the new polymers suggest that extremely high energy density supercapacitors could be constructed in the very near future. We are now actively seeking commercial partners [emphasis mine] in order to supply our polymers and offer assistance to build these ultra high energy density storage devices.”

I was not able to find a website for Augmented Optics but there is one for SuperCapacitor Materials here.

Alberta’s Ingenuity Lab opens new facility in India and competes in the Carbon XPRIZE


The Ingenuity Lab in Alberta has made two recent announcements. The first one to catch my attention was a May 7, 2016 news item on Nanotechnology Now,

Ingenuity Lab is proud to announce the opening of the Ingenuity Lab Research Hub at Mahatma Gandhi University in Kottayam, Kerala India, to implement applied research and enable the translation of new 22nd century technologies. This new facility is the result of collaboration between the International and Inter University Centre for Nanoscience Nanotechnology (IIUCNN) and Ingenuity Lab to leverage what each participant does best.

Should the Nanotechnology Now news item not be available you can find the same information in a May 6, 2016 news item in The Canadian Business News Journal. Here’s the rest of the news item,

Ingenuity Lab, led by Dr. Carlo Montemagno, brings the best minds together to address global challenges and was voted the Best Nanotechnology Research Organisation of 2014 by The New Economy. IIUCNN is led by Professor Sabu Thomas, whose vision it is to perform and coordinate academic and research activities in the frontier areas of Nanoscience and Nanotechnology by incorporating physical, chemical, biological and environmental aspects.

The two institutions are world-renowned for their work, and the new partnership should cover areas as diverse as catalysis, macromolecules, environmental chemistry, biological processes and health and wellness.

“The initial focus,” according to Ingenuity Lab’s Director Dr. Carlo Montemagno, “will be on inexpensive point-of-care healthcare technologies and water availability for both agriculture and personal consumption.” However, in the future, he says, “We plan to expand the scope to include food safety and energy systems.”

Ingenuity Lab’s role is to focus on producing, adapting and supplying new materials to Ingenuity Lab India, which will focus on final device development and field-testing. The India team members know what system characteristics work best in developing economies, and will establish the figures of merit for an appropriate solution. Alberta team members will then use this information to exercise their skills in advanced materials and systems design, crafting the technology into its final form to be field-tested.

The collaboration is somewhat unique in that it includes the bilateral exchange of students and researchers to facilitate the commercial translation of new and game changing technologies.

Dr. Babu Sebastian, Honourable Vice Chancellor of Mahatma Gandhi University, will declare the opening of the new facility in the presence of Dr. Montemagno, who will explain the vision of this research hub in association with his plenary lecture of ICM 2016.


A May 9, 2016 press release on Market Wired describes Ingenuity Lab’s latest venture into carbon ‘transformation’,

Alberta-based Ingenuity Lab has entered the Carbon XPRIZE under the name of Ingenuity Carbon Solutions. With competition registration taking place in March, Ingenuity Carbon Solutions plans to launch its latest carbon transformation technology and win the backing it so deserves on the world stage.

Ingenuity Lab is working to develop a technology that transforms CO2 emissions and changes the conversation on carbon and its consequences for the environment. By developing nano particles that have the capability to sequester CO2 from facility gas flue emissions, the technology can metabolize emissions into marketable by-products.

The Carbon XPRIZE this year seeks to inspire solutions to the issue of climate change by incentivizing the development of new and emerging CO2 conversion technologies. Described recently in a WEF [World Economic Forum] survey as the biggest potential threat to the economy in 2016, climate change has been targeted as a priority issue, and the XPRIZE has done a great deal to provide answers to the climate question.

Renowned for its role in bringing new and radical thought leaders into the public domain, the XPRIZE Board of Trustees include Elon Musk, James Cameron and Arianna Huffington and the prize never fails to attract the world’s brightest minds.

This year’s Carbon XPRIZE challenges participants including Ingenuity Lab and its Ingenuity Carbon Solutions team to reimagine the climate question by accelerating the development of technologies to convert CO2 into valuable products. Ingenuity Carbon Solutions and others will compete in a three-round competition for a total prize purse of $20m, with the winnings going towards the technology’s continued development.

I hope to hear more good news soon. Alberta could certainly do with some of that as it copes with Fort McMurray’s monstrous wildfire (more here in a NASA/Goddard Space Flight Center May 9, 2016 news release on EurekAlert).

For anyone interested in Alberta’s ‘nano’ Ingenuity Lab, more can be found here.

Montreal Neuro goes open science

The Montreal Neurological Institute (MNI) in Québec, Canada, known informally and widely as Montreal Neuro, has ‘opened’ its science research to the world. David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute.

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

Montreal Neuro and its place in Canadian and world history

Before pursuing this announcement a little more closely, you might be interested in some of the institute’s research history (from the Montreal Neurological Institute Wikipedia entry and Note: Links have been removed),

The MNI was founded in 1934 by the neurosurgeon Dr. Wilder Penfield (1891–1976), with a $1.2 million grant from the Rockefeller Foundation of New York and the support of the government of Quebec, the city of Montreal, and private donors such as Izaak Walton Killam. In the years since the MNI’s first structure, the Rockefeller Pavilion was opened, several major structures were added to expand the scope of the MNI’s research and clinical activities. The MNI is the site of many Canadian “firsts.” Electroencephalography (EEG) was largely introduced and developed in Canada by MNI scientist Herbert Jasper, and all of the major new neuroimaging techniques—computer axial tomography (CAT), positron emission tomography (PET), and magnetic resonance imaging (MRI) were first used in Canada at the MNI. Working under the same roof, the Neuro’s scientists and physicians made discoveries that drew world attention. Penfield’s technique for epilepsy neurosurgery became known as the Montreal procedure. K.A.C. Elliott identified γ-aminobutyric acid (GABA) as the first inhibitory neurotransmitter. Brenda Milner revealed new aspects of brain function and ushered in the field of neuropsychology as a result of her groundbreaking study of the most famous neuroscience patient of the 20th century, H.M., who had anterograde amnesia and was unable to form new memories. In 2007, the Canadian government recognized the innovation and work of the MNI by naming it one of seven national Centres of Excellence in Commercialization and Research.

For those with the time and the interest, here’s a link to an interview (early 2015?) with Brenda Milner (and a bonus, related second link) as part of a science podcast series (from my March 6, 2015 posting),

Dr. Wendy Suzuki, a Professor of Neural Science and Psychology in the Center for Neural Science at New York University, whose research focuses on understanding how our brains form and retain new long-term memories and the effects of aerobic exercise on memory. Her book Healthy Brain, Happy Life will be published by Harper Collins in the Spring of 2015.

  • Totally Cerebral: Untangling the Mystery of Memory: Neuroscientist Wendy Suzuki introduces us to scientists who have uncovered some of the deepest secrets about our brains. She begins by talking with experimental psychologist Brenda Milner [interviewed in her office at McGill University, Montréal, Québec], who, in the 1950s, completely changed our understanding of the parts of the brain important for forming new long-term memories.
  • Totally Cerebral: The Man Without a Memory: Imagine never being able to form a new long term memory after the age of 27. Welcome to the life of the famous amnesic patient “HM”. Neuroscientist Suzanne Corkin studied HM for almost half a century, and gives us a glimpse of what daily life was like for him, and his tremendous contribution to our understanding of how our memories work.

Brief personal anecdote
For those who just want the science, you may want to skip this section.

About 15 years ago, I had the privilege of talking with Mary Filer, a former surgical nurse on Wilder Penfield’s team and an artist in glass. Originally from Saskatchewan, she was then in her 80s, living in Vancouver and still associated with Montreal Neuro, albeit as an artist rather than a surgical nurse.

Penfield had encouraged her to pursue her interest in the arts (he was an art/science aficionado) and at this point her work could be seen in many places throughout the world and, if memory serves, she had just been asked to go to the MNI for the unveiling of one of her latest pieces.

Her husband, then in his 90s, had founded the School of Architecture at McGill University. This couple had known all the ‘movers and shakers’ in Montreal society for decades and retired to Vancouver where their home was in a former chocolate factory.

It was one of those conversations, you just don’t forget.

More about ‘open science’ at Montreal Neuro

Brian Owens’ Jan. 21, 2016 article for Science Magazine offers some insight into the reason for the move to ‘open science’,

Guy Rouleau, the director of McGill University’s Montreal Neurological Institute (MNI) and Hospital in Canada, is frustrated with how slowly neuroscience research translates into treatments. “We’re doing a really shitty job,” he says. “It’s not because we’re not trying; it has to do with the complexity of the problem.”

So he and his colleagues at the renowned institute decided to try a radical solution. Starting this year, any work done there will conform to the principles of the “open-science” movement—all results and data will be made freely available at the time of publication, for example, and the institute will not pursue patents on any of its discoveries. …

“It’s an experiment; no one has ever done this before,” he says. The intent is that neuroscience research will become more efficient if duplication is reduced and data are shared more widely and earlier. …”

After a year of consultations among the institute’s staff, pretty much everyone—about 70 principal investigators and 600 other scientific faculty and staff—has agreed to take part, Rouleau says. Over the next 6 months, individual units will hash out the details of how each will ensure that its work lives up to guiding principles for openness that the institute has developed. …

Owens’ article provides more information about implementation and issues about sharing. I encourage you to read it in its entirety.

As for getting more research to the patient, there’s a Jan. 26, 2016 Cafe Scientifique talk in Vancouver (my Jan. 22, 2016 ‘Events’ posting; scroll down about 40% of the way) regarding that issue although there’s no hint that the speakers will be discussing ‘open science’.