Tag Archives: McGill University

ISEA (International Symposium on Electronic Arts) 2020: Why Sentience? still in October 2020 but virtually in Montréal, Québec

I wonder what happens to geography and time when you hold your conference virtually? Part of the excitement of a conference or other meetings is the promise of the destination with new people and new adventures. Whether 2020 is a pause between in-person meetings, a moment when everything changed, or some combination is yet to be determined but perhaps ISEA2020 will be a harbinger.

I received a June 10, 2020 notice (via email) with the latest news about ISEA2020,

Montreal, June 10, 2020    ISEA2020 from October 13 to 18, 2020 goes entirely digital, with an innovative experiential format.

The worldwide COVID-19 outbreak has forced ISEA2020 in Montreal to be postponed to October 13 to 18. The physical distancing measures put in place in many countries and the travel restrictions imposed to prevent the spread of the pandemic mean that we cannot all be physically present in Montreal. Montreal Digital Spring (Printemps numérique) – the organizers of ISEA2020 –  have thus decided to make the symposium a 100% online event. Our team is currently working on the platform that will allow us to come together, connect, and exchange knowledge and practices, despite the physical distance. We worked with our partners and collaborators, experts in art, design, science and technology to (if only begin to) reinvent the format of academic interdisciplinary conferencing! 

Our main strategies for ISEA2020 Online:

Programming: ISEA2020 is a full week of research and creation, 100% online, with more than 300 international speakers and artists from over 40 countries.

Connecting: We are working on the online platform to ensure we meet ISEA’s core values of encouraging and promoting creative exchanges between diverse groups; of creating opportunities for networking and informal meetings, in addition to ensuring the good flow of panel sessions.

Programming for all the time-zones: From October 13 to 16, conference presentations will unfold over 16 consecutive hours each day, in order to include participants in all the time zones, from East Asia to the west Americas.

Reduced registration fee: The registration fee has been reduced. In addition to saving travel costs, ISEA2020 Online is accessible at a significantly reduced fee, hoping it will attract a larger number of participants, including more students and independent artists. 

Live Q&A: All presentation sessions, including keynote sessions, will include live Q&A periods, mediated by invited delegates.

Art Programming:  We are working with artists and partners on strategies to showcase the selected projects and special programming.

While we regret not seeing you in Montreal, the new format will make ISEA2020 accessible to a larger number and will certainly contribute to a broader discussion on how to produce and transfer knowledge and showcase art through connected digital communication platforms. Our team is committed to ensuring the high standard of creative and academic contributions that is paramount to ISEA.

We look forward to seeing you online this fall!

REGISTRATION FEE

You can now purchase your ISEA2020 Online Pass at the Early Bird rates of CAD $99 (regular) and CAD $69 (students), offer valid until August 13 [2020].

NEW DEADLINE TO REGISTER (for presenters)

The deadline to register, to upload the camera-ready papers, and to fill in the Zone Festival form is July 27 at 11:59 pm (GMT-5). 

CONTACT

We updated our website. Please refer to the Frequently Asked Questions – FAQ page. If you have a specific question, please contact: isea.academic@printempsnumerique.ca for academic presentations / isea.artistic@printempsnumerique.ca for artworks / ISEA2020@printempsnumerique.ca for general questions/registration. 

READ THE PRESS RELEASE IN FRENCH

How we got here

The academic chairs have written this statement,

ISEA2020 ONLINE: WHY SENTIENCE?
OCTOBER 13-18, 2020

The academic chairs’ statements regarding the ISEA2020 online turn:

Since last August [2019] when we established the ISEA2020 theme of “Why Sentience?”, life on Earth has been dramatically transformed. Our belief in concepts like proximity, justice, equality, indeed, the very concept of the future itself, has been radically uprooted. As cultural organizations worldwide scramble to adapt, the ISEA2020 team has decided to reimagine the event for the anytime/anyplace zone of digital space and to transform it into an online experience. But we have also realized that there is no need to adjust the theme to make it more “responsive” to our current conditions. Despite their almost cataclysmic impact on the political-economic-social-cultural-ecological fabric of the world, the triumvirate forces of the coronavirus pandemic, its disastrous economic consequences, as well as systemic racial injustice have now acutely amplified ISEA2020’s question: “Why Sentience?” These conditions sharpen the need to stop, pause and re-examine what it means to be sentient, “the ability to feel or perceive.” They help us reformulate our notions of what the world is with us and beyond us. They give us a front-seat perspective on the corporeal and ecological entanglements between power and knowledge, animals and humans, machines and environment, oppression and liberation. They pointedly demonstrate that difference—social-economic-cultural—resonates through the sentient world. The virus—a 120-160 nm in diameter entity that is invisible to our human senses and considered neither living nor dead but ontologically somewhere in between [emphasis mine]—is thus perversely a great teacher and provides us lessons on how the modern splitting up of the sentient and inanimate worlds increasingly makes no sense.

ISEA’s mission aims to foster interdisciplinary academic discourse and exchange among culturally diverse organizations and individuals working with art, science and technology. As we write, ISEA2020 should have already passed into history. The new digital space of ISEA2020 will link the local community in Montreal with the international one beyond so that we can collectively rethink the form of such an event. The new platform will also allow us to examine close up these new and, at the same time, ongoing historical set of conditions; conditions that demand a response if we are to live in the coming (post)-pandemic world. 

Christine Ross – McGill University (Montreal, Canada)

Chris Salter – Concordia University/Hexagram (Montreal, Canada)

2020 Trailers

There is a conference trailer for this new ‘virtual’ version of the 2020 conference,

Montreal Digital Spring (Printemps numérique) produced both the English-language version and this one in French***. Note: Video credit: Guillaume Guardia.

I’m not sure why the French language version is so much shorter*** (maybe I found an abridged version?), in any case, the content is quite different and you may want to check out both trailers.

***ETA June 22, 2020 at 1550 PDT: The answer to my question as to why one trailer was shorter? Two different (but this year related) events. I failed to note that the second trailer was for “MTL Connect.” Here is a description of MTL Connect from Manuelle Freire (academic programme manager of ISEA2020, Printemps numérique),

The latter is an annual event organised in Montreal by Printemps numérique, consisting of different thematic pavilion. This year ISEA2020 is the art and creativity pavilion of MTL Connect, so part of a larger endeavour that is affected by this online turn in its entirety.

As for MTL Connect, there’s this from the homepage,

BRINGING TOGETHER DIGITAL MINDS, DIGITALLY

6 DAYS OF PROGRAMMING • +400 SPEAKERS • 50 COUNTRIES REPRESENTED • +10,000 ATTENDEES • THOUSANDS OF OPPORTUNITIES FOR INTERACTION

That’s it for the correction. ***

Meeting technology, cyber security, and local involvement

I emailed (Friday, June 19, 2020) a couple of questions to the organizers which they have kindly answered.

  1. Are you going to be using Zoom as the technology for virtual
    attendance? Will there be security measures for attendees?
  2. [A]re there going to be any local (Vancouver, BC) virtual or in-person get-togethers? By October it might be possible to have small groups (with appropriate precautions) meet in person for ISEA2020 discussions/participation in virtual events held elsewhere. (Just a thought)

They responded by Sunday, June 21, 2020. That was quick. The short answer to both questions is: “We don’t know yet.”

More specifically, Manuelle Freire (Printemps numerique) had this to say,

I will have to forward your first question regarding the technology of the platform, specifically cybersecurity, to the platform development project manager. Cybersecurity is an important matter that we have discussed internally and will be included in the FAQ and the ISEA2020, as soon as we have stabilized the different features of the platform and we are ready to release.

As for the second question,

In what comes to small groups meeting in person. It is indeed possible that groups [might] be able to meet in October [2020], but at this stage, with social distancing and travel restrictions in place, we are still facing degrees of uncertainty. While we regret not meeting everyone in Montréal, moving the symposium 100% online seemed the only safe and certain solution. No in-person activities are scheduled for now.

The questions were also sent to Philippe Pasquier, a locally based (Vancouver, BC) member of the ISEA2020 academic committee and he had this to say about the possibility of local, in-person get-togethers,

As for (2), this is a good idea. Let’s wait and see what will be possible and revisit this idea closer to the date. 

The responses have made me happy. Hearing that they take cybersecurity seriously is downright musical and learning that they are open to local, small, in person get-togethers is spirit-lifting.

Final words

In 2009, I attended an ISEA being held in Northern Ireland and Ireland and asked one of the organizers if any of their symposia had been held in Canada. Yes! Montréal, my source raved at length, hosted a great meeting.

The next Canadian ISEA host was Vancouver in 2015 and guess what? Someone in a lineup was raving about the Montréal meeting. It seems that 1995 meeting has taken on a legendary glow.

It was a privilege being able to attend two meetings in person. Legendary, problematic, or good, the meetings bring together exciting talent and disturbing and/or mind-expanding ideas and experiences. Given the circumstances the organizers find themselves dealing with, I wish them the best of luck, although I’m confident that, despite all the obstacles, ISEA2020 will be an extraordinary affair.

On a practical note, the $99 (or less) fee for the online pass is a good deal. (I know because I had to pay for mine when they were here in Vancouver in 2015. By the way, I’ve never regretted a penny of it.)

Canadian and Italian researchers go beyond graphene with 2D polymers

According to a May 20, 2020 McGill University news release (also on EurekAlert), a team of Canadian and Italian researchers has broken new ground in materials science (Note: There’s a press release I found a bit more accessible and therefore informative coming up after this one),

A study by a team of researchers from Canada and Italy recently published in Nature Materials could usher in a revolutionary development in materials science, leading to big changes in the way companies create modern electronics.

The goal was to develop two-dimensional materials, which are a single atomic layer thick, with added functionality to extend the revolutionary developments in materials science that started with the discovery of graphene in 2004.

In total, 19 authors worked on this paper from INRS [Institut National de la Recherche Scientifique], McGill [University], Lakehead [University], and Consiglio Nazionale delle Ricerche, the national research council in Italy.

This work opens exciting new directions, both theoretical and experimental. The integration of this system into a device (e.g. transistors) may lead to outstanding performances. In addition, these results will foster more studies on a wide range of two-dimensional conjugated polymers with different lattice symmetries, thereby gaining further insights into the structure vs. properties of these systems.

The Italian/Canadian team demonstrated the synthesis of large-scale two-dimensional conjugated polymers, also thoroughly characterizing their electronic properties. They achieved success by combining the complementary expertise of organic chemists and surface scientists.

“This work represents an exciting development in the realization of functional two-dimensional materials beyond graphene,” said Mark Gallagher, a Physics professor at Lakehead University.

“I found it particularly rewarding to participate in this collaboration, which allowed us to combine our expertise in organic chemistry, condensed matter physics, and materials science to achieve our goals.”

Dmytro Perepichka, a professor and chair of Chemistry at McGill University, said they have been working on this research for a long time.

“Structurally reconfigurable two-dimensional conjugated polymers can give a new breadth to applications of two-dimensional materials in electronics,” Perepichka said.

“We started dreaming of them more than 15 years ago. It’s only through this four-way collaboration, across the country and between the continents, that this dream has become the reality.”

Federico Rosei, a professor at the Énergie Matériaux Télécommunications Research Centre of the Institut National de la Recherche Scientifique (INRS) in Varennes who holds the Canada Research Chair in Nanostructured Materials since 2016, said they are excited about the results of this collaboration.

“These results provide new insights into mechanisms of surface reactions at a fundamental level and simultaneously yield a novel material with outstanding properties, whose existence had only been predicted theoretically until now,” he said.

About this study

“Synthesis of mesoscale ordered two-dimensional π-conjugated polymers with semiconducting properties” by G. Galeotti et al. was published in Nature Materials.

This research was partially supported by a project Grande Rilevanza Italy-Quebec of the Italian Ministero degli Affari Esteri e della Cooperazione Internazionale, Direzione Generale per la Promozione del Sistema Paese, the Natural Sciences and Engineering Research Council of Canada, the Fonds Québécois de la recherche sur la nature et les technologies and a US Army Research Office. Federico Rosei is also grateful to the Canada Research Chairs program for funding and partial salary support.

About McGill University

Founded in Montreal, Quebec, in 1821, McGill is a leading Canadian post-secondary institution. It has two campuses, 11 faculties, 13 professional schools, 300 programs of study and over 40,000 students, including more than 10,200 graduate students. McGill attracts students from over 150 countries around the world, its 12,800 international students making up 31 per cent of the student body. Over half of McGill students claim a first language other than English, including approximately 19% of our students who say French is their mother tongue.

About the INRS
The Institut National de la Recherche Scientifique (INRS) is the only institution in Québec dedicated exclusively to graduate level university research and training. The impacts of its faculty and students are felt around the world. INRS proudly contributes to societal progress in partnership with industry and community stakeholders, both through its discoveries and by training new researchers and technicians to deliver scientific, social, and technological breakthroughs in the future.

Lakehead University
Lakehead University is a fully comprehensive university with approximately 9,700 full-time equivalent students and over 2,000 faculty and staff at two campuses in Orillia and Thunder Bay, Ontario. Lakehead has 10 faculties, including Business Administration, Education, Engineering, Graduate Studies, Health & Behavioural Sciences, Law, Natural Resources Management, the Northern Ontario School of Medicine, Science & Environmental Studies, and Social Sciences & Humanities. In 2019, Maclean’s 2020 University Rankings, once again, included Lakehead University among Canada’s Top 10 primarily undergraduate universities, while Research Infosource named Lakehead ‘Research University of the Year’ in its category for the fifth consecutive year. Visit www.lakeheadu.ca

I’m a little surprised there wasn’t a quote from one of the Italian researchers in the McGill news release but then there isn’t a quote in this slightly more accessible May 18, 2020 Consiglio Nazionale delle Ricerche press release either,

Graphene’s isolation took the world by surprise and was meant to revolutionize modern electronics. However, it was soon realized that its intrinsic properties limit the utilization in our daily electronic devices. When a concept of Mathematics, namely Topology, met the field of on-surface chemistry, new materials with exotic features were theoretically discovered. Topological materials exhibit technological relevant properties such as quantum hall conductivity that are protected by a concept similar to the comparison of a coffee mug and a donut.  These structures can be synthesized by the versatile molecular engineering toolbox that surface reactions provide. Nevertheless, the realization of such a material yields access to properties that suit the figure of merits for modern electronic application and could eventually for example lead to solve the ever-increasing heat conflict in chip design. However, problems such as low crystallinity and defect rich structures prevented the experimental observation and kept it for more than a decade a playground only investigated theoretically.

An international team of scientists from Institut National de la Recherche Scientifique (Centre Energie, Matériaux et Télécommunications), McGill University and Lakehead University, both located in Canada, and the SAMOS laboratory of the Istituto di Struttura della Materia (Cnr), led by Giorgio Contini, demonstrates, in a recent publication in Nature Materials, that the synthesis of two-dimensional π-conjugated polymers with topological Dirac cone and flat bands became a reality allowing a sneak peek into the world of organic topological materials.

Complementary work of organic chemists and surface scientists lead to two-dimensional polymers on a mesoscopic scale and granted access to their electronic properties. The band structure of the topological polymer reveals both flat bands and a Dirac cone confirming the prediction of theory. The observed coexistence of both structures is of particular interest, since whereas Dirac cones yield massless charge carriers (a band velocity of the same order of magnitude of graphene has been obtained), necessary for technological applications, flat bands quench the kinetic energy of charge carriers and could give rise to intriguing phenomena such as the anomalous Hall effect, surface superconductivity or superfluid transport.

This work paths multiple new roads – both theoretical and experimental nature. The integration of this topological polymer into a device such as transistors possibly reveals immense performance. On the other hand, it will foster many researchers to explore a wide range of two-dimensional polymers with different lattice symmetries, obtaining insight into the relationship between geometrical and electrical topology, which would in return be beneficial to fine tune a-priori theoretical studies. These materials – beyond graphene – could be then used for both their intrinsic properties as well as their interplay in new heterostructure designs.

The authors are currently exploring the practical use of the realized material trying to integrate it into transistors, pushing toward a complete designing of artificial topological lattices.

This work was partially supported by a project Grande Rilevanza Italy-Quebec of the Italian Ministero degli Affari Esteri e della Cooperazione Internazionale (MAECI), Direzione Generale per la Promozione del Sistema Paese.

The Italians also included an image to accompany their press release,

Image of the synthesized material and its band structure. Courtesy: Consiglio Nazionale delle Ricerche

My heart sank when I saw the number of authors for this paper (WordPress no longer [since their Christmas 2018 update] makes it easy to add the author’s names quickly to the ‘tags field’). Regardless and in keeping with my practice, here’s a link to and a citation for the paper,

Synthesis of mesoscale ordered two-dimensional π-conjugated polymers with semiconducting properties by G. Galeotti, F. De Marchi, E. Hamzehpoor, O. MacLean, M. Rajeswara Rao, Y. Chen, L. V. Besteiro, D. Dettmann, L. Ferrari, F. Frezza, P. M. Sheverdyaeva, R. Liu, A. K. Kundu, P. Moras, M. Ebrahimi, M. C. Gallagher, F. Rosei, D. F. Perepichka & G. Contini. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0682-z Published 18 May 2020

This paper is behind a paywall.

McGill University team gets better understanding of nonribosomal peptide synthetases (NRPSs) also described as nanomachines

This research from McGill University (Montréal, Canada) focuses on enzymes and their possible utility as nanomachines for producing drugs. (For the uninitiated, nano means billionth, which, in turn, means these enzymes are measured at the nanoscale.)

An April 30, 2020 McGill University news release (also on EurekAlert) describes the work,

Many of the drugs and medicines that we rely on today are natural products taken from microbes like bacteria and fungi. Within these microbes, the drugs are made by tiny natural machines – mega-enzymes known as nonribosomal peptide synthetases (NRPSs). A research team led by McGill University has gained a better understanding of the structures of NRPSs and the processes by which they work. This improved understanding of NRPSs could potentially allow bacteria and fungi to be leveraged for the production of desired new compounds and lead to the creation of new potent antibiotics, immunosuppressants and other modern drugs.

“NRPSs are really fantastic enzymes that take small molecules like amino acids or other similar sized building blocks and assemble them into natural, biologically active, potent compounds, many of which are drugs,” said Martin Schmeing, Associate Professor in the Department of Biochemistry at McGill University, and corresponding author on the article that was recently published in Nature Chemical Biology. “An NRPS works like a factory assembly line that consists of a series of robotic workstations. Each station has multi-step workflows and moving parts that allow it to add one building block substrate to the growing drug, elongating and modifying it, and then passing it off to the next little workstation, all on the same huge enzyme.”

Ultra-intensive light beam allows scientists to see proteins

In their paper featured on the cover of the May 2020 issue of Nature Chemical Biology, the team reports visualizing an NRPS mechanical system by using the CMCF beamline at the Canadian Light Source (CLS). The CLS is a Canadian national lab [these types of labs are sometimes called synchrotrons] that produces the ultra-intense beams of X-rays required to image proteins, as even mega-enzymes are too small to see with any light microscope.

“Scientists have long been excited about the potential of bioengineering NRPSs by identifying the order of building blocks and reorganizing the workstations in the enzyme to create new drugs, but the effort has rarely been successful,” said Schmeing. “This is the first time anyone has seen how these enzymes transform keto acids into a building block that can be put into a peptide drug. This helps us understand how the NRPSs can use so very many building blocks to make the many different compounds and therapeutics.”

Here’s a link to and a citation for the paper,

Structural basis of keto acid utilization in nonribosomal depsipeptide synthesis by Diego A. Alonzo, Clarisse Chiche-Lapierre, Michael J. Tarry, Jimin Wang & T. Martin Schmeing. Nature Chemical Biology volume 16, pages 493–496 (2020). Published: 17 February 2020

This paper is behind a paywall.

Gene editing and personalized medicine: Canada

Back in the fall of 2018 I came across one of those overexcited pieces about personalized medicine and gene editing that are out there. This one came from an unexpected source, an author who is a “PhD Scientist in Medical Science (Blood and Vasculature)” (from Rick Gierczak’s LinkedIn profile).

It starts out promisingly enough, although I’m beginning to dread the use of the word ‘precise’ where medicine is concerned, (from a September 17, 2018 posting on the Science Borealis blog by Rick Gierczak (Note: Links have been removed),

CRISPR-Cas9 technology was accidentally discovered in the 1980s when scientists were researching how bacteria defend themselves against viral infection. While studying bacterial DNA called clustered regularly interspaced short palindromic repeats (CRISPR), they identified additional CRISPR-associated (Cas) protein molecules. Together, CRISPR and one of those protein molecules, termed Cas9, can locate and cut precise regions of bacterial DNA. By 2012, researchers understood that the technology could be modified and used more generally to edit the DNA of any plant or animal. In 2015, the American Association for the Advancement of Science chose CRISPR-Cas9 as science’s “Breakthrough of the Year”.

Today, CRISPR-Cas9 is a powerful and precise gene-editing tool [emphasis mine] made of two molecules: a protein that cuts DNA (Cas9) and a custom-made length of RNA that works like a GPS for locating the exact spot that needs to be edited (CRISPR). Once inside the target cell nucleus, these two molecules begin editing the DNA. After the desired changes are made, they use a repair mechanism to stitch the new DNA into place. Cas9 never changes, but the CRISPR molecule must be tailored for each new target — a relatively easy process in the lab. However, it’s not perfect, and occasionally the wrong DNA is altered [emphasis mine].

Note that Gierczak makes a point of mentioning that CRISPR/Cas9 is “not perfect.” And then, he gets excited (Note: Links have been removed),

CRISPR-Cas9 has the potential to treat serious human diseases, many of which are caused by a single “letter” mutation in the genetic code (A, C, T, or G) that could be corrected by precise editing. [emphasis mine] Some companies are taking notice of the technology. A case in point is CRISPR Therapeutics, which recently developed a treatment for sickle cell disease, a blood disorder that causes a decrease in oxygen transport in the body. The therapy targets a special gene called fetal hemoglobin that’s switched off a few months after birth. Treatment involves removing stem cells from the patient’s bone marrow and editing the gene to turn it back on using CRISPR-Cas9. These new stem cells are returned to the patient ready to produce normal red blood cells. In this case, the risk of error is eliminated because the new cells are screened for the correct edit before use.

The breakthroughs shown by companies like CRISPR Therapeutics are evidence that personalized medicine has arrived. [emphasis mine] However, these discoveries will require government regulatory approval from the countries where the treatment is going to be used. In the US, the Food and Drug Administration (FDA) has developed new regulations allowing somatic (i.e., non-germ) cell editing and clinical trials to proceed. [emphasis mine]

The potential treatment for sickle cell disease is exciting but Gierczak offers no evidence that this treatment or any unnamed others constitute proof that “personalized medicine has arrived.” In fact, Goldman Sachs, a US-based investment bank, makes the case that it never will.

Cost/benefit analysis

Edward Abrahams, president of the US-based Personalized Medicine Coalition, advocates for personalized medicine in his May 23, 2018 piece for statnews.com, while noting, in passing, the market forces represented by Goldman Sachs (Note: A link has been removed),

One of every four new drugs approved by the Food and Drug Administration over the last four years was designed to become a personalized (or “targeted”) therapy that zeros in on the subset of patients likely to respond positively to it. That’s a sea change from the way drugs were developed and marketed 10 years ago.

Some of these new treatments have extraordinarily high list prices. But focusing solely on the cost of these therapies rather than on the value they provide threatens the future of personalized medicine.

… most policymakers are not asking the right questions about the benefits of these treatments for patients and society. Influenced by cost concerns, they assume that prices for personalized tests and treatments cannot be justified even if they make the health system more efficient and effective by delivering superior, longer-lasting clinical outcomes and increasing the percentage of patients who benefit from prescribed treatments.

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

Let’s just chew on (contemplate) this one for a minute: “curing patients may not be ‘a sustainable business model’”!

Coming down to earth: policy

While I find Gierczak to be over-enthused, he, like Abrahams, emphasizes the importance of new policy, in his case, the focus is Canadian policy. From Gierczak’s September 17, 2018 posting (Note: Links have been removed),

In Canada, companies need approval from Health Canada. But a 2004 law called the Assisted Human Reproduction Act (AHR Act) states that it’s a criminal offence “to alter the genome of a human cell, or in vitro embryo, that is capable of being transmitted to descendants”. The Act is so broadly written that Canadian scientists are prohibited from using the CRISPR-Cas9 technology on even somatic cells. Today, Canada is one of the few countries in the world where treating a disease with CRISPR-Cas9 is a crime.

On the other hand, some countries provide little regulatory oversight for editing either germ or somatic cells. In China, a company often only needs to satisfy the requirements of the local hospital where the treatment is being performed. And, if germ-cell editing goes wrong, there is little recourse for the future generations affected.

The AHR Act was introduced to regulate the use of reproductive technologies like in vitro fertilization and research related to cloning human embryos during the 1980s and 1990s. Today, we live in a time when medical science, and its role in Canadian society, is rapidly changing. CRISPR-Cas9 is a powerful tool, and there are aspects of the technology that aren’t well understood and could potentially put patients at risk if we move ahead too quickly. But the potential benefits are significant. Updated legislation that acknowledges both the risks and current realities of genomic engineering [emphasis mine] would relieve the current obstacles and support a path toward the introduction of safe new therapies.

Criminal ban on human gene-editing of inheritable cells (in Canada)

I had no idea there was a criminal ban on the practice until reading this January 2017 editorial by Bartha Maria Knoppers, Rosario Isasi, Timothy Caulfield, Erika Kleiderman, Patrick Bedford, Judy Illes, Ubaka Ogbogu, Vardit Ravitsky, & Michael Rudnicki for (Nature) npj Regenerative Medicine (Note: Links have been removed),

Driven by the rapid evolution of gene editing technologies, international policy is examining which regulatory models can address the ensuing scientific, socio-ethical and legal challenges for regenerative and personalised medicine.1 Emerging gene editing technologies, including the CRISPR/Cas9 2015 scientific breakthrough,2 are powerful, relatively inexpensive, accurate, and broadly accessible research tools.3 Moreover, they are being utilised throughout the world in a wide range of research initiatives with a clear eye on potential clinical applications. Considering the implications of human gene editing for selection, modification and enhancement, it is time to re-examine policy in Canada relevant to these important advances in the history of medicine and science, and the legislative and regulatory frameworks that govern them. Given the potential human reproductive applications of these technologies, careful consideration of these possibilities, as well as ethical and regulatory scrutiny must be a priority.4

With the advent of human embryonic stem cell research in 1978, the birth of Dolly (the cloned sheep) in 1996 and the Raelian cloning hoax in 2003, the environment surrounding the enactment of Canada’s 2004 Assisted Human Reproduction Act (AHRA) was the result of a decade of polarised debate,5 fuelled by dystopian and utopian visions for future applications. Rightly or not, this led to the AHRA prohibition on a wide range of activities, including the creation of embryos (s. 5(1)(b)) or chimeras (s. 5(1)(i)) for research and in vitro and in vivo germ line alterations (s. 5(1)(f)). Sanctions range from a fine (up to $500,000) to imprisonment (up to 10 years) (s. 60 AHRA).

In Canada, the criminal ban on gene editing appears clear, the Act states that “No person shall knowingly […] alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;” [emphases mine] (s. 5(1)(f) AHRA). This approach is not shared worldwide as other countries such as the United Kingdom, take a more regulatory approach to gene editing research.1 Indeed, as noted by the Law Reform Commission of Canada in 1982, criminal law should be ‘an instrument of last resort’ used solely for “conduct which is culpable, seriously harmful, and generally conceived of as deserving of punishment”.6 A criminal ban is a suboptimal policy tool for science as it is inflexible, stifles public debate, and hinders responsiveness to the evolving nature of science and societal attitudes.7 In contrast, a moratorium such as the self-imposed research moratorium on human germ line editing called for by scientists in December 20158 can at least allow for a time limited pause. But like bans, they may offer the illusion of finality and safety while halting research required to move forward and validate innovation.

On October 1st, 2016, Health Canada issued a Notice of Intent to develop regulations under the AHRA but this effort is limited to safety and payment issues (i.e. gamete donation). Today, there is a need for Canada to revisit the laws and policies that address the ethical, legal and social implications of human gene editing. The goal of such a critical move in Canada’s scientific and legal history would be a discussion of the right of Canadians to benefit from the advancement of science and its applications as promulgated in article 27 of the Universal Declaration of Human Rights9 and article 15(b) of the International Covenant on Economic, Social and Cultural Rights,10 which Canada has signed and ratified. Such an approach would further ensure the freedom of scientific endeavour both as a principle of a liberal democracy and as a social good, while allowing Canada to be engaged with the international scientific community.

Even though it’s a bit old, I still recommend reading the open access editorial in full, if you have the time.

One last thing about the paper, the acknowledgements,

Sponsored by Canada’s Stem Cell Network, the Centre of Genomics and Policy of McGill University convened a ‘think tank’ on the future of human gene editing in Canada with legal and ethics experts as well as representatives and observers from government in Ottawa (August 31, 2016). The experts were Patrick Bedford, Janetta Bijl, Timothy Caulfield, Judy Illes, Rosario Isasi, Jonathan Kimmelman, Erika Kleiderman, Bartha Maria Knoppers, Eric Meslin, Cate Murray, Ubaka Ogbogu, Vardit Ravitsky, Michael Rudnicki, Stephen Strauss, Philip Welford, and Susan Zimmerman. The observers were Geneviève Dubois-Flynn, Danika Goosney, Peter Monette, Kyle Norrie, and Anthony Ridgway.

Competing interests

The authors declare no competing interests.

Both McGill and the Stem Cell Network pop up again. A November 8, 2017 article about the need for new Canadian gene-editing policies by Tom Blackwell for the National Post features some familiar names (Did someone have a budget for public relations and promotion?),

It’s one of the most exciting, and controversial, areas of health science today: new technology that can alter the genetic content of cells, potentially preventing inherited disease — or creating genetically enhanced humans.

But Canada is among the few countries in the world where working with the CRISPR gene-editing system on cells whose DNA can be passed down to future generations is a criminal offence, with penalties of up to 10 years in jail.

This week, one major science group announced it wants that changed, calling on the federal government to lift the prohibition and allow researchers to alter the genome of inheritable “germ” cells and embryos.

The potential of the technology is huge and the theoretical risks like eugenics or cloning are overplayed, argued a panel of the Stem Cell Network.

The step would be a “game-changer,” said Bartha Knoppers, a health-policy expert at McGill University, in a presentation to the annual Till & McCulloch Meetings of stem-cell and regenerative-medicine researchers [These meetings were originally known as the Stem Cell Network’s Annual General Meeting {AGM}]. [emphases mine]

“I’m completely against any modification of the human genome,” said one unidentified meeting attendee. “If you open this door, you won’t ever be able to close it again.”

If the ban is kept in place, however, Canadian scientists will fall further behind colleagues in other countries, say the experts behind the statement; they argue possible abuses can be prevented with good ethical oversight.

“It’s a human-reproduction law, it was never meant to ban and slow down and restrict research,” said Vardit Ravitsky, a University of Montreal bioethicist who was part of the panel. “It’s a sort of historical accident … and now our hands are tied.”

There are fears, as well, that CRISPR could be used to create improved humans who are genetically programmed to have certain facial or other features, or that the editing could have harmful side effects. Regardless, none of it is happening in Canada, good or bad.

In fact, the Stem Cell Network panel is arguably skirting around the most contentious applications of the technology. It says it is asking the government merely to legalize research for its own sake on embryos and germ cells — those in eggs and sperm — not genetic editing of embryos used to actually get women pregnant.

The highlighted portions in the last two paragraphs of the excerpt were written one year prior to the claims by a Chinese scientist that he had run a clinical trial resulting in gene-edited twins, Lulu and Nana. (See my November 28, 2018 posting for a comprehensive overview of the original furor.) I have yet to publish a followup posting featuring the news that the CRISPR twins may have been ‘improved’ more extensively than originally realized. The initial reports about the twins focused on an illness-related reason (making them HIV ‘immune’) but made no mention of enhanced cognitive skills, an apparent side effect of eliminating the gene that would make them HIV ‘immune’. To date, the researcher has not made the bulk of his data available for an in-depth analysis to support his claim that he successfully gene-edited the twins. As well, there were apparently seven other pregnancies coming to term as part of the researcher’s clinical trial and there has been no news about those births.

Risk analysis innovation

Before moving onto the innovation of risk analysis, I want to focus a little more on at least one of the risks that gene-editing might present. Gierczak noted that CRISPR/Cas9 is “not perfect,” which acknowledges the truth but doesn’t convey all that much information.

While the terms ‘precision’ and ‘scissors’ are used frequently when describing the CRISPR technique, scientists actually mean that the technique is significantly ‘more precise’ than other techniques but they are not referencing an engineering level of precision. As for the ‘scissors’, it’s an analogy scientists like to use but in fact CRISPR is not as efficient and precise as a pair of scissors.

Michael Le Page in a July 16, 2018 article for New Scientist lays out some of the issues (Note: A link has been removed),

A study of CRISPR suggests we shouldn’t rush into trying out CRISPR genome editing inside people’s bodies just yet. The technique can cause big deletions or rearrangements of DNA [emphasis mine], says Allan Bradley of the Wellcome Sanger Institute in the UK, meaning some therapies based on CRISPR may not be quite as safe as we thought.

The CRISPR genome editing technique is revolutionising biology, enabling us to create new varieties of plants and animals and develop treatments for a wide range of diseases.

The CRISPR Cas9 protein works by cutting the DNA of a cell in a specific place. When the cell repairs the damage, a few DNA letters get changed at this spot – an effect that can be exploited to disable genes.

At least, that’s how it is supposed to work. But in studies of mice and human cells, Bradley’s team has found that in around a fifth of cells, CRISPR causes deletions or rearrangements more than 100 DNA letters long. These surprising changes are sometimes thousands of letters long.

“I do believe the findings are robust,” says Gaetan Burgio of the Australian National University, an expert on CRISPR who has debunked previous studies questioning the method’s safety. “This is a well-performed study and fairly significant.”

I covered the Bradley paper and the concerns in a July 17, 2018 posting ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle‘. (The ‘kerfuffle’ was in reference to a report that the CRISPR market was affected by the publication of Bradley’s paper.)
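As a crude illustration of why those large deletions matter, the cut-and-repair process described in the excerpt can be mocked up as a toy simulation. Everything here is invented for illustration (the sequence and indel sizes are arbitrary; the one-in-five rate is borrowed loosely from the Bradley finding); it is not a biological model:

```python
import random

def simulate_repair(sequence, cut_site, large_deletion_rate=0.2, seed=None):
    """Toy model of error-prone repair after a double-strand break.

    With probability `large_deletion_rate` a large deletion (100-2,000
    bases) occurs, echoing the roughly one-in-five frequency reported by
    Bradley's team; otherwise a small indel of 1-10 bases is introduced.
    All rates and sizes are illustrative, not measured biology.
    """
    rng = random.Random(seed)
    if rng.random() < large_deletion_rate:
        size = rng.randint(100, 2000)  # large, unintended deletion
    else:
        size = rng.randint(1, 10)      # small indel, often enough to disable a gene
    start = max(0, cut_site - size // 2)
    return sequence[:start] + sequence[start + size:], size

genome = "ACGT" * 1000  # 4,000-base toy sequence
edited, lost = simulate_repair(genome, cut_site=2000, seed=42)
print(f"lost {lost} bases; {len(edited)} remain")
```

Running many repairs with different seeds shows the point of the study: most outcomes are the intended small edits, but a sizeable minority remove far more DNA than the ‘scissors’ metaphor implies.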

Despite not moving swiftly enough for some researchers, Health Canada has nonetheless managed to release an ‘outcome’ report about a consultation/analysis started in October 2016. Before getting to the consultation’s outcome, it’s interesting to look at how the consultation’s call for response was described (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

In October 2016, recognizing the need to strengthen the regulatory framework governing assisted human reproduction in Canada, Health Canada announced its intention to bring into force the dormant sections of the Assisted Human Reproduction Act  and to develop the necessary supporting regulations.

This consultation document provides an overview of the key policy proposals that will help inform the development of regulations to support bringing into force Section 10, Section 12 and Sections 45-58 of the Act. Specifically, the policy proposals describe the Department’s position on the following:

Section 10: Safety of Donor Sperm and Ova

  • Scope and application
  • Regulated parties and their regulatory obligations
  • Processing requirements, including donor suitability assessment
  • Record-keeping and traceability

Section 12: Reimbursement

  • Expenditures that may be reimbursed
  • Process for reimbursement
  • Creation and maintenance of records

Sections 45-58: Administration and Enforcement

  • Scope of the administration and enforcement framework
  • Role of inspectors designated under the Act

The purpose of the document is to provide Canadians with an opportunity to review the policy proposals and to provide feedback [emphasis mine] prior to the Department finalizing policy decisions and developing the regulations. In addition to requesting stakeholders’ general feedback on the policy proposals, the Department is also seeking input on specific questions, which are included throughout the document.

It took me a while to find the relevant section (in particular, take note of ‘Federal Regulatory Oversight’),

3.2. AHR in Canada Today

Today, an increasing number of Canadians are turning to AHR technologies to grow or build their families. A 2012 Canadian study (Footnote 1) found that infertility is on the rise in Canada, with roughly 16% of heterosexual couples experiencing infertility. In addition to rising infertility, the trend of delaying marriage and parenthood, scientific advances in cryopreserving ova, and the increasing use of AHR by LGBTQ2 couples and single parents to build a family are all contributing to an increase in the use of AHR technologies.

The growing use of reproductive technologies by Canadians to help build their families underscores the need to strengthen the AHR Act. While the approach to regulating AHR varies from country to country, Health Canada has considered international best practices and the need for regulatory alignment when developing the proposed policies set out in this document. …

3.2.1 Federal Regulatory Oversight

Although the scope of the AHR Act was significantly reduced in 2012 and some of the remaining sections have not yet been brought into force, there are many important sections of the Act that are currently administered and enforced by Health Canada, as summarized generally below:

Section 5: Prohibited Scientific and Research Procedures
Section 5 prohibits certain types of scientific research and clinical procedures that are deemed unacceptable, including: human cloning, the creation of an embryo for non-reproductive purposes, maintaining an embryo outside the human body beyond the fourteenth day, sex selection for non-medical reasons, altering the genome in a way that could be transmitted to descendants, and creating a chimera or a hybrid. [emphasis mine]

….

It almost seems as if they were hiding the section that broached the human gene-editing question. It doesn’t seem to have worked as, it appears, there are some very motivated parties determined to reframe the discussion. Health Canada’s ‘outcome’ report, published March 2019, What we heard: A summary of scanning and consultations on what’s next for health product regulation reflects the success of those efforts,

1.0 Introduction and Context

Scientific and technological advances are accelerating the pace of innovation. These advances are increasingly leading to the development of health products that are better able to predict, define, treat, and even cure human diseases. Globally, many factors are driving regulators to think about how to enable health innovation. To this end, Health Canada has been expanding beyond existing partnerships and engaging both domestically and internationally. This expanding landscape of products and services comes with a range of new challenges and opportunities.

In keeping up to date with emerging technologies and working collaboratively through strategic partnerships, Health Canada seeks to position itself as a regulator at the forefront of health innovation. Following the targeted sectoral review of the Health and Biosciences Sector Regulatory Review consultation by the Treasury Board Secretariat, Health Canada held a number of targeted meetings with a broad range of stakeholders.

This report outlines the methodologies used to look ahead at the emerging health technology environment, [emphasis mine] the potential areas of focus that resulted, and the key findings from consultations.

… the Department identified the following key drivers that are expected to shape the future of health innovation:

  1. The use of “big data” to inform decision-making: Health systems are generating more data, and becoming reliant on this data. The increasing accuracy, types, and volume of data available in real time enable automation and machine learning that can forecast activity, behaviour, or trends to support decision-making.
  2. Greater demand for citizen agency: Canadians increasingly want and have access to more information, resources, options, and platforms to manage their own health (e.g., mobile apps, direct-to-consumer services, decentralization of care).
  3. Increased precision and personalization in health care delivery: Diagnostic tools and therapies are increasingly able to target individual patients with customized therapies (e.g., individual gene therapy).
  4. Increased product complexity: Increasingly complex products do not fit well within conventional product classifications and standards (e.g., 3D printing).
  5. Evolving methods for production and distribution: In some cases, manufacturers and supply chains are becoming more distributed, challenging the current framework governing production and distribution of health products.
  6. The ways in which evidence is collected and used are changing: The processes around new drug innovation, research and development, and designing clinical trials are evolving in ways that are more flexible and adaptive.

With these key drivers in mind, the Department selected the following six emerging technologies for further investigation to better understand how the health product space is evolving:

  1. Artificial intelligence, including activities such as machine learning, neural networks, natural language processing, and robotics.
  2. Advanced cell therapies, such as individualized cell therapies tailor-made to address specific patient needs.
  3. Big data, from sources such as sensors, genetic information, and social media that are increasingly used to inform patient and health care practitioner decisions.
  4. 3D printing of health products (e.g., implants, prosthetics, cells, tissues).
  5. New ways of delivering drugs that bring together different product lines and methods (e.g., nano-carriers, implantable devices).
  6. Gene editing, including individualized gene therapies that can assist in preventing and treating certain diseases.

Next, to test the drivers identified and further investigate emerging technologies, the Department consulted key organizations and thought leaders across the country with expertise in health innovation. To this end, Health Canada held seven workshops with over 140 representatives from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians in Ottawa, Toronto, Montreal, and Vancouver. [emphases mine]

The ‘outcome’ report, ‘What we heard …’, is well worth reading in its entirety; it’s about 9 pp.

I have one comment: ‘stakeholders’ don’t seem to include anyone who isn’t “from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians” or from “Ottawa, Toronto, Montreal, and Vancouver.” Aren’t the rest of us stakeholders?

Innovating risk analysis

This line in the report caught my eye (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

There is increasing need to enable innovation in a flexible, risk-based way, with appropriate oversight to ensure safety, quality, and efficacy. [emphases mine]

It reminded me of the 2019 federal budget (from my March 22, 2019 posting). One comment before proceeding: regulation and risk are tightly linked and, so, by innovating regulation they are by extension also innovating risk analysis,

… Budget 2019 introduces the first three “Regulatory Roadmaps” to specifically address stakeholder issues and irritants in these sectors, informed by over 140 responses [emphasis mine] from businesses and Canadians across the country, as well as recommendations from the Economic Strategy Tables.

Introducing Regulatory Roadmaps

These Roadmaps lay out the Government’s plans to modernize regulatory frameworks, without compromising our strong health, safety, and environmental protections. They contain proposals for legislative and regulatory amendments as well as novel regulatory approaches to accommodate emerging technologies, including the use of regulatory sandboxes and pilot projects—better aligning our regulatory frameworks with industry realities.

Budget 2019 proposes the necessary funding and legislative revisions so that regulatory departments and agencies can move forward on the Roadmaps, including providing the Canadian Food Inspection Agency, Health Canada and Transport Canada with up to $219.1 million over five years, starting in 2019–20, (with $0.5 million in remaining amortization), and $3.1 million per year on an ongoing basis.

In the coming weeks, the Government will be releasing the full Regulatory Roadmaps for each of the reviews, as well as timelines for enacting specific initiatives, which can be grouped in the following three main areas:

What Is a Regulatory Sandbox? Regulatory sandboxes are controlled “safe spaces” in which innovative products, services, business models and delivery mechanisms can be tested without immediately being subject to all of the regulatory requirements.
– European Banking Authority, 2017

Establishing a regulatory sandbox for new and innovative medical products
The regulatory approval system has not kept up with new medical technologies and processes. Health Canada proposes to modernize regulations to put in place a regulatory sandbox for new and innovative products, such as tissues developed through 3D printing, artificial intelligence, and gene therapies targeted to specific individuals. [emphasis mine]

Modernizing the regulation of clinical trials
Industry and academics have expressed concerns that regulations related to clinical trials are overly prescriptive and inconsistent. Health Canada proposes to implement a risk-based approach [emphasis mine] to clinical trials to reduce costs to industry and academics by removing unnecessary requirements for low-risk drugs and trials. The regulations will also provide the agri-food industry with the ability to carry out clinical trials within Canada on products such as food for special dietary use and novel foods.

Does the government always get 140 responses from a consultation process? Moving on, I agree with finding new approaches to regulatory processes and oversight and, by extension, new approaches to risk analysis.

Earlier in this post, I asked if someone had a budget for public relations/promotion. I wasn’t joking. My March 22, 2019 posting also included these line items in the proposed 2019 budget,

Budget 2019 proposes to make additional investments in support of the following organizations:
Stem Cell Network: Stem cell research—pioneered by two Canadians in the 1960s [James Till and Ernest McCulloch]—holds great promise for new therapies and medical treatments for respiratory and heart diseases, spinal cord injury, cancer, and many other diseases and disorders. The Stem Cell Network is a national not-for-profit organization that helps translate stem cell research into clinical applications and commercial products. To support this important work and foster Canada’s leadership in stem cell research, Budget 2019 proposes to provide the Stem Cell Network with renewed funding of $18 million over three years, starting in 2019–20.

Genome Canada: The insights derived from genomics—the study of the entire genetic information of living things encoded in their DNA and related molecules and proteins—hold the potential for breakthroughs that can improve the lives of Canadians and drive innovation and economic growth. Genome Canada is a not-for-profit organization dedicated to advancing genomics science and technology in order to create economic and social benefits for Canadians. To support Genome Canada’s operations, Budget 2019 proposes to provide Genome Canada with $100.5 million over five years, starting in 2020–21. This investment will also enable Genome Canada to launch new large-scale research competitions and projects, in collaboration with external partners, ensuring that Canada’s research community continues to have access to the resources needed to make transformative scientific breakthroughs and translate these discoveries into real-world applications.

Years ago, I managed to find a webpage with all of the proposals various organizations were submitting to a government budget committee. It was eye-opening. You can tell which organizations were able to hire someone who knew the current government buzzwords and the things a government bureaucrat would want to hear, and which organizations didn’t.

Of course, if the government of the day is adamantly against or uninterested, no amount of persuasion will work to get your organization more money in the budget.

Finally

Reluctantly, I am inclined to explore the topic of emerging technologies such as gene-editing not only in the field of agriculture (for gene-editing of plants, fish, and animals see my November 28, 2018 posting) but also with humans. At the very least, it needs to be discussed whether we choose to participate or not.

If you are interested in the arguments against changing Canada’s prohibition against gene-editing of humans, there’s an October 2, 2017 posting on Impact Ethics by Françoise Baylis, Professor and Canada Research Chair in Bioethics and Philosophy at Dalhousie University, and Alana Cattapan of the Johnson Shoyama Graduate School of Public Policy at the University of Saskatchewan, which makes some compelling arguments. Of course, it was written before the CRISPR twins (my November 28, 2018 posting).

Recalling CRISPR Therapeutics (mentioned by Gierczak), the company received permission to run clinical trials in the US in October 2018 after the FDA (US Food and Drug Administration) lifted an earlier ban on their trials, according to an Oct. 10, 2018 article by Frank Vinhuan for exome,

The partners also noted that their therapy is making progress outside of the U.S. They announced that they have received regulatory clearance in “multiple countries” to begin tests of the experimental treatment in both sickle cell disease and beta thalassemia, …

It seems to me that the quotes around “multiple countries” are meant to suggest doubt of some kind. Generally speaking, company representatives make those kinds of generalizations when they’re trying to pump up their copy. E.g., a 50% increase in attendance but no whole numbers to tell you what that means. It could mean two people attended the first year and then brought a friend the next year, or 100 people attended and the next year there were 150.

Despite attempts to declare personalized medicine as having arrived, I think everything is still in flux with no preordained outcome. The future has yet to be determined but it will be and I, for one, would like to have some say in the matter.

Democratizing science … neuroscience, that is

What is going on with the neuroscience folks? First it was Montreal Neuro opening up its science, as featured in my January 22, 2016 posting,

The Montreal Neurological Institute (MNI) in Québec, Canada, known informally and widely as Montreal Neuro, has ‘opened’ its science research to the world. David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute.

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

Whether or not they were inspired by the MNI, the scientists at the University of Washington (UW [state]) have found their own unique way of opening up science. From a March 15, 2018 UW news blog posting (also on EurekAlert) by James Urton, Note: Links have been removed,

Over the past few years, scientists have faced a problem: They often cannot reproduce the results of experiments done by themselves or their peers.

This “replication crisis” plagues fields from medicine to physics, and likely has many causes. But one is undoubtedly the difficulty of sharing the vast amounts of data collected and analyses performed in so-called “big data” studies. The volume and complexity of the information also can make these scientific endeavors unwieldy when it comes time for researchers to share their data and findings with peers and the public.

Researchers at the University of Washington have developed a set of tools to make one critical area of big data research — that of our central nervous system — easier to share. In a paper published online March 5 [2018] in Nature Communications, the UW team describes an open-access browser they developed to display, analyze and share neurological data collected through a type of magnetic resonance imaging study known as diffusion-weighted MRI.

“There has been a lot of talk among researchers about the replication crisis,” said lead author Jason Yeatman. “But we wanted a tool — ready, widely available and easy to use — that would actually help fight the replication crisis.”

Yeatman — who is an assistant professor in the UW Department of Speech & Hearing Sciences and the Institute for Learning & Brain Sciences (I-LABS) — is describing AFQ-Browser. This web browser-based tool, freely available online, is a platform for uploading, visualizing, analyzing and sharing diffusion MRI data in a format that is publicly accessible, improving transparency and data-sharing methods for neurological studies. In addition, since it runs in the web browser, AFQ-Browser is portable — requiring no additional software package or equipment beyond a computer and an internet connection.

“One major barrier to data transparency in neuroscience is that so much data collection, storage and analysis occurs on local computers with special software packages,” said senior author Ariel Rokem, a senior data scientist in the UW eScience Institute. “But using AFQ-Browser, we eliminate those requirements and make uploading, sharing and analyzing diffusion-weighted MRI data a simple, straightforward process.”

Diffusion-weighted MRI measures the movement of fluid in the brain and spinal cord, revealing the structure and function of white-matter tracts. These are the connections of the central nervous system, tissues that are made up primarily of axons that transmit long-range signals between neural circuits. Diffusion MRI research on brain connectivity has fundamentally changed the way neuroscientists understand human brain function: The state, organization and layout of white matter tracts are at the core of cognitive functions such as memory, learning and other capabilities. Data collected using diffusion-weighted MRI can be used to diagnose complex neurological conditions such as multiple sclerosis (MS) and amyotrophic lateral sclerosis (ALS). Researchers also use diffusion-weighted MRI data to study the neurological underpinnings of conditions such as dyslexia and learning disabilities.

“This is a widely-used technique in neuroscience research, and it is particularly amenable to the benefits that can be gleaned from big data, so it became a logical starting point for developing browser-based, open-access tools for the field,” said Yeatman.

The AFQ-Browser — the AFQ stands for Automated Fiber-tract Quantification — can receive diffusion-weighted MRI data and perform tract analysis for each individual subject. The analyses occur via a remote server, again eliminating technical and financial barriers for researchers. The AFQ-Browser also contains interactive tools to display data for multiple subjects — allowing a researcher to easily visualize how white matter tracts might be similar or different among subjects, identify trends in the data and generate hypotheses for future experiments. Researchers also can insert additional code to analyze the data, as well as save, upload and share data instantly with fellow researchers.
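The per-subject "tract quantification" idea is easy to sketch, even without the tool itself. The illustrative Python below is not the AFQ-Browser API (the function and subject names are invented for this example); it simply averages a diffusion metric such as fractional anisotropy (FA) into a fixed number of nodes along a tract, then serializes the per-subject profiles as JSON, the kind of browser-friendly format that makes this sort of tool portable:

```python
import json
import statistics

def tract_profile(fa_samples, n_nodes=5):
    """Average raw FA samples into a fixed-length profile of n_nodes,
    so different subjects can be compared node by node along a tract."""
    chunk = len(fa_samples) // n_nodes
    return [statistics.mean(fa_samples[i * chunk:(i + 1) * chunk])
            for i in range(n_nodes)]

# Two toy "subjects" with made-up FA values sampled along a tract
subjects = {
    "sub-01": [0.42, 0.44, 0.47, 0.51, 0.53, 0.52, 0.49, 0.46, 0.43, 0.41],
    "sub-02": [0.38, 0.40, 0.43, 0.45, 0.47, 0.46, 0.44, 0.42, 0.40, 0.39],
}

profiles = {sid: tract_profile(fa) for sid, fa in subjects.items()}

# JSON is something any web browser can load and plot -- no special
# software package required, which is the portability argument above.
print(json.dumps(profiles, indent=2))
```

In the real tool, profiles like these are what get visualized and compared across subjects in the browser.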

“We wanted this tool to be as generalizable as possible, regardless of research goals,” said Rokem. “In addition, the format is easy for scientists from a variety of backgrounds to use and understand — so that neuroscientists, statisticians and other researchers can collaborate, view data and share methods toward greater reproducibility.”

The idea for the AFQ-Browser came out of a UW course on data visualization, and the researchers worked with several graduate students to develop and perfect the browser. They tested it on existing diffusion-weighted MRI datasets, including research subjects with ALS and MS. In the future, they hope that the AFQ-Browser can be improved to do automated analyses — and possibly even diagnoses — based on diffusion-weighted MRI data.

“AFQ-Browser is really just the start of what could be a number of tools for sharing neuroscience data and experiments,” said Yeatman. “Our goal here is greater reproducibility and transparency, and a more robust scientific process.”

Here are a couple of images the researchers have used to illustrate their work,

AFQ-Browser. Jason Yeatman/Ariel Rokem. Courtesy: University of Washington

Depiction of the left hemisphere of the human brain. Colored regions are selected white matter regions that could be measured using diffusion-weighted MRI: Corticospinal tract (orange), arcuate fasciculus (blue) and cingulum (green). Jason Yeatman/Ariel Rokem

You can find an embedded version of the AFQ-Browser here: http://www.washington.edu/news/2018/03/15/democratizing-science-researchers-make-neuroscience-experiments-easier-to-share-reproduce/ (scroll down about 50 – 55% of the way).

As for the paper, here’s a link and a citation,

A browser-based tool for visualization and analysis of diffusion MRI data by Jason D. Yeatman, Adam Richie-Halford, Josh K. Smith, Anisha Keshavan, & Ariel Rokem. Nature Communications, volume 9, Article number: 940 (2018) doi:10.1038/s41467-018-03297-7 Published online: 05 March 2018

Fittingly, this paper is open access.

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended). Here’s a preview/trailer,

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future although we don’t need to start ‘battening down the hatches’ yet.

You can find Calm here.

You can find Botnik here and Botnik Studios here.


AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles singing in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is an AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelors, worked at a flight simulator company, and then eventually obtained his masters and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she leads the newly formed research team since October 2017.  She got her BSc degree in computer science from the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, one of the current darlings of AI, machine learning, was developed at the University of Toronto which houses the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy and the Vector Institute (more about that in my March 31, 2017 posting).

Enough with my musing: For those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about parent company in the UK and the most I dug up about their Montréal office was this job posting; and, finally, Reimagine.AI is here.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or as I prefer to call it the Third assessment of Canadas S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it).

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40 as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found and still do find that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked.

I appreciate that the panel was constrained by the questions given by the government but given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said, where science is concerned, it’s the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating and I would have liked some information on where (geographically and by area of specialization) most of their answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
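The excerpt defines the Growth Index only verbally, and the report’s exact formula isn’t reproduced here, but one consistent reading is a country’s growth ratio over the period divided by the world’s growth ratio over the same period. A hedged sketch (the numbers below are illustrative, not the report’s data):

```python
def growth_index(country_start, country_end, world_start, world_end):
    """Country's output growth over a period, normalized by world growth.

    Illustrative reading of the report's verbal definition: a score
    above 1.0 means the country grew faster than the world average.
    """
    return (country_end / country_start) / (world_end / world_start)

# If world output doubled between 2003 and 2014 while a country's
# output tripled, its GI is 1.5 -- matching the sense of the report's
# China example (GI = 1.50, i.e., 50% above the world average).
print(growth_index(100, 300, 100, 200))  # 1.5
```

On this reading, Canada’s 0.88 simply means our publication output grew at 88% of the world rate over those years.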

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), as well as GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and when it’s over, other people get access and can innovate further. It’s not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents as they are the only intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
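The ‘patent flow’ metric reads as the normalized gap between patents owned by a country’s assignees and patents invented there. The report’s exact formula isn’t reproduced in the excerpt, so the sketch below (with made-up counts) is just one reading consistent with the figures quoted:

```python
def patent_flow(owned_by_assignees, invented_by_inventors):
    """Negative value = ownership deficit relative to domestic invention.

    Illustrative reading of the report's verbal definition, not the
    report's own formula.
    """
    return (owned_by_assignees - invented_by_inventors) / invented_by_inventors

# 740 Canadian-owned vs. 1,000 Canadian-invented patents gives -0.26,
# the report's 2014 figure (a 26% ownership deficit).
print(round(patent_flow(740, 1000), 2))  # -0.26
```

The same arithmetic turns the 2003 figure of −0.04 into a mere 4% deficit, which is what makes the decade-long slide striking.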

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier they managed to sneak in a few items.  One of the strategies they used was putting special inserts into text boxes including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high- risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa with robot being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too.  ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention, but then they seem to forget that information and don’t use it in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.” Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm. Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….
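As an aside, the “play computer” problem Victor describes is easy to appreciate with the Watts-Strogatz procedure itself. Here is a minimal sketch of that rewiring algorithm in Python; it is my own illustration, not the paper’s reference code or Victor’s redesign, and the function name and parameters are mine:

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Small-world graph sketch: build a ring lattice of n nodes, each
    linked to its k nearest neighbours (k even), then rewire each edge
    with probability p to a uniformly random new endpoint."""
    rng = random.Random(seed)
    # Undirected edges stored as frozensets so each edge is unique.
    edges = set()
    for node in range(n):
        for offset in range(1, k // 2 + 1):
            edges.add(frozenset((node, (node + offset) % n)))
    rewired = set()
    for edge in edges:
        u, v = tuple(edge)  # sketch: endpoint order is arbitrary here
        if rng.random() < p:
            # Pick a new endpoint, avoiding self-loops and duplicates
            # (assumes k is well below n, so a free endpoint exists).
            w = rng.randrange(n)
            while (w == u or frozenset((u, w)) in rewired
                   or frozenset((u, w)) in edges):
                w = rng.randrange(n)
            rewired.add(frozenset((u, w)))
        else:
            rewired.add(edge)
    return rewired
```

Even in a dozen-odd lines, you have to simulate the loop in your head to see why the edge count stays at n·k/2 while the average path length collapses; that mental simulation is exactly what Victor’s interactive diagrams replace.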

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating, in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, perhaps because there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene through a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that British Columbia (BC) has leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print; p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. We also have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers, and we slipped from 7th place to 9th. Thank you, Italy and India. (And Happy ‘Italian Research in the World Day’ on April 15, 2018, its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr an actress, and that they laid the foundation for the work by electrical engineers (or people with that specialty) that led to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel, for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

2017 proceedings for the Canadian Science Policy Conference

I received (via email) a December 11, 2017 notice from the Canadian Science Policy Centre that the 2017 Proceedings for the ninth annual conference (Nov. 1 – 3, 2017 in Ottawa, Canada) can now be accessed,

The Canadian Science Policy Centre is pleased to present you the Proceedings of CSPC 2017. Check out the reports and takeaways for each panel session, which have been carefully drafted by a group of professional writers. You can also listen to the audio recordings and watch the available videos. The proceedings page will provide you with the opportunity to immerse yourself in all of the discussions at the conference. Feel free to share the ones you like! Also, check out the CSPC 2017 reports, analyses, and stats in the proceedings.

Click here for the CSPC 2017 Proceedings

CSPC 2017 Interviews

Take a look at the 70+ one-on-one interviews with prominent figures of science policy. The interviews were conducted by the great team of CSPC 2017 volunteers. The interviews feature in-depth perspectives about the conference, panels, and new up and coming projects.

Click here for the CSPC 2017 interviews

Amongst many others, you can find a video of Governor General Julie Payette’s notorious remarks made at the opening ceremonies and which I highlighted in my November 3, 2017 posting about this year’s conference.

The proceedings are organized by day with links to individual pages for each session held that day. Here’s a sample of what is offered on Day 1: Artificial Intelligence and Discovery Science: Playing to Canada’s Strengths,

Artificial Intelligence and Discovery Science: Playing to Canada’s Strengths

Conference Day:
Day 1 – November 1st 2017

Organized by: Friends of the Canadian Institutes of Health Research

Keynote: Alan Bernstein, President and CEO, CIFAR, 2017 Henry G. Friesen International Prizewinner

Speakers: Brenda Andrews, Director, Andrew’s Lab, University of Toronto; Doina Precup, Associate Professor, McGill University; Dr Rémi Quirion, Chief Scientist of Quebec; Linda Rabeneck, Vice President, Prevention and Cancer Control, Cancer Care Ontario; Peter Zandstra, Director, School of Biomedical Engineering, University of British Columbia

Discussants: Henry Friesen, Professor Emeritus, University of Manitoba; Roderick McInnes, Acting President, Canadian Institutes of Health Research and Director, Lady Davis Institute, Jewish General Hospital, McGill University; Duncan J. Stewart, CEO and Scientific Director, Ottawa Hospital Research Institute; Vivek Goel, Vice President, Research and Innovation, University of Toronto

Moderators: Eric Meslin, President & CEO, Council of Canadian Academies; André Picard, Health Reporter and Columnist, The Globe and Mail

Takeaways and recommendations:

The opportunity for Canada

  • The potential impact of artificial intelligence (AI) could be as significant as the industrial revolution of the 19th century.
  • Canada’s global advantage in deep learning (a subset of machine learning) stems from the pioneering work of Geoffrey Hinton and early support from CIFAR and NSERC.
  • AI could mark a turning point in Canada’s innovation performance, fueled by the highest levels of venture capital financing in nearly a decade, and underpinned by publicly funded research at the federal, provincial and institutional levels.
  • The Canadian AI advantage can only be fully realized by developing and importing skilled talent, accessible markets, capital and companies willing to adopt new technologies into existing industries.
  • Canada leads in the combination of functional genomics and machine learning which is proving effective for predicting the functional variation in genomes.
  • AI promises advances in biomedical engineering by connecting chronic diseases – the largest health burden in Canada – to gene regulatory networks by understanding how stem cells make decisions.
  • AI can be effectively deployed to evaluate health and health systems in the general population.

The challenges

  • AI brings potential ethical and economic perils and requires a watchdog to oversee standards, engage in fact-based debate and prepare for the potential backlash over job losses to robots.
  • The ethical, environmental, economic, legal and social (GE3LS) aspects of genomics have been largely marginalized, and it’s important not to make the same mistake with AI.
  • AI’s rapid scientific development makes it difficult to keep pace with safeguards and standards.
  • The fields of AI and pattern recognition are strongly connected, but there is room for improvement.
  • Self-learning algorithms such as AlphaGo Zero could lead to the invention of new things that humans currently don’t know how to do. The field is developing rapidly, leading to some concern over the deployment of such systems.

Training future AI professionals

  • Young researchers must be given the oxygen to excel at AI if its potential is to be realized.
  • Students appreciate the breadth of training and additional resources they receive from researchers with ties to both academia and industry.
  • The importance of continuing fundamental research in AI is being challenged by companies such as Facebook, Google and Amazon which are hiring away key talent.
  • The explosion of AI is a powerful illustration of how the importance of fundamental research may only be recognized and exploited after 20 or 30 years. As a result, support for fundamental research, and the students working in areas related to AI, must continue.

A couple of comments

To my knowledge, this is the first year the proceedings have been made so easily accessible. In fact, I can’t remember another year where they have been open access. Thank you!

Of course, I have to make a comment about the Day 2 session titled: Does Canada have a Science Culture? The answer is yes and it’s in the province of Ontario. Just take a look at the panel,

Organized by: Kirsten Vanstone, Royal Canadian Institute for Science and Reinhart Reithmeier, Professor, University of Toronto [in Ontario]

Speakers: Chantal Barriault, Director, Science Communication Graduate Program, Laurentian University [in Ontario] and Science North [in Ontario]; Maurice Bitran, CEO, Ontario Science Centre [take a wild guess as to where this institution is located?]; Kelly Bronson, Assistant Professor, Faculty of Social Sciences, University of Ottawa [in Ontario]; Marc LePage, President and CEO, Genome Canada [in Ontario]

Moderator: Ivan Semeniuk, Science Reporter, The Globe and Mail [in Ontario]

In fact, all of the institutions are in Ontario, even the oddly named Science North.

I know from bitter experience it’s hard to put together panels but couldn’t someone from another province have participated?

Ah well, here’s hoping for 2018 and for a new location. After Ottawa as the CSPC site for three years in a row, please don’t make it a fourth year in a row.

Announcing Canada’s Chief Science Advisor: Dr. Mona Nemer

Thanks to the Canadian Science Policy Centre’s September 26, 2017 announcement (received via email) a burning question has been answered,

After great anticipation, Prime Minister Trudeau along with Minister Duncan have announced Canada’s Chief Science Advisor, Dr. Mona Nemer, [emphasis mine]  at a ceremony at the House of Commons. The Canadian Science Policy Centre welcomes this exciting news and congratulates Dr. Nemer on her appointment in this role and we wish her the best in carrying out her duties in this esteemed position. CSPC is looking forward to working closely with Dr. Nemer for the Canadian science policy community. Mehrdad Hariri, CEO & President of the CSPC, stated, “Today’s historic announcement is excellent news for science in Canada, for informed policy-making and for all Canadians. We look forward to working closely with the new Chief Science Advisor.”

In fulfilling our commitment to keep the community up to date and informed regarding science, technology, and innovation policy issues, CSPC has been compiling all news, publications, and editorials in recognition of the importance of the Federal Chief Science Officer as it has been developing, as you may see by clicking here.

We invite your opinions regarding the new Chief Science Advisor, to be published on our CSPC Featured Editorial page. We will publish your reactions on our website, sciencepolicy.ca on our Chief Science Advisor page.

Please send your opinion pieces to editorial@sciencepolicy.ca.

Here are a few (very few) details from the Prime Minister’s (Justin Trudeau) Sept. 26, 2017 press release making the official announcement,

The Government of Canada is committed to strengthen science in government decision-making and to support scientists’ vital work.

In keeping with these commitments, the Prime Minister, Justin Trudeau, today announced Dr. Mona Nemer as Canada’s new Chief Science Advisor, following an open, transparent, and merit-based selection process.  

We know Canadians value science. As the new Chief Science Advisor, Dr. Nemer will help promote science and its real benefits for Canadians—new knowledge, novel technologies, and advanced skills for future jobs. These breakthroughs and new opportunities form an essential part of the Government’s strategy to secure a better future for Canadian families and to grow Canada’s middle class.

Dr. Nemer is a distinguished medical researcher whose focus has been on the heart, particularly on the mechanisms of heart failure and congenital heart diseases. In addition to publishing over 200 scholarly articles, her research has led to new diagnostic tests for heart failure and the genetics of cardiac birth defects. Dr. Nemer has spent more than ten years as the Vice-President, Research at the University of Ottawa, has served on many national and international scientific advisory boards, and is a Fellow of the Royal Society of Canada, a Member of the Order of Canada, and a Chevalier de l’Ordre du Québec.

As Canada’s new top scientist, Dr. Nemer will provide impartial scientific advice to the Prime Minister and the Minister of Science. She will also make recommendations to help ensure that government science is fully available and accessible to the public, and that federal scientists remain free to speak about their work. Once a year, she will submit a report about the state of federal government science in Canada to the Prime Minister and the Minister of Science, which will also be made public.

Quotes

“We have taken great strides to fulfill our promise to restore science as a pillar of government decision-making. Today, we took another big step forward by announcing Dr. Mona Nemer as our Chief Science Advisor. Dr. Nemer brings a wealth of expertise to the role. Her advice will be invaluable and inform decisions made at the highest levels. I look forward to working with her to promote a culture of scientific excellence in Canada.”
— The Rt. Hon. Justin Trudeau, Prime Minister of Canada

“A respect for science and for Canada’s remarkable scientists is a core value for our government. I look forward to working with Dr. Nemer, Canada’s new Chief Science Advisor, who will provide us with the evidence we need to make decisions about what matters most to Canadians: their health and safety, their families and communities, their jobs, environment and future prosperity.”
— The Honourable Kirsty Duncan, Minister of Science

“I am honoured and excited to be Canada’s Chief Science Advisor. I am very pleased to be representing Canadian science and research – work that plays a crucial role in protecting and improving the lives of people everywhere. I look forward to advising the Prime Minister and the Minister of Science and working with the science community, policy makers, and the public to make science part of government policy making.”
— Dr. Mona Nemer, Chief Science Advisor, Canada

Quick Facts

  • Dr. Nemer is also a Knight of the Order of Merit of the French Republic, and has been awarded honorary doctorates from universities in France and Finland.
  • The Office of the Chief Science Advisor will be housed at Innovation, Science and Economic Development and supported by a secretariat.

Nemer’s Wikipedia entry does not provide much additional information, although you can find out a bit more on her University of Ottawa page. Brian Owens, in a Sept. 26, 2017 article for the American Association for the Advancement of Science’s (AAAS) Science Magazine, provides a bit more detail about this newly created office and its budget,

Nemer’s office will have a $2 million budget, and she will report to both Trudeau and science minister Kirsty Duncan. Her mandate includes providing scientific advice to government ministers, helping keep government-funded science accessible to the public, and protecting government scientists from being muzzled.

Ivan Semeniuk’s Sept. 26, 2017 article for the Globe and Mail newspaper about Nemer’s appointment is the most informative (that I’ve been able to find),

Mona Nemer, a specialist in the genetics of heart disease and a long time vice-president of research at the University of Ottawa, has been named Canada’s new chief science advisor.

The appointment, announced Tuesday [Sept. 26, 2017] by Prime Minister Justin Trudeau, comes two years after the federal Liberals pledged to reinstate the position during the last election campaign and nearly a decade after the previous version of the role was cut by then prime minister Stephen Harper.

Dr. Nemer steps into the job of advising the federal government on science-related policy at a crucial time. Following a landmark review of Canada’s research landscape [Naylor report] released last spring, university-based scientists are lobbying hard for Ottawa to significantly boost science funding, one of the report’s key recommendations. At the same time, scientists and science-advocacy groups are increasingly scrutinizing federal actions on a range of sensitive environment and health-related issues to ensure the Trudeau government is making good on promises to embrace evidence-based decision making.

A key test of the position’s relevance for many observers will be the extent to which Dr. Nemer is able to speak her mind on matters where science may run afoul of political expediency.

Born in 1957, Dr. Nemer grew up in Lebanon and pursued an early passion for chemistry at a time and place where women were typically discouraged from entering scientific fields. With Lebanon’s civil war making it increasingly difficult for her to pursue her studies, her family was able to arrange for her to move to the United States, where she completed an undergraduate degree at Wichita State University in Kansas.

A key turning point came in the summer of 1977 when Dr. Nemer took a trip with friends to Montreal. She quickly fell for the city and, in short order, managed to secure acceptance to McGill University, where she received a PhD in 1982. …

It took a lot of searching to find out that Nemer was born in Lebanon and went to the United States first. A lot of immigrants and their families view Canada as a second choice and Nemer and her family would appear to have followed that pattern. It’s widely believed (amongst Canadians too) that the US is where you go for social mobility. I’m not sure if this is still the case but at one point in the 1980s Israel ranked as having the greatest social mobility in the world. Canada came in second while the US wasn’t even third or fourth ranked.

It’s the second major appointment by Justin Trudeau in the last few months to feature a woman who speaks French. The first was Julie Payette, former astronaut and Québecker, as the upcoming Governor General (there’s more detail and a whiff of sad scandal in this Aug. 21, 2017 Canadian Broadcasting Corporation online news item). Now there’s Dr. Mona Nemer who’s lived both in Québec and Ontario. Trudeau and his feminism, eh? Also, his desire to keep Québeckers happy (more or less).

I’m not surprised by the fact that Nemer has been based in Ottawa for several years. I guess they want someone who’s comfortable with the government apparatus although I for one think a little fresh air might be welcome. After all, the Minister of Science, Kirsty Duncan, is from Toronto which between Nemer and Duncan gives us the age-old Canadian government trifecta (geographically speaking), Ottawa-Montréal-Toronto.

Two final comments. First, I am surprised that Duncan did not make the announcement. After all, it was in her 2015 mandate letter. But perhaps Paul Wells in his acerbic June 29, 2017 article for Maclean’s hints at the reason as he discusses the Naylor report (the review of fundamental science mentioned in Semeniuk’s article and for which Nemer is expected to provide advice),

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

Second, our other science minister, Navdeep Bains, Minister of Innovation, Science and Economic Development, does not appear to have been present at the announcement. Quite surprising given where Nemer’s office will be located (from the Quick Facts section of the government’s Sept. 26, 2017 press release): “The Office of the Chief Science Advisor will be housed at Innovation, Science and Economic Development and supported by a secretariat.”

Finally, Wells’ article is well worth reading in its entirety and, for those who are information gluttons, I have a three-part series on the Naylor report, published June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

Ora Sound, a Montréal-based startup, and its ‘graphene’ headphones

For all the excitement about graphene there aren’t that many products as Glenn Zorpette notes in a June 20, 2017 posting about Ora Sound and its headphones on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Graphene has long been touted as a miracle material that would deliver everything from tiny, ultralow-power transistors to the vastly long and ultrastrong cable [PDF] needed for a space elevator. And yet, 13 years of graphene development, and R&D expenditures well in the tens of billions of dollars have so far yielded just a handful of niche products. The most notable by far is a line of tennis racquets in which relatively small amounts of graphene are used to stiffen parts of the frame.

Ora Sound, a Montreal-based [Québec, Canada] startup, hopes to change all that. On 20 June [2017], it unveiled a Kickstarter campaign for a new audiophile-grade headphone that uses cones, also known as membranes, made of a form of graphene. “To the best of our knowledge, we are the first company to find a significant, commercially viable application for graphene,” says Ora cofounder Ari Pinkas, noting that the cones in the headphones are 95 percent graphene.

Kickstarter

It should be noted that participating in a Kickstarter campaign is an investment/gamble. I am not endorsing Ora Sound or its products. That said, this does look interesting (from the ORA: The World’s First Graphene Headphones Kickstarter campaign webpage),

ORA GQ Headphones uses nanotechnology to deliver the most groundbreaking audio listening experience. Scientists have long promised that one day Graphene will find its way into many facets of our lives including displays, electronic circuits and sensors. ORA’s Graphene technology makes it one of the first companies to have created a commercially viable application for this Nobel-prize winning material, a major scientific achievement.

The GQ Headphones come equipped with ORA’s patented GrapheneQ™ membranes, providing unparalleled fidelity. The headphones also offer all the features you would expect from a high-end audio product: wired/wireless operation, a gesture control track-pad, a digital MEMS microphone, breathable lambskin leather and an ear-shaped design optimized for sound quality and isolated comfort.

They have produced a slick video to promote their campaign,

At the time of publishing this post, the campaign will run for another eight days and has raised $650,949 CAD. This is more than $500,000 over the company’s original goal of $135,000. I’m sure they’re ecstatic but this success can be a mixed blessing: they have many more people expecting a set of headphones than they anticipated, and that can mean production issues.

Further, only one member of the team, Ari Pinkas, appears to have business experience, which consists of a few years of marketing strategy and then founding an online marketplace for teachers. I would imagine Pinkas will be experiencing a very steep learning curve. Hopefully, Helge Seetzen, a member of the company’s advisory board, will be able to offer assistance. According to Seetzen’s Wikipedia entry, he is a “… German technologist and businessman known for imaging & multimedia research and commercialization,” as well as having a Canadian educational background and business experience. The rest of the team and advisory board appear to be academics.

The technology

A March 14, 2017 article by Andy Riga for the Montréal Gazette gives a general description of the technology,

A Montreal startup is counting on technology sparked by a casual conversation between two brothers pursuing PhDs at McGill University.

They were chatting about their disparate research areas — one, in engineering, was working on using graphene, a form of carbon, in batteries; the other, in music, was looking at the impact of electronics on the perception of audio quality.

At first glance, the invention that ensued sounds humdrum.

It’s a replacement for an item you use every day. It’s paper thin, you probably don’t realize it’s there and its design has not changed much in more than a century. Called a membrane or diaphragm, it’s the part of a loudspeaker that vibrates to create the sound from the headphones over your ears, the wireless speaker on your desk, the cellphone in your hand.

Membranes are normally made of paper, Mylar or aluminum.

Ora’s innovation uses graphene, a remarkable material whose discovery garnered two scientists the 2010 Nobel Prize in physics but which has yet to fulfill its promise.

“Because it’s so stiff, our membrane gets better sound quality,” said Robert-Eric Gaskell, who obtained his PhD in sound recording in 2015. “It can produce more sound with less distortion, and the sound that you hear is more true to the original sound intended by the artist.

“And because it’s so light, we get better efficiency — the lighter it is, the less energy it takes.”

In January, the company demonstrated its membrane in headphones at the Consumer Electronics Show, a big trade convention in Las Vegas.

Six cellphone manufacturers expressed interest in Ora’s technology, some of which are now trying prototypes, said Ari Pinkas, in charge of product marketing at Ora. “We’re talking about big cellphone manufacturers — big, recognizable names,” he said.

Technology companies are intrigued by the idea of using Ora’s technology to make smaller speakers so they can squeeze other things, such as bigger batteries, into the limited space in electronic devices, Pinkas said. Others might want to use Ora’s membrane to allow their devices to play music louder, he added.

Makers of regular speakers, hearing aids and virtual-reality headsets have also expressed interest, Pinkas said.

Ora is still working on headphones.

Riga’s article offers a good overview for people who are not familiar with graphene.
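Gaskell’s stiffness-and-lightness argument can be made concrete. A common figure of merit for a diaphragm material is the speed of sound in it, √(E/ρ): the stiffer and lighter the material, the higher in frequency the cone can move as a rigid piston before “breakup” distortion sets in. Here’s a minimal sketch ranking the materials mentioned in these articles; the Young’s modulus and density figures are rough textbook values I’ve supplied for illustration, not Ora’s measurements.

```python
# Compare candidate diaphragm materials by the speed of sound in the
# material, c = sqrt(E / rho). Higher c roughly means the cone can play
# higher before breakup distortion. Values are approximate reference
# figures, not measurements from Ora.
from math import sqrt

materials = {
    # name: (Young's modulus in GPa, density in kg/m^3) -- approximate
    "paper":     (2,    800),
    "Mylar":     (4,    1390),
    "aluminum":  (69,   2700),
    "beryllium": (287,  1850),
    "graphene":  (1000, 2267),  # in-plane values for pristine graphene
}

# Sort by specific stiffness (E / rho), lowest to highest.
for name, (E_gpa, rho) in sorted(
        materials.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    c = sqrt(E_gpa * 1e9 / rho)  # speed of sound in m/s
    print(f"{name:10s} c = {c:7.0f} m/s")
```

Even with these coarse numbers, graphene comes out well ahead of beryllium, the usual exotic choice in high-end tweeters, which is the gap the Ora cofounders keep pointing to.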

Zorpette’s June 20, 2017 posting (on Nanoclast) offers a few more technical details (Note: Links have been removed),

During an interview and demonstration in the IEEE Spectrum offices, Pinkas and Robert-Eric Gaskell, another of the company’s cofounders, explained graphene’s allure to audiophiles. “Graphene has the ideal properties for a membrane,” Gaskell says. “It’s incredibly stiff, very lightweight—a rare combination—and it’s well damped,” which means it tends to quell spurious vibrations. By those metrics, graphene soundly beats all the usual choices: mylar, paper, aluminum, or even beryllium, Gaskell adds.

The problem is making it in sheets large enough to fashion into cones. So-called “pristine” graphene exists as flakes, [emphasis mine] perhaps 10 micrometers across, and a single atom thick. To make larger, strong sheets of graphene, researchers attach oxygen atoms to the flakes, and then other elements to the oxygen atoms to cross-link the flakes and hold them together strongly in what materials scientists call a laminate structure. The intellectual property behind Ora’s advance came from figuring out how to make these structures suitably thick and in the proper shape to function as speaker cones, Gaskell says. In short, he explains, the breakthrough was, “being able to manufacture” in large numbers, “and in any geometry we want.”

Much of the R&D work that led to Ora’s process was done at nearby McGill University, by professor Thomas Szkopek of the Electrical and Computer Engineering department. Szkopek worked with Peter Gaskell, Robert-Eric’s younger brother. Ora is also making use of patents that arose from work done on graphene by the Nguyen Group at Northwestern University, in Evanston, Ill.

Robert-Eric Gaskell and Pinkas arrived at Spectrum with a preproduction model of their headphones, as well as some other headphones for the sake of comparison. The Ora prototype is clearly superior to the comparison models, but that’s not much of a surprise. …

… In the 20 minutes or so I had to audition Ora’s preproduction model, I listened to an assortment of classical and jazz standards and I came away impressed. The sound is precise, with fine details sharply rendered. To my surprise, I was reminded of planar-magnetic type headphones that are now surging in popularity in the upper reaches of the audiophile headphone market. Bass is smooth and tight. Overall, the unit holds up quite well against closed-back models in the $400 to $500 range I’ve listened to from Grado, Bowers & Wilkins, and Audeze.

Ora’s Kickstarter campaign page (Graphene vs GrapheneQ subsection) offers some information about their unique graphene composite,

A TECHNICAL INTRODUCTION TO GRAPHENE

Graphene is a new material, first isolated only 13 years ago. Formed from a single layer of carbon atoms, Graphene is a hexagonal crystal lattice in a perfect honeycomb structure. This fundamental geometry makes Graphene ridiculously strong and lightweight. In its pure form, Graphene is a single atomic layer of carbon. It can be very expensive and difficult to produce in sizes any bigger than small flakes. These challenges have prevented pristine Graphene from being integrated into consumer technologies.

THE GRAPHENEQ™ SOLUTION

At ORA, we’ve spent the last few years creating GrapheneQ, our own, proprietary Graphene-based nanocomposite formulation. We’ve specifically designed and optimized it for use in acoustic transducers. GrapheneQ is a composite material which is over 95% Graphene by weight. It is formed by depositing flakes of Graphene into thousands of layers that are bonded together with proprietary cross-linking agents. Rather than trying to form one, continuous layer of Graphene, GrapheneQ stacks flakes of Graphene together into a laminate material that preserves the benefits of Graphene while allowing the material to be formed into loudspeaker cones.

Scanning Electron Microscope (SEM) Comparison

If you’re interested in more technical information on sound, acoustics, loudspeakers, and Ora’s graphene-based headphones, it’s all there on Ora’s Kickstarter campaign page.

The Québec nanotechnology scene in context and graphite flakes for graphene

There are two Canadian provinces that are heavily invested in nanotechnology research and commercialization efforts. The province of Québec has poured money into its nanotechnology efforts, while the province of Alberta, which has also invested heavily, managed to snare additional federal funds to host Canada’s National Institute for Nanotechnology (NINT). (This appears to be a current NINT website or you can try this one on the National Research Council website.) I’d rank Ontario as a third centre, with the other provinces being considerably less invested. As for the North, I’ve not come across any nanotechnology research from that region. Finally, I stumble across more material about nanotechnology in Québec than I do for any other province, which is why I rate Québec as the most successful in its efforts.

Regarding graphene, Canada seems to have an advantage. We have great graphite flakes for making graphene. With mines in at least two provinces, Ontario and Québec, we have a ready source of supply. In my first posting (July 25, 2011) about graphite mines here, I had this,

Who knew large flakes could be this exciting? From the July 25, 2011 news item on Nanowerk,

Northern Graphite Corporation has announced that graphene has been successfully made on a test basis using large flake graphite from the Company’s Bissett Creek project in Northern Ontario. Northern’s standard 95%C, large flake graphite was evaluated as a source material for making graphene by an eminent professor in the field at the Chinese Academy of Sciences who is doing research making graphene sheets larger than 30 cm² in size using the graphene oxide methodology. The tests indicated that graphene made from Northern’s jumbo flake is superior to Chinese powder and large flake graphite in terms of size, higher electrical conductivity, lower resistance and greater transparency.

Approximately 70% of production from the Bissett Creek property will be large flake (+80 mesh) and almost all of this will in fact be +48 mesh jumbo flake which is expected to attract premium pricing and be a better source material for the potential manufacture of graphene. The very high percentage of large flakes makes Bissett Creek unique compared to most graphite deposits worldwide which produce a blend of large, medium and small flakes, as well as a large percentage of low value -150 mesh flake and amorphous powder which are not suitable for graphene, Li ion batteries or other high end, high growth applications.
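For readers unfamiliar with the mesh numbers in the quote above: they refer to sieve sizes, so “+48 mesh” means flakes big enough to be retained on a 48-mesh sieve (jumbo flakes) while “-150 mesh” means material fine enough to pass through a 150-mesh sieve. Here’s a minimal sketch of the bands Northern Graphite mentions, using approximate Tyler sieve openings; the category labels are my gloss on the press release, not the company’s definitions.

```python
# Approximate Tyler sieve openings in microns (standard reference values).
TYLER_OPENING_UM = {48: 295, 80: 177, 150: 104}

def flake_category(size_um: float) -> str:
    """Classify a graphite flake by the mesh bands mentioned above."""
    if size_um >= TYLER_OPENING_UM[48]:
        return "+48 mesh (jumbo flake)"
    if size_um >= TYLER_OPENING_UM[80]:
        return "+80 mesh (large flake)"
    if size_um > TYLER_OPENING_UM[150]:
        return "medium/small flake"
    return "-150 mesh (fines)"

print(flake_category(300))  # a 300-micron flake counts as jumbo
```

The point of the press release, in these terms, is that most of Bissett Creek’s output lands in the top band, the one said to command premium pricing and suit graphene production.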

Since then I’ve stumbled across more information about Québec’s mines than Ontario’s, as can be seen:

There are some other mentions of graphite mines in other postings but they are tangential to what’s being featured:

  • my Oct. 26, 2015 posting about St. Jean Carbon and its superconducting graphene;
  • my Feb. 20, 2015 posting about Nanoxplore and graphene production in Québec; and
  • my Feb. 23, 2015 posting about Grafoid and its sister company, Focus Graphite, which gets its graphite flakes from a deposit in the northeastern part of Québec.

After reviewing these posts, I’ve begun to wonder where Ora’s graphite flakes come from. In any event, I wish the folks at Ora and their Kickstarter funders the best of luck.