Category Archives: science

Dancing quantum entanglement (April 20 – 22, 2017) and performing mathematics (April 26 – 30, 2017) in Vancouver, Canada

I have listings for two art/science events in Vancouver (Canada).

Dance, poetry and quantum entanglement

From April 20, 2017 (tonight) – April 22, 2017, there will be 8 p.m. performances of Lesley Telford’s ‘Three Sets/Relating At A Distance; My tongue, your ear / If / Spooky Action at a Distance (phase 1)’ at the Scotiabank Dance Centre, 677 Davie St. Yes, that third title is a reference to Einstein’s famous phrase describing his response to the concept of quantum entanglement.

An April 19, 2017 article by Janet Smith for the Georgia Straight features the dancer’s description of the upcoming performances,

One of the clearest definitions of quantum entanglement—a phenomenon Albert Einstein dubbed “spooky action at a distance”—can be found in a vampire movie.

In Jim Jarmusch’s Only Lovers Left Alive Tom Hiddleston’s depressed rock-star bloodsucker explains it this way to Tilda Swinton’s Eve, his centuries-long partner: “When you separate an entwined particle and you move both parts away from the other, even at opposite ends of the universe, if you alter or affect one, the other will be identically altered or affected.”

In fact, it was by watching the dark love story that Vancouver dance artist Lesley Telford learned about quantum entanglement—in which particles are so closely connected that they cannot act independently of one another, no matter how much space lies between them. She became fascinated not just with the scientific possibilities of the concept but with the romantic ones. …

 “I thought, ‘What a great metaphor,’ ” the choreographer tells the Straight over sushi before heading into a Dance Centre studio. “It’s the idea of quantum entanglement and how that could relate to human entanglement.…It’s really a metaphor for human interactions.”

First, though, as is so often the case with Telford, she needed to form those ideas into words. So she approached poet Barbara Adler to talk about the phenomenon, and then to have her build poetry around it—text that the writer will perform live in Telford’s first full evening of work here.

“Barbara talked a lot about how you feel this resonance with people that have been in your life, and how it’s tied into romantic connections and love stories,” Telford explains. “As we dig into it, it’s become less about that and more of an underlying vibration in the work; it feels like we’ve gone beyond that starting point.…I feel like she has a way of making it so down-to-earth and it’s given us so much food to work with. Are we in control of the universe or is it in control of us?”

Spooky Action at a Distance, a work for seven dancers, ends up being a string of duets that weave—entangle—into other duets. …

There’s more information about the performance, which concerns itself with more than quantum entanglement, on the Scotiabank Dance Centre’s event webpage,

Lesley Telford’s choreography brings together a technically rigorous vocabulary and a thought-provoking approach, refined by her years dancing with Nederlands Dans Theater and creating for companies at home and abroad, most recently Ballet BC. This triple bill features an excerpt of a new creation inspired by Einstein’s famous phrase “spooky action at a distance”, referring to particles that are so closely linked, they share the same existence: a collaboration with poet Barbara Adler, the piece seeks to extend the theory to human connections in our phenomenally interconnected world. The program also includes a new extended version of If, a trio based on Anne Carson’s poem, and the duet My tongue, your ear, with text by Wislawa Szymborska.

Here’s what appears to be an excerpt from a rehearsal for ‘Spooky Action …’,

I’m not super fond of the atonal music/sound they’re using. The voice you hear is Adler’s, and here’s more about Barbara Adler from her Wikipedia entry (Note: Links have been removed),

Barbara Adler is a musician, poet, and storyteller based in Vancouver, British Columbia. She is a past Canadian Team Slam Champion, was a founding member of the Vancouver Youth Slam, and a past CBC Poetry Face Off winner.[1]

She was a founding member of the folk band The Fugitives with Brendan McLeod, C.R. Avery and Mark Berube[2][3] until she left the band in 2011 to pursue other artistic ventures. She was a member of the accordion shout-rock band Fang, later Proud Animal, and works under the pseudonym Ten Thousand Wolves.[4][5][6][7][8]

In 2004 she participated in the inaugural Canadian Festival of Spoken Word, winning the Spoken Wordlympics with her fellow team members Shane Koyczan, C.R. Avery, and Brendan McLeod.[9][10] In 2010 she started on The BC Memory Game, a traveling storytelling project based on the game of memory[11] and has also been involved with the B.C. Schizophrenia Society Reach Out Tour for several years.[12][13][14] She is of Czech-Jewish descent.[15][16]

Barbara Adler has her bachelor’s degree and MFA from Simon Fraser University, with a focus on songwriting, storytelling, and community engagement.[17][18] In 2015 she was a co-star in the film Amerika, directed by Jan Foukal,[19][20] which premiered at the Karlovy Vary International Film Festival.[21]

Finally, Telford is Artist in Residence at the Dance Centre and TRIUMF, Canada’s national laboratory for particle and nuclear physics and accelerator-based science.

To buy tickets ($32 or less with a discount), go here. Telford will be present on April 21, 2017 for a post-show talk.

Pi Theatre’s ‘Long Division’

This theatrical performance of concepts in mathematics runs from April 26 – 30, 2017 (check here for the times as they vary) at the Annex at 823 Seymour St. From the Georgia Straight’s April 12, 2017 Arts notice,

Mathematics is an art form in itself, as proven by Pi Theatre’s number-charged Long Division. This is a “refreshed remount” of Peter Dickinson’s ambitious work, one that circles around seven seemingly unrelated characters (including a high-school math teacher, a soccer-loving imam, and a lesbian bar owner) bound together by a single traumatic incident. Directed by Richard Wolfe, with choreography by Lesley Telford and musical score by Owen Belton, it’s a multimedia, movement-driven piece that has a strong cast. …

Here’s more about the play from Pi Theatre’s Long Division page,

Long Division uses text, multimedia, and physical theatre to create a play about the mathematics of human connection.

Long Division focuses on seven characters linked – sometimes directly, sometimes more obliquely – by a sequence of tragic events. These characters offer lessons on number theory, geometry and logic, while revealing aspects of their inner lives, and collectively the nature of their relationships to one another.

Playwright: Peter Dickinson
Director: Richard Wolfe
Choreographer: Lesley Telford, Inverso Productions
Composer: Owen Belton
Assistant Director: Keltie Forsyth

Cast: Anousha Alamian, Jay Clift, Nicco Lorenzo Garcia, Jennifer Lines, Melissa Oei, Linda Quibell & Kerry Sandomirsky

Costume Designer: Connie Hosie
Lighting Designer: Jergus Oprsal
Set Designer: Lauchlin Johnston
Projection Designer: Jamie Nesbitt
Production Manager: Jayson Mclean
Stage Manager: Jethelo E. Cabilete
Assistant Projection Designer: Cameron Fraser
Lighting Design Associate: Jeff Harrison

Dates/Times: April 26 – 29 at 8pm, April 29 and 30 at 2pm
Student performance on April 27 at 1pm

A Talk-Back will take place after the 2pm show on April 29th.

Shawn Conner engaged the playwright, Peter Dickinson, in an April 20, 2017 Q&A (question and answer) for the Vancouver Sun,

Q: Had you been working on Long Division for a long time?

A: I’d been working on it for about five years. I wrote a previous play called The Objecthood of Chairs, which has a similar style in that I combine lecture performance with physical and dance theatre. There are movement scores in both pieces.

In that first play, I told the story of two men and their relationship through the history of chair design. It was a combination of mining my research about that and trying to craft a story that was human and where the audience could find a way in. When I was thinking about a subject for a new play, I took the profession of one of the characters in that first play, who was a math teacher, and said, “Let’s see what happens to his character, let’s see where he goes after the breakup of his relationship.”

At first, I wrote it (Long Division) in an attempt at completely real, kitchen-sink naturalism, and it was a complete disaster. So I went back into this lecture-style performance.

Q: Long Division is set in a bar. Is the setting left over from that attempt at realism?

A: I guess so. It’s kind of a meta-theatrical play in the sense that the characters address the audience, and they’re aware they’re in a theatrical setting. One of the characters is an actress, and she comments on the connection between mathematics and theatre.

Q: This is being called a “refreshed” remount. What’s changed since its first run?

A: It’s mostly been cuts, and some massaging of certain sections. And I think it’s a play that actually needs a little distance.

Like mathematics, the patterns only reveal themselves at a remove. I think I needed that distance to see where things were working and where they could be better. So it’s a gift for me to be given this opportunity, to make things pop a little more and to make the math, which isn’t meant to be difficult, more understandable and relatable.

You may have noticed that Lesley Telford from Spooky Action is also choreographer for this production. I gather she’s making a career of art/science pieces, at least for now.

In the category of ‘Vancouver being a small town’, Telford lists a review of one of her pieces, ‘AUDC’s Season Finale at The Playhouse’, on her website. Intriguingly, the reviewer is Peter Dickinson who, in addition to being the playwright with whom she has collaborated for Pi Theatre’s ‘Long Division’, is also the Director of SFU’s (Simon Fraser University’s) Institute for Performance Studies. I wonder in how many more ways these two crisscross professionally. Personally, and for what it’s worth, it might be a good idea for Telford (and Dickinson, if he hasn’t already done so) to make readers aware of their professional connections when there’s a review at stake.

Final comment: I’m not sure how closely quantum entanglement or mathematics will be reflected in the pieces attributed to concepts from those fields, but I’m sure anyone attempting to make the links will find themselves stimulated.

ETA April 21, 2017: I’m adding this event even though the tickets are completely subscribed. There will be a standby line the night of the event. From the Peter Wall Institute for Advanced Studies’ The Hidden Beauty of Mathematics event page,

02 May 2017

7:00 pm (doors open at 6:00 pm)

The Vogue Theatre

918 Granville St.

Vancouver, BC

Register

Good luck!

Why are jokes funny? There may be a quantum explanation

Some years ago a friend who’d attended a conference on humour told me I really shouldn’t talk about humour until I had a degree in the topic. I decided the best way to deal with that piece of advice was to avoid all mention of any theories about humour to that friend. I’m happy to say the strategy has worked well, although this latest research may allow me to broach the topic once again. From a March 17, 2017 Frontiers (publishing) news release on EurekAlert (Note: A link has been removed),

Why was 6 afraid of 7? Because 789. Whether this pun makes you giggle or groan in pain, your reaction is a consequence of the ambiguity of the joke. Thus far, models have not been able to fully account for the complexity of humor or exactly why we find puns and jokes funny, but a research article recently published in Frontiers in Physics suggests a novel approach: quantum theory.

By the way, it took me forever to get the joke. I always blame these things on the fact that I learned French before English (although English is now my strongest language). So, for anyone who may not immediately grasp the pun: Why was 6 afraid of 7? Because 7 8 (ate) 9.

This news release was posted by Anna Sigurdsson on March 22, 2017 on the Frontiers blog,

Aiming to answer the question of what kind of formal theory is needed to model the cognitive representation of a joke, researchers suggest that a quantum theory approach might be a contender. In their paper, they outline a quantum inspired model of humor, hoping that this new approach may succeed at a more nuanced modeling of the cognition of humor than previous attempts and lead to the development of a full-fledged, formal quantum theory model of humor. This initial model was tested in a study where participants rated the funniness of verbal puns, as well as the funniness of variants of these jokes (e.g. the punchline on its own, the set-up on its own). The results indicate that apart from the delivery of information, something else is happening on a cognitive level that makes the joke as a whole funny whereas its deconstructed components are not, and which makes a quantum approach appropriate to study this phenomenon.

For decades, researchers from a range of different fields have tried to explain the phenomenon of humor and what happens on a cognitive level in the moment when we “get the joke”. Even within the field of psychology, the topic of humor has been studied using many different approaches, and although the last two decades have seen an upswing of the application of quantum models to the study of psychological phenomena, this is the first time that a quantum theory approach has been suggested as a way to better understand the complexity of humor.

Previous computational models of humor have suggested that the funny element of a joke may be explained by a word’s ability to hold two different meanings (bisociation), and the existence of multiple, but incompatible, ways of interpreting a statement or situation (incongruity). During the build-up of the joke, we interpret the situation one way, and once the punch line comes, there is a shift in our understanding of the situation, which gives it a new meaning and creates the comical effect.

However, the authors argue that it is not the shift of meaning, but rather our ability to perceive both meanings simultaneously, that makes a pun funny. This is where a quantum approach might be able to account for the complexity of humor in a way that earlier models cannot. “Quantum formalisms are highly useful for describing cognitive states that entail this form of ambiguity,” says Dr. Liane Gabora from the University of British Columbia, corresponding author of the paper. “Funniness is not a pre-existing ‘element of reality’ that can be measured; it emerges from an interaction between the underlying nature of the joke, the cognitive state of the listener, and other social and environmental factors. This makes the quantum formalism an excellent candidate for modeling humor,” says Dr. Liane Gabora.

Although much work and testing remains before the completion of a formal quantum theory model of humor to explain the cognitive aspects of reacting to a pun, these first findings provide an exciting first step and opens for the possibility of a more nuanced modeling of humor. “The cognitive process of “getting” a joke is a difficult process to model, and we consider the work in this paper to be an early first step toward an eventually more comprehensive theory of humor that includes predictive models. We believe that the approach promises an exciting step toward a formal theory of humor, and that future research will build upon this modest beginning,” concludes Dr. Liane Gabora.
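For readers who like to tinker, here’s a tiny numerical sketch (my own toy illustration, not anything from the paper) of the core idea: a pun held in superposition of its two meanings picks up an interference term that a classical 50/50 mixture of the meanings lacks. The ‘funny_axis’ measurement direction is entirely my invention for illustration,

import numpy as np

# Two orthogonal "meaning" states for the pun's ambiguous punchline,
# e.g., "789" heard as plain digits vs. "seven ate nine".
meaning_a = np.array([1.0, 0.0])
meaning_b = np.array([0.0, 1.0])

# Quantum-style joke state: an equal superposition of both meanings.
joke = (meaning_a + meaning_b) / np.sqrt(2)

# Hypothetical "funniness" measurement that rewards perceiving
# both meanings at once (my assumption, purely for illustration).
funny_axis = (meaning_a + meaning_b) / np.sqrt(2)

# Superposition: squared amplitude includes an interference (cross) term.
p_superposition = np.dot(funny_axis, joke) ** 2  # = 1.0

# Classical mixture: the listener holds one meaning OR the other,
# so the cross term is absent.
p_mixture = 0.5 * np.dot(funny_axis, meaning_a) ** 2 \
          + 0.5 * np.dot(funny_axis, meaning_b) ** 2  # = 0.5

print(p_superposition, p_mixture)  # 1.0 vs 0.5

The gap between the two numbers is the interference term, a crude stand-in for the ‘something else happening on a cognitive level’ that the quantum formalism is meant to capture.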

Here’s a link to and a citation for the paper,

Toward a Quantum Theory of Humor by Liane Gabora and Kirsty Kitto. Front. Phys., 26 January 2017 | https://doi.org/10.3389/fphy.2016.00053

This paper has been published in an open access journal. In viewing the acknowledgements at the end of the paper, I found what seemed to me a surprising funding agency,

This work was supported by a grant (62R06523) from the Natural Sciences and Engineering Research Council of Canada. We are grateful to Samantha Thomson who assisted with the development of the questionnaire and the collection of the data for the study reported here.

While I’m at this, I might as well mention that Kirsty Kitto is from the Queensland University of Technology (QUT) in Australia and, for those unfamiliar with the geography, the University of British Columbia is in the Canadian province of British Columbia.

Formation of a time (temporal) crystal

It’s a crystal arranged in time according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert; Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case, from a diamond.
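If the piano analogy doesn’t do it for you, here’s a trivial simulation sketch (mine, not the researchers’) of the period doubling Potter describes: a drive that flips the spins once per pulse produces a spin pattern that repeats only every two pulses, i.e., at half the drive frequency,

# Toy cartoon of discrete time-crystal period doubling (my sketch,
# not the Maryland/UT Austin experiment).
n_pulses = 8
spin = +1  # collective spin orientation: up (+1) or down (-1)

history = []
for pulse in range(n_pulses):
    spin = -spin          # each laser pulse flips the spins head over heels
    history.append(spin)

print(history)  # [-1, 1, -1, 1, ...]: the pattern repeats every 2 pulses,
                # half as fast as the once-per-pulse drive

The real experiment is vastly subtler (interactions among the ions stabilize the halved response even when the pulses are imperfect), but the halved repetition rate is the signature in both cases.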

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

3D printing with cellulose

The scientists seem quite excited about their work with 3D printing and cellulose. From a March 3, 2017 MIT (Massachusetts Institute of Technology) news release (also on EurekAlert),

For centuries, cellulose has formed the basis of the world’s most abundantly printed-on material: paper. Now, thanks to new research at MIT, it may also become an abundant material to print with — potentially providing a renewable, biodegradable alternative to the polymers currently used in 3-D printing materials.

“Cellulose is the most abundant organic polymer in the world,” says MIT postdoc Sebastian Pattinson, lead author of a paper describing the new system in the journal Advanced Materials Technologies. The paper is co-authored by associate professor of mechanical engineering A. John Hart, the Mitsui Career Development Professor in Contemporary Technology.

Cellulose, Pattinson explains, is “the most important component in giving wood its mechanical properties. And because it’s so inexpensive, it’s biorenewable, biodegradable, and also very chemically versatile, it’s used in a lot of products. Cellulose and its derivatives are used in pharmaceuticals, medical devices, as food additives, building materials, clothing — all sorts of different areas. And a lot of these kinds of products would benefit from the kind of customization that additive manufacturing [3-D printing] enables.”

Meanwhile, 3-D printing technology is rapidly growing. Among other benefits, it “allows you to individually customize each product you make,” Pattinson says.

Using cellulose as a material for additive manufacturing is not a new idea, and many researchers have attempted this but faced major obstacles. When heated, cellulose thermally decomposes before it becomes flowable, partly because of the hydrogen bonds that exist between the cellulose molecules. The intermolecular bonding also makes high-concentration cellulose solutions too viscous to easily extrude.

Instead, the MIT team chose to work with cellulose acetate — a material that is easily made from cellulose and is already widely produced and readily available. Essentially, the number of hydrogen bonds in this material has been reduced by the acetate groups. Cellulose acetate can be dissolved in acetone and extruded through a nozzle. As the acetone quickly evaporates, the cellulose acetate solidifies in place. A subsequent optional treatment replaces the acetate groups and increases the strength of the printed parts.

“After we 3-D print, we restore the hydrogen bonding network through a sodium hydroxide treatment,” Pattinson says. “We find that the strength and toughness of the parts we get … are greater than many commonly used materials” for 3-D printing, including acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA).

To demonstrate the chemical versatility of the production process, Pattinson and Hart added an extra dimension to the innovation. By adding a small amount of antimicrobial dye to the cellulose acetate ink, they 3-D-printed a pair of surgical tweezers with antimicrobial functionality.

“We demonstrated that the parts kill bacteria when you shine fluorescent light on them,” Pattinson says. Such custom-made tools “could be useful for remote medical settings where there’s a need for surgical tools but it’s difficult to deliver new tools as they break, or where there’s a need for customized tools. And with the antimicrobial properties, if the sterility of the operating room is not ideal the antimicrobial function could be essential,” he says.

Because most existing extrusion-based 3-D printers rely on heating polymer to make it flow, their production speed is limited by the amount of heat that can be delivered to the polymer without damaging it. This room-temperature cellulose process, which simply relies on evaporation of the acetone to solidify the part, could potentially be faster, Pattinson says. And various methods could speed it up even further, such as laying down thin ribbons of material to maximize surface area, or blowing hot air over it to speed evaporation. A production system would also seek to recover the evaporated acetone to make the process more cost effective and environmentally friendly.

Cellulose acetate is already widely available as a commodity product. In bulk, the material is comparable in price to that of thermoplastics used for injection molding, and it’s much less expensive than the typical filament materials used for 3-D printing, the researchers say. This, combined with the room-temperature conditions of the process and the ability to functionalize cellulose in a variety of ways, could make it commercially attractive.

Here’s a link to and a citation for the paper,

Additive Manufacturing of Cellulosic Materials with Robust Mechanics and Antimicrobial Functionality by Sebastian W. Pattinson and A. John Hart. Advanced Materials Technologies DOI: 10.1002/admt.201600084 Version of Record online: 30 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy portion of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people out of crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between the US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as a standard. Television is a classic example but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now we are seeing the prospect of dark factories, which can run without lights because there are no people in them, are starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
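To put those percentages into concrete (and entirely hypothetical) terms, here’s some back-of-the-envelope arithmetic of mine, not the study’s,

# Rough arithmetic (mine, not from the NBER working paper) translating
# the study's headline numbers for a hypothetical local labour market.
workers = 100_000                        # hypothetical labour market size
drop_low, drop_high = 0.0018, 0.0034     # 0.18 - 0.34 percentage points
                                         # per robot per thousand workers

# Adding one robot per thousand workers (100 robots here):
jobs_lost_low = workers * drop_low       # 180 jobs
jobs_lost_high = workers * drop_high     # 340 jobs
print(jobs_lost_low, jobs_lost_high)

In other words, roughly 180 to 340 fewer employed people for every additional robot per thousand workers in a town of 100,000, before counting any wage effects.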

I can’t help wondering: if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, is that representative of other top-tier officials such as the Secretary of Defense, Secretary of Labor, etc.? What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research which contradicts or makes a top-tier Trump government official look foolish? Will it be suppressed?

You can find the report “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo. NBER (US National Bureau of Economic Research) WORKING PAPER SERIES (Working Paper 23285) released March 2017 here. The introduction featured some new information for me; the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way) where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Entangling a single photon with a trillion atoms

Polish scientists have cast light on an eighty-year-old ‘paradox’ according to a March 2, 2017 news item on phys.org,

A group of researchers from the Faculty of Physics at the University of Warsaw has shed new light on the famous paradox of Einstein, Podolsky and Rosen after 80 years. They created a multidimensional entangled state of a single photon and a trillion hot rubidium atoms, and stored this hybrid entanglement in the laboratory for several microseconds. …

In their famous Physical Review article, published in 1935, Einstein, Podolsky and Rosen considered the decay of a particle into two products. In their thought experiment, two products of decay were projected in exactly opposite directions—or more scientifically speaking, their momenta were anti-correlated. Though not a mystery within the framework of classical physics, when applying the rules of quantum theory, the three researchers arrived at a paradox. The Heisenberg uncertainty principle, dictating that position and momentum of a particle cannot be measured at the same time, lies at the center of this paradox. In Einstein’s thought experiment, it is possible to measure the momentum of one particle and immediately know the momentum of the other without measurement, as it is exactly opposite. Then, by measuring the position of the second particle, the Heisenberg uncertainty principle is seemingly violated, an apparent paradox that seriously baffled the three physicists.
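For anyone who wants to see the mathematics, the idealized two-particle state Einstein, Podolsky and Rosen had in mind is often written as a sum over perfectly anti-correlated momenta. This is a standard textbook rendering, not notation from the article,

% Idealized EPR state: the two decay products carry exactly opposite
% momenta, so measuring momentum p on particle 1 means particle 2
% must have momentum -p, with no need to disturb it.
\[
  |\Psi\rangle_{\mathrm{EPR}} = \int \! dp \; |p\rangle_{1}\, |{-p}\rangle_{2}
\]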

A March 2, 2017 University of Warsaw press release (also on EurekAlert), which originated the news item, expands on the topic,

Only today do we know that this experiment is not, in fact, a paradox. The mistake of Einstein and co-workers was to apply the one-particle uncertainty principle to a system of two particles. If we treat these two particles as described by a single quantum state, we learn that the original uncertainty principle ceases to apply, especially if these particles are entangled.

In the Quantum Memories Laboratory at the University of Warsaw, the group of three physicists was first to create such an entangled state consisting of a macroscopic object – a group of about one trillion atoms, and a single photon – a particle of light. “Single photons, scattered during the interaction of a laser beam with atoms, are registered on a sensitive camera. A single registered photon carries information about the quantum state of the entire group of atoms. The atoms may be stored, and their state may be retrieved on demand.” – says Michal Dabrowski, PhD student and co-author of the article.

The results of the experiment confirm that the atoms and the single photon are in a joint, entangled state. By measuring position and momentum of the photon, we gain all information about the state of atoms. To confirm this, Polish scientists convert the atomic state into another photon, which again is measured using the state-of-the-art camera developed in the Quantum Memories Laboratory. “We demonstrate the Einstein-Podolsky-Rosen apparent paradox in a very similar version as originally proposed in 1935, however we extend the experiment by adding storage of light within the large group of atoms. Atoms store the photon in a form of a wave made of atomic spins, containing one trillion atoms. Such a state is very robust against loss of a single atom, as information is spread across so many particles.” – says Michal Parniak, PhD student taking part in the study.

The experiment performed by the group from the University of Warsaw is unique in one other way as well. The quantum memory storing the entangled state, created thanks to a “PRELUDIUM” grant from Poland’s National Science Centre and a “Diamentowy Grant” from the Polish Ministry of Science and Higher Education, allows for storage of up to 12 photons at once. This enhanced capacity is promising in terms of applications in quantum information processing. “The multidimensional entanglement is stored in our device for several microseconds, which is roughly a thousand times longer than in any previous experiments, and at the same time long enough to perform subtle quantum operations on the atomic state during storage,” explains Dr. Wojciech Wasilewski, group leader of the Quantum Memories Laboratory team.

The entanglement in real and momentum space, described in the Optica article, can be used jointly with other well-known degrees of freedom, such as polarization, allowing generation of so-called hyper-entanglement. Such elaborate ideas constitute a new and original test of the fundamentals of quantum mechanics – a theory that is unceasingly mysterious yet brings immense technological progress.

Here’s a link to and a citation for the paper,

Einstein–Podolsky–Rosen paradox in a hybrid bipartite system by Michał Dąbrowski, Michał Parniak, and Wojciech Wasilewski. Optica Vol. 4, Issue 2, pp. 272-275 (2017) https://doi.org/10.1364/OPTICA.4.000272

This paper appears to be open access.

ArtSci salon at the University of Toronto opens its Cabinet Project on April 6, 2017

I announced The Cabinet Project in a Sept. 1, 2016 posting,

The ArtSci Salon; A Hub for the Arts & Science communities in Toronto and Beyond is soliciting proposals for ‘The Cabinet Project; An artsci exhibition about cabinets’ to be held March 30 – May 1, 2017 at the University of Toronto in a series of ‘science cabinets’ found around campus,

Despite being in full sight, many cabinets and showcases at universities and scientific institutions lie empty or underutilized. Located at the entrance of science departments, in proximity of laboratories, or in busy areas of transition, some contain outdated posters, or dusty scientific objects that have been forgotten there for years. Others lie empty, like old furniture on the curb after a move, waiting for a lucky passer-by in need. The ceaseless flow of bodies walking past these cabinets – some running to meetings, some checking their schedule, some immersed in their thoughts – rarely pays attention to them.

My colleague and I made a submission, which was not accepted (drat). In any event, I was somewhat curious as to which proposals had been successful. Here they are in a March 24, 2017 ArtSci Salon notice (received via email),

Join us for the opening of
The Cabinet Project
on April 6, 2017

* 4:00 PM Introduction and dry reception – THE FIELDS INSTITUTE FOR RESEARCH IN MATHEMATICAL SCIENCES

* 4:30 – 6:30 Tour of the Exhibition with the artists
* 6:30 – 9:00 Reception at VICTORIA COLLEGE

All Welcome
You can join at any time during the tour

More information can be found at
http://artscisalon.com/the-cabinet-project

RSVP Here

About The Cabinet Project

The Cabinet Project is a distributed exhibition bringing to life historical, anecdotal and imagined stories evoked by scientific objects, their surrounding spaces and the individuals inhabiting them. The goal is to make the intense creativity existing inside science laboratories visible, and to suggest potential interactions between the sciences and the arts. To achieve this goal, 12 artists have turned 10 cabinets across the University of Toronto into art installations.

Featuring works by: Catherine Beaudette; Nina Czegledy; Dave Kemp & Jonathon Anderson; Joel Ong & Mick Lorusso; Microcollection; Nicole Clouston; Nicole Liao; Rick Hyslop; Stefan Herda; Stefanie Kuzmiski

You can find out about the project, the artists, the program, and more on The Cabinet Project webpage here.

Revisiting the scientific past for new breakthroughs

A March 2, 2017 article on phys.org features a thought-provoking (and, for some of us, confirming) take on scientific progress (Note: Links have been removed),

The idea that science isn’t a process of constant progress might make some modern scientists feel a bit twitchy. Surely we know more now than we did 100 years ago? We’ve sequenced the genome, explored space and considerably lengthened the average human lifespan. We’ve invented aircraft, computers and nuclear energy. We’ve developed theories of relativity and quantum mechanics to explain how the universe works.

However, treating the history of science as a linear story of progression doesn’t reflect wholly how ideas emerge and are adapted, forgotten, rediscovered or ignored. While we are happy with the notion that the arts can return to old ideas, for example in neoclassicism, this idea is not commonly recognised in science. Is this constraint really present in principle? Or is it more a comment on received practice or, worse, on the general ignorance of the scientific community of its own intellectual history?

For one thing, not all lines of scientific enquiry are pursued to conclusion. For example, a few years ago, historian of science Hasok Chang undertook a careful examination of notebooks from scientists working in the 19th century. He unearthed notes from experiments in electrochemistry whose results received no explanation at the time. After repeating the experiments himself, Chang showed the results still don’t have a full explanation today. These research programmes had not been completed, simply put to one side and forgotten.

A March 1, 2017 essay by Giles Gasper (Durham University), Hannah Smithson (University of Oxford) and Tom McLeish (Durham University) for The Conversation, which originated the article, expands on the theme (Note: Links have been removed),

… looping back into forgotten scientific history might also provide an alternative, regenerative way of thinking that doesn’t rely on what has come immediately before it.

Collaborating with an international team of colleagues, we have taken this hypothesis further by bringing scientists into close contact with scientific treatises from the early 13th century. The treatises were composed by the English polymath Robert Grosseteste – who later became Bishop of Lincoln – between 1195 and 1230. They cover a wide range of topics we would recognise as key to modern physics, including sound, light, colour, comets, the planets, the origin of the cosmos and more.

We have worked with paleographers (handwriting experts) and Latinists to decipher Grosseteste’s manuscripts, and with philosophers, theologians, historians and scientists to provide intellectual interpretation and context to his work. As a result, we’ve discovered that scientific and mathematical minds today still resonate with Grosseteste’s deeply physical and structured thinking.

Our first intuition and hope was that the scientists might bring a new analytic perspective to these very technical texts. And so it proved: the deep mathematical structure of a small treatise on colour, the De colore, was shown to describe what we would now call a three-dimensional abstract co-ordinate space for colour.

But more was true. During the examination of each treatise, at some point one of the group would say: “Did anyone ever try doing …?” or “What would happen if we followed through with this calculation, supposing he meant …”. Responding to this thinker from eight centuries ago has, to our delight and surprise, inspired new scientific work of a rather fresh cut. It isn’t connected in a linear way to current research programmes, but sheds light on them from new directions.
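As an aside on the “three-dimensional abstract co-ordinate space for colour”: the familiar modern example of such a space is RGB, where every colour is a point with three coordinates and geometric distance stands in, roughly, for dissimilarity. Here is a toy Python illustration; the RGB axes are my stand-in for the general concept, not a claim about Grosseteste’s own dimensions:

```python
# Toy illustration of a 3-D colour co-ordinate space (RGB used purely as
# a familiar stand-in; Grosseteste's axes in De colore were different).
from math import dist

# Each colour is a point (r, g, b) in an abstract three-dimensional space.
red    = (1.0, 0.0, 0.0)
orange = (1.0, 0.5, 0.0)
blue   = (0.0, 0.0, 1.0)

# Euclidean distance in the space acts as a crude measure of dissimilarity.
print(dist(red, orange))  # 0.5   -> close neighbours in the space
print(dist(red, blue))    # ~1.41 -> far apart in the space
```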

I encourage you to read the essay in its entirety.

The Canadian science scene and the 2017 Canadian federal budget

There’s not much happening in the 2017-18 budget in terms of new spending according to Paul Wells’ March 22, 2017 article for TheStar.com,

This is the 22nd or 23rd federal budget I’ve covered. And I’ve never seen the like of the one Bill Morneau introduced on Wednesday [March 22, 2017].

Not even in the last days of the Harper Conservatives did a budget provide for so little new spending — $1.3 billion in the current budget year, total, in all fields of government. That’s a little less than half of one per cent of all federal program spending for this year.

But times are tight. The future is a place where we can dream. So the dollars flow more freely in later years. In 2021-22, the budget’s fifth planning year, new spending peaks at $8.2 billion. Which will be about 2.4 per cent of all program spending.

He’s not alone in this 2017 federal budget analysis; CBC (Canadian Broadcasting Corporation) pundits, Chantal Hébert, Andrew Coyne, and Jennifer Ditchburn said much the same during their ‘At Issue’ segment of the March 22, 2017 broadcast of The National (news).

Before I focus on the science and technology budget, here are some general highlights from the CBC’s March 22, 2017 article on the 2017-18 budget announcement (Note: Links have been removed),

Here are highlights from the 2017 federal budget:

  • Deficit: $28.5 billion, up from $25.4 billion projected in the fall.
  • Trend: Deficits gradually decline over next five years — but still at $18.8 billion in 2021-22.
  • Housing: $11.2 billion over 11 years, already budgeted, will go to a national housing strategy.
  • Child care: $7 billion over 10 years, already budgeted, for new spaces, starting 2018-19.
  • Indigenous: $3.4 billion in new money over five years for infrastructure, health and education.
  • Defence: $8.4 billion in capital spending for equipment pushed forward to 2035.
  • Care givers: New care-giving benefit up to 15 weeks, starting next year.
  • Skills: New agency to research and measure skills development, starting 2018-19.
  • Innovation: $950 million over five years to support business-led “superclusters.”
  • Startups: $400 million over three years for a new venture capital catalyst initiative.
  • AI: $125 million to launch a pan-Canadian Artificial Intelligence Strategy.
  • Coding kids: $50 million over two years for initiatives to teach children to code.
  • Families: Option to extend parental leave up to 18 months.
  • Uber tax: GST to be collected on ride-sharing services.
  • Sin taxes: One cent more on a bottle of wine, five cents on a case of 24 beers.
  • Bye-bye: No more Canada Savings Bonds.
  • Transit credit killed: 15 per cent non-refundable public transit tax credit phased out this year.

You can find the entire 2017-18 budget here.

Science and the 2017-18 budget

For anyone interested in the science news, you’ll find most of that in the 2017 budget’s Chapter 1 — Skills, Innovation and Middle Class jobs. As well, Wayne Kondro has written up a précis in his March 22, 2017 article for Science (magazine),

Finance officials, who speak on condition of anonymity during the budget lock-up, indicated the budgets of the granting councils, the main source of operational grants for university researchers, will be “static” until the government can assess recommendations that emerge from an expert panel formed in 2015 and headed by former University of Toronto President David Naylor to review basic science in Canada [highlighted in my June 15, 2016 posting; $2M has been allocated for the advisor and associated secretariat]. Until then, the officials said, funding for the Natural Sciences and Engineering Research Council of Canada (NSERC) will remain at roughly $848 million, whereas that for the Canadian Institutes of Health Research (CIHR) will remain at $773 million, and for the Social Sciences and Humanities Research Council [SSHRC] at $547 million.

NSERC, though, will receive $8.1 million over 5 years to administer a PromoScience Program that introduces youth, particularly underrepresented groups like Aboriginal people and women, to science, technology, engineering, and mathematics through measures like “space camps and conservation projects.” CIHR, meanwhile, could receive modest amounts from separate plans to identify climate change health risks and to reduce drug and substance abuse, the officials added.

… Canada’s Innovation and Skills Plan, would funnel $600 million over 5 years allocated in 2016, and $112.5 million slated for public transit and green infrastructure, to create Silicon Valley–like “super clusters,” which the budget defined as “dense areas of business activity that contain large and small companies, post-secondary institutions and specialized talent and infrastructure.” …

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

… Among more specific measures are vows to: use $87.7 million in previous allocations to the Canada Research Chairs program to create 25 “Canada 150 Research Chairs” honoring the nation’s 150th year of existence; provide $1.5 million per year to support the operations of the office of the as-yet-unappointed national science adviser [see my Dec. 7, 2016 post for information about the job posting, which is now closed]; provide $165.7 million [emphasis mine] over 5 years for the nonprofit organization Mitacs to create roughly 6300 more co-op positions for university students and grads; and provide $60.7 million over five years for new Canadian Space Agency projects, particularly for Canadian participation in the National Aeronautics and Space Administration’s next Mars Orbiter Mission.

Kondro was either reading an earlier version of the budget or made an error regarding Mitacs (from the budget, in the “A New, Ambitious Approach to Work-Integrated Learning” subsection),

Mitacs has set an ambitious goal of providing 10,000 work-integrated learning placements for Canadian post-secondary students and graduates each year—up from the current level of around 3,750 placements. Budget 2017 proposes to provide $221 million [emphasis mine] over five years, starting in 2017–18, to achieve this goal and provide relevant work experience to Canadian students.

As well, the budget item for the Pan-Canadian Artificial Intelligence Strategy is $125M.

Moving on from Kondro’s précis, the budget (in the “Positioning National Research Council Canada Within the Innovation and Skills Plan” subsection) announces support for these specific areas of science,

Stem Cell Research

The Stem Cell Network, established in 2001, is a national not-for-profit organization that helps translate stem cell research into clinical applications, commercial products and public policy. Its research holds great promise, offering the potential for new therapies and medical treatments for respiratory and heart diseases, cancer, diabetes, spinal cord injury, multiple sclerosis, Crohn’s disease, auto-immune disorders and Parkinson’s disease. To support this important work, Budget 2017 proposes to provide the Stem Cell Network with renewed funding of $6 million in 2018–19.

Space Exploration

Canada has a long and proud history as a space-faring nation. As our international partners prepare to chart new missions, Budget 2017 proposes investments that will underscore Canada’s commitment to innovation and leadership in space. Budget 2017 proposes to provide $80.9 million on a cash basis over five years, starting in 2017–18, for new projects through the Canadian Space Agency that will demonstrate and utilize Canadian innovations in space, including in the field of quantum technology as well as for Mars surface observation. The latter project will enable Canada to join the National Aeronautics and Space Administration’s (NASA’s) next Mars Orbiter Mission.

Quantum Information

The development of new quantum technologies has the potential to transform markets, create new industries and produce leading-edge jobs. The Institute for Quantum Computing is a world-leading Canadian research facility that furthers our understanding of these innovative technologies. Budget 2017 proposes to provide the Institute with renewed funding of $10 million over two years, starting in 2017–18.

Social Innovation

Through community-college partnerships, the Community and College Social Innovation Fund fosters positive social outcomes, such as the integration of vulnerable populations into Canadian communities. Following the success of this pilot program, Budget 2017 proposes to invest $10 million over two years, starting in 2017–18, to continue this work.

International Research Collaborations

The Canadian Institute for Advanced Research (CIFAR) connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics that touch all humanity. Past collaborations facilitated by CIFAR are credited with fostering Canada’s leadership in artificial intelligence and deep learning. Budget 2017 proposes to provide renewed and enhanced funding of $35 million over five years, starting in 2017–18.

Earlier this week, I highlighted Canada’s strength in the field of regenerative medicine, specifically stem cells, in a March 21, 2017 posting. The $6M in the current budget doesn’t look like increased funding but rather a one-year extension. I’m sure they’re happy to receive it, but I imagine it’s a little hard to plan major research projects when you’re not sure how long your funding will last.

As for Canadian leadership in artificial intelligence, that was news to me. Here’s more from the budget,

Canada a Pioneer in Deep Learning in Machines and Brains

CIFAR’s Learning in Machines & Brains program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” a computer technique inspired by the human brain and neural networks, which is now routinely used by the likes of Google and Facebook. The program brings together computer scientists, biologists, neuroscientists, psychologists and others, and the result is rich collaborations that have propelled artificial intelligence research forward. The program is co-directed by one of Canada’s foremost experts in artificial intelligence, the Université de Montréal’s Yoshua Bengio, and for his many contributions to the program, the University of Toronto’s Geoffrey Hinton, another Canadian leader in this field, was awarded the title of Distinguished Fellow by CIFAR in 2014.
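Since “deep learning” is invoked here without much explanation, a bare-bones illustration may help: at bottom it means stacking layers of weighted sums and nonlinearities, then adjusting the weights by gradient descent on a loss function. The toy network below is my own illustrative sketch in plain NumPy (it has nothing to do with CIFAR’s or anyone’s research code); it learns the XOR function, a classic problem a single linear layer cannot solve:

```python
# Minimal two-layer neural network trained by gradient descent (toy example).
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: weighted sums plus nonlinearities.
    h = np.tanh(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output in (0, 1)

    # Backward pass: hand-derived gradients of the cross-entropy loss.
    d_out = (out - y) / len(X)         # gradient at the pre-sigmoid output
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h**2)  # backpropagate through tanh
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent weight updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]
```

The example is trivial next to what Bengio’s and Hinton’s groups do, but the ingredients — layered nonlinear transformations and gradient-based learning — are the same ones scaled up in modern systems.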

Meanwhile, from chapter 1 of the budget in the subsection titled “Preparing for the Digital Economy,” there is this provision for children,

Providing educational opportunities for digital skills development to Canadian girls and boys—from kindergarten to grade 12—will give them the head start they need to find and keep good, well-paying, in-demand jobs. To help provide coding and digital skills education to more young Canadians, the Government intends to launch a competitive process through which digital skills training organizations can apply for funding. Budget 2017 proposes to provide $50 million over two years, starting in 2017–18, to support these teaching initiatives.

I wonder if BC Premier Christy Clark is heaving a sigh of relief. At the 2016 #BCTECH Summit, she announced that students in BC would learn to code at school and in newly enhanced coding camp programmes (see my Jan. 19, 2016 posting). Interestingly, there was no mention of additional funding to support her initiative. I guess this money from the federal government comes at a good time as we will have a provincial election later this spring where she can announce the initiative again and, this time, mention there’s money for it.

Attracting brains from afar

Ivan Semeniuk in his March 23, 2017 article (for the Globe and Mail) reads between the lines to analyze the budget’s possible impact on Canadian science,

But a between-the-lines reading of the budget document suggests the government also has another audience in mind: uneasy scientists from the United States and Britain.

The federal government showed its hand at the 2017 #BCTECH Summit. From a March 16, 2017 article by Meera Bains for CBC News online,

At the B.C. tech summit, Navdeep Bains, Canada’s minister of innovation, said the government will act quickly to fast track work permits to attract highly skilled talent from other countries.

“We’re taking the processing time, which takes months, and reducing it to two weeks for immigration processing for individuals [who] need to come here to help companies grow and scale up,” Bains said.

“So this is a big deal. It’s a game changer.”

That change will happen through the Global Talent Stream, a new program under the federal government’s temporary foreign worker program.  It’s scheduled to begin on June 12, 2017.

U.S. companies are taking notice and a Canadian firm, True North, is offering to help them set up shop.

“What we suggest is that they think about moving their operations, or at least a chunk of their operations, to Vancouver, set up a Canadian subsidiary,” said the company’s founder, Michael Tippett.

“And that subsidiary would be able to house and accommodate those employees.”

Industry experts say that while the future is unclear for the tech sector in the U.S., it’s clear high tech in B.C. is gearing up to take advantage.

US business attempts to take advantage of Canada’s relative stability and openness to immigration would seem to be the motive for at least one cross border initiative, the Cascadia Urban Analytics Cooperative. From my Feb. 28, 2017 posting,

There was some big news about the smallest version of the Cascadia region on Thursday, Feb. 23, 2017 when the University of British Columbia (UBC), the University of Washington (state; UW), and Microsoft announced the launch of the Cascadia Urban Analytics Cooperative. From the joint Feb. 23, 2017 news release (read on the UBC website or read on the UW website),

In an expansion of regional cooperation, the University of British Columbia and the University of Washington today announced the establishment of the Cascadia Urban Analytics Cooperative to use data to help cities and communities address challenges from traffic to homelessness. The largest industry-funded research partnership between UBC and the UW, the collaborative will bring faculty, students and community stakeholders together to solve problems, and is made possible thanks to a $1-million gift from Microsoft.

Today’s announcement follows last September’s [2016] Emerging Cascadia Innovation Corridor Conference in Vancouver, B.C. The forum brought together regional leaders for the first time to identify concrete opportunities for partnerships in education, transportation, university research, human capital and other areas.

A Boston Consulting Group study unveiled at the conference showed the region between Seattle and Vancouver has “high potential to cultivate an innovation corridor” that competes on an international scale, but only if regional leaders work together. The study says that could be possible through sustained collaboration aided by an educated and skilled workforce, a vibrant network of research universities and a dynamic policy environment.

It gets better, it seems Microsoft has been positioning itself for a while if Matt Day’s analysis is correct (from my Feb. 28, 2017 posting),

Matt Day in a Feb. 23, 2017 article for The Seattle Times provides additional perspective (Note: Links have been removed),

Microsoft’s effort to nudge Seattle and Vancouver, B.C., a bit closer together got an endorsement Thursday [Feb. 23, 2017] from the leading university in each city.

The partnership has its roots in a September [2016] conference in Vancouver organized by Microsoft’s public affairs and lobbying unit [emphasis mine]. That gathering was aimed at tying business, government and educational institutions in Microsoft’s home region in the Seattle area closer to its Canadian neighbor.

Microsoft last year [2016] opened an expanded office in downtown Vancouver with space for 750 employees, an outpost partly designed to draw to the Northwest more engineers than the company can get through the U.S. guest worker system [emphasis mine].

This was all prior to President Trump’s legislative moves in the US, which have at least one Canadian observer a little more gleeful than I’m comfortable with. From a March 21, 2017 article by Susan Lum  for CBC News online,

U.S. President Donald Trump’s efforts to limit travel into his country while simultaneously cutting money from science-based programs provides an opportunity for Canada’s science sector, says a leading Canadian researcher.

“This is Canada’s moment. I think it’s a time we should be bold,” said Alan Bernstein, president of CIFAR [which on March 22, 2017 was awarded $125M in the Canadian federal budget announcement to launch the Pan-Canadian Artificial Intelligence Strategy], a global research network that funds hundreds of scientists in 16 countries.

Bernstein believes there are many reasons why Canada has become increasingly attractive to scientists around the world, including the political climate in the United States and the Trump administration’s travel bans.

Thankfully, Bernstein calms down a bit,

“It used to be if you were a bright young person anywhere in the world, you would want to go to Harvard or Berkeley or Stanford, or what have you. Now I think you should give pause to that,” he said. “We have pretty good universities here [emphasis mine]. We speak English. We’re a welcoming society for immigrants.”​

Bernstein cautions that Canada should not be seen to be poaching scientists from the United States — but there is an opportunity.

“It’s as if we’ve been in a choir of an opera in the back of the stage and all of a sudden the stars all left the stage. And the audience is expecting us to sing an aria. So we should sing,” Bernstein said.

Bernstein said the federal government, with this week’s so-called innovation budget, can help Canada hit the right notes.

“Innovation is built on fundamental science, so I’m looking to see if the government is willing to support, in a big way, fundamental science in the country.”

Pretty good universities, eh? Thank you, Dr. Bernstein, for keeping some of the boosterism in check. Let’s leave the chest thumping to President Trump and his cronies.

Ivan Semeniuk’s March 23, 2017 article (for the Globe and Mail) provides more details about the situation in the US and in Britain,

Last week, Donald Trump’s first budget request made clear the U.S. President would significantly reduce or entirely eliminate research funding in areas such as climate science and renewable energy if permitted by Congress. Even the National Institutes of Health, which spearheads medical research in the United States and is historically supported across party lines, was unexpectedly targeted for a $6-billion (U.S.) cut that the White House said could be achieved through “efficiencies.”

In Britain, a recent survey found that 42 per cent of academics were considering leaving the country over worries about a less welcoming environment and the loss of research money that a split with the European Union is expected to bring.

In contrast, Canada’s upbeat language about science in the budget makes a not-so-subtle pitch for diversity and talent from abroad, including $117.6-million to establish 25 research chairs with the aim of attracting “top-tier international scholars.”

For good measure, the budget also includes funding for science promotion and $2-million annually for Canada’s yet-to-be-hired Chief Science Advisor, whose duties will include ensuring that government researchers can speak freely about their work.

“What we’ve been hearing over the last few months is that Canada is seen as a beacon, for its openness and for its commitment to science,” said Ms. Duncan [Kirsty Duncan, Minister of Science], who did not refer directly to either the United States or Britain in her comments.

Providing a less optimistic note, Erica Alini in her March 22, 2017 online article for Global News mentions a perennial problem, the Canadian brain drain,

The budget includes a slew of proposed reforms and boosted funding for existing training programs, as well as new skills-development resources for unemployed and underemployed Canadians not covered under current EI-funded programs.

There are initiatives to help women and indigenous people get degrees or training in science, technology, engineering and mathematics (the so-called STEM subjects) and even to teach kids as young as kindergarten-age to code.

But there was no mention of how to make sure Canadians with the right skills remain in Canada, TD Economics’ DePratto told Global News. (TD is the Toronto-Dominion Bank, which is currently experiencing a scandal; see the March 13, 2017 Huffington Post news item.)

Canada ranks in the middle of the pack compared to other advanced economies when it comes to its share of graduates in STEM fields, but the U.S. doesn’t shine either, said DePratto [Brian DePratto, senior economist at TD].

The key difference between Canada and the U.S. is the ability to retain domestic talent and attract brains from all over the world, he noted.

To be blunt, there may be some opportunities for Canadian science but it does well to remember (a) US businesses have no particular loyalty to Canada and (b) all it takes is an election to change any perceived advantages to disadvantages.

Digital policy and intellectual property issues

Dubbed by some as the ‘innovation’ budget (official title: Building a Strong Middle Class), this budget attempts to address a longstanding innovation issue. From a March 22, 2017 posting by Michael Geist on his eponymous blog (Note: Links have been removed),

The release of today’s [March 22, 2017] federal budget is expected to include a significant emphasis on innovation, with the government revealing how it plans to spend (or re-allocate) hundreds of millions of dollars intended to support innovation. Canada’s dismal innovation record needs attention, but spending our way to a more innovative economy is unlikely to yield the desired results. While Navdeep Bains, the Innovation, Science and Economic Development Minister, has talked for months about the importance of innovation, Toronto Star columnist Paul Wells today delivers a cutting but accurate assessment of those efforts:

“This government is the first with a minister for innovation! He’s Navdeep Bains. He frequently posts photos of his meetings on Twitter, with the hashtag “#innovation.” That’s how you know there is innovation going on. A year and a half after he became the minister for #innovation, it’s not clear what Bains’s plans are. It’s pretty clear that within the government he has less than complete control over #innovation. There’s an advisory council on economic growth, chaired by the McKinsey guru Dominic Barton, which periodically reports to the government urging more #innovation.

There’s a science advisory panel, chaired by former University of Toronto president David Naylor, that delivered a report to Science Minister Kirsty Duncan more than three months ago. That report has vanished. One presumes that’s because it offered some advice. Whatever Bains proposes, it will have company.”

Wells is right. Bains has been very visible with plenty of meetings and public photo shoots but no obvious innovation policy direction. This represents a missed opportunity since Bains has plenty of policy tools at his disposal that could advance Canada’s innovation framework without focusing on government spending.

For example, Canada’s communications system – wireless and broadband Internet access – falls directly within his portfolio and is crucial for both business and consumers. Yet Bains has been largely missing in action on the file. He gave approval for the Bell – MTS merger that virtually everyone concedes will increase prices in the province and make the communications market less competitive. There are potential policy measures that could bring new competitors into the market (MVNOs [mobile virtual network operators] and municipal broadband) and that could make it easier for consumers to switch providers (ban on unlocking devices). Some of this falls to the CRTC, but government direction and emphasis would make a difference.

Even more troubling has been his near total invisibility on issues relating to new fees or taxes on Internet access and digital services. Canadian Heritage Minister Mélanie Joly has taken control of the issue with the possibility that Canadians could face increased costs for their Internet access or digital services through mandatory fees to contribute to Canadian content.  Leaving aside the policy objections to such an approach (reducing affordable access and the fact that foreign sources now contribute more toward Canadian English language TV production than Canadian broadcasters and distributors), Internet access and e-commerce are supposed to be Bains’ issue and they have a direct connection to the innovation file. How is it possible for the Innovation, Science and Economic Development Minister to have remained silent for months on the issue?

Bains has been largely missing on trade related innovation issues as well. My Globe and Mail column today focuses on a digital-era NAFTA, pointing to likely U.S. demands on data localization, data transfers, e-commerce rules, and net neutrality.  These are all issues that fall under Bains’ portfolio and will impact investment in Canadian networks and digital services. There are innovation opportunities for Canada here, but Bains has been content to leave the policy issues to others, who will be willing to sacrifice potential gains in those areas.

Intellectual property policy is yet another area that falls directly under Bains’ mandate with an obvious link to innovation, but he has done little on the file. Canada won a huge NAFTA victory late last week involving the Canadian patent system, which was challenged by pharmaceutical giant Eli Lilly. Why has Bains not promoted the decision as an affirmation of Canada’s intellectual property rules?

On the copyright front, the government is scheduled to conduct a review of the Copyright Act later this year, but it is not clear whether Bains will take the lead or again cede responsibility to Joly. The Copyright Act is statutorily under the Industry Minister and reform offers the chance to kickstart innovation. …

For anyone who’s not familiar with this area, innovation is often code for commercialization of science and technology research efforts. These days, digital service and access policies and intellectual property policies are all key to research and innovation efforts.

The country that’s most often (except in mainstream Canadian news media) held up as an example of leadership in innovation is Estonia. The Economist profiled the country in a July 31, 2013 article, and a July 7, 2016 article on apolitical.co provides an update.

Conclusions

Science monies for the tri-council science funding agencies (NSERC, SSHRC, and CIHR) are more or less flat, but a number of line items in the federal budget qualify as science funding: the $221M over five years for Mitacs, the $125M for the Pan-Canadian Artificial Intelligence Strategy, additional funding for the Canada Research Chairs, and some of the digital funding could also be included as part of the overall haul. This is in line with the former government’s (Stephen Harper’s Conservatives) penchant for keeping the tri-council budgets under control while spreading largesse elsewhere, notably to the Perimeter Institute, TRIUMF [Canada’s National Laboratory for Particle and Nuclear Physics], and, in the 2015 budget, $243.5-million towards the Thirty Metre Telescope (TMT), a massive astronomical observatory to be constructed on the summit of Mauna Kea, Hawaii, and a $1.5-billion project overall. This has led to some hard feelings in the past, with ‘big science’ projects getting what some felt was an undeserved financial boost while the ‘small fish’ were left scrabbling for the ever-diminishing pittances (due to budget cuts in years past and inflation) available from the tri-council agencies.

Mitacs, which started life as a federally funded Network Centre for Excellence focused on mathematics, has since shifted focus to become an innovation ‘champion’. You can find Mitacs here and you can find the organization’s March 2016 budget submission to the House of Commons Standing Committee on Finance here. At the time, they did not request a specific amount of money; they just asked for more.

The amount Mitacs expects to receive this year is over $40M ($221M spread evenly over five years works out to roughly $44M annually). For perspective, the federal government contributed $39,900,189 in the 2015-16 fiscal year, making it Mitacs’ largest supporter, while Mitacs’ total income (receipts) was $81,993,390, according to its 2015-16 annual report (see p. 327 for the Mitacs Statement of Operations to March 31, 2016). In other words, federal money already accounted for almost half of Mitacs’ income.

It’s a strange thing, but too much money can be as bad as too little. I wish the folks at Mitacs nothing but good luck with their windfall.

I don’t see anything in the budget that encourages innovation and investment from the industrial sector in Canada.

Finally, innovation is a cultural issue as much as it is a financial one. Having worked with a number of developers and start-up companies, I can say the most popular business model is to build a successful business that will be acquired by a large enterprise, thereby allowing the entrepreneurs to retire before the age of 30 (or 40 at the latest). I don’t see anything from the government acknowledging that problem, let alone any attempt to tackle it.

All in all, it was a decent budget with nothing in it to seriously offend anyone.