Tag Archives: artificial intelligence

Media registration is open for the 2018 ITU (International Telecommunication Union) Plenipotentiary Conference (PP-18) being held 29 October – 16 November 2018 in Dubai

I’m a little late with this but there’s still time to register should you happen to be in Dubai or able to get there easily. From an October 18, 2018 International Telecommunication Union (ITU) Media Advisory (received via email),

Media registration is open for the 2018 ITU Plenipotentiary Conference (PP-18) – the highest policy-making body of the International Telecommunication Union (ITU), the United Nations’ specialized agency for information and communication technology. This will be closing soon, so all media intending to attend the event MUST register as soon as possible here.

Held every four years, it is the key event at which ITU’s 193 Member States decide on the future role of the organization, thereby determining ITU’s ability to influence and affect the development of information and communication technologies (ICTs) worldwide. It is expected to attract around 3,000 participants, including Heads of State and an estimated 130 VIPs from more than 193 Member States and more than 800 private companies, academic institutions and national, regional and international bodies.

ITU plays an integral role in enabling the development and implementation of ICTs worldwide through its mandate to: coordinate the shared global use of the radio spectrum, promote international cooperation in assigning satellite orbits, work to improve communication infrastructure in the developing world, and establish worldwide standards that foster seamless interconnection of a vast range of communications systems.

Delegates will tackle a number of pressing issues, from strategies to promote digital inclusion and bridge the digital divide, to ways to leverage such emerging technologies as the Internet of Things, Artificial Intelligence, 5G, and others, to improve the way all of us, everywhere, live and work.

The conference also sets ITU’s Financial Plan and elects its five top executives – Secretary-General, Deputy Secretary-General, and the Directors of the Radiocommunication, Telecommunication Standardization and Telecommunication Development Bureaux – who will guide its work over the next four years.

What: ITU Plenipotentiary Conference 2018 (PP-18) sets the next four-year strategy, budget and leadership of ITU.

Why: Finance, Business, Tech, Development and Foreign Affairs reporters will find PP-18 relevant to their newsgathering. Decisions made at PP-18 are designed to create an enabling ICT environment where the benefits of digital connectivity can reach all people and economies, everywhere. As such, these decisions can have an impact on the telecommunication and technology sectors as well as developed and developing countries alike.

When: 29 October – 16 November 2018, with several Press Conferences planned during the event.

* Historically the Opening, Closing and Plenary sessions of this conference are open to media. Confirmation of those sessions open to media, and Press Conference times, will be made closer to the event date.

Where: Dubai World Trade Center, Dubai, United Arab Emirates

More Information:

REGISTER FOR ACCREDITATION

I visited the ‘ITU Events Registration and Accreditation Process for Media’ webpage and found these tidbits,

Accreditation eligibility & credentials 

1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int, along with the required supporting credentials below:​

    • print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;

      o 2 copies of recent byline articles published within the last 4 months.
    • news wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;

      o 2 copies of recent byline articles or broadcasting material published within the last 4 months.
    • broadcast should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;

      o broadcasting material published within the last 4 months.
    • freelance journalists, including photographers, must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter at the discretion of the ITU Media Relations Service.

      o a valid assignment letter from the news organization or publication.

 2. Bloggers may be granted accreditation if blog content is deemed relevant to the industry, contains news commentary, is regularly updated and made publicly available. Corporate bloggers are invited to register as participants. Please see Guidelines for Blogger Accreditation below for more details.

Guidelines for Blogger Accreditation

ITU is committed to working with independent ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs and other online media. These are the guidelines we use to determine whether to issue official media accreditation to independent online media representatives: 

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. 

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int. 

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn. 

If you can’t find answers to your questions on the ‘ITU Events Registration and Accreditation Process for Media’ webpage, you can contact,

For media accreditation inquiries:


Rita Soraya Abino-Quintana
Media Accreditation Officer
ITU Corporate Communications

Tel: +41 22 730 5424

For anything else, contact,

For general media inquiries:


Jennifer Ferguson-Mitchell
Senior Media and Communications Officer
ITU Corporate Communications

Tel: +41 22 730 5469

Mobile: +41 79 337 4615

There you have it.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), here’s a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and which does not mimic a human or other biological organism so closely that you might, under some circumstances, mistake it for one.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country, with their makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen the designs are as good, and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before, most recently in a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
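For the curious, the three backchannel qualities Kawahara names (timing, lexical form, prosody) can be sketched in code. To be clear, this is a purely hypothetical illustration of my own: the function, pause threshold, and word list below are invented, and ERICA’s actual responses are learned from a counseling dialogue corpus rather than hand-written rules like these.

```python
import random

# Invented word list and pause threshold -- not from the ERICA system.
AGREEMENT = ["uh-huh", "mm-hm", "I see"]

def backchannel(utterance, pause_ms):
    """Return a backchannel response, or None if it's too soon to speak."""
    # Timing: only respond once the speaker has paused long enough.
    if pause_ms < 400:
        return None
    words = utterance.strip().split()
    # Lexical form: for longer statements, do an attentive-listening
    # "partial repeat" of the speaker's last word; otherwise just agree.
    if len(words) > 4:
        return words[-1].rstrip(".!?") + "?"
    return random.choice(AGREEMENT)

print(backchannel("I went to the museum with my sister yesterday", 650))
# -> yesterday?
```

Prosody, the third quality (how the response is spoken), would be handled downstream by the speech synthesizer and is omitted here.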

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robots/artificial intelligence are often used interchangeably and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
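To make Wong’s two definitions a little more concrete, here is a deliberately tiny sketch of my own (not from her post): a “deep” stack of two layers provides the brain-inspired structure, and a training loop lets the computer keep modifying its parameters based on the data until its predictions match the desired outputs. All the weights and data values are arbitrary numbers chosen purely for illustration.

```python
import math

# "Deep learning" structure: layers of simple units, each taking the
# previous layer's outputs as its inputs (arbitrary illustrative weights).
def layer(inputs, weights):
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]  # 2 inputs -> 3 units

def features(inputs):
    return layer(inputs, hidden)

# "Machine learning": the computer adjusts its own parameters (here,
# just the output weights) to shrink its error on the training data.
data = [([1.0, 0.5], 0.4), ([0.2, 0.9], -0.1)]  # (inputs, desired output)

def train(steps=2000, lr=0.1):
    out_w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for inputs, target in data:
            h = features(inputs)
            pred = sum(w * f for w, f in zip(out_w, h))
            out_w = [w - lr * (pred - target) * f
                     for w, f in zip(out_w, h)]
    return out_w

out_w = train()
for inputs, target in data:
    pred = sum(w * f for w, f in zip(out_w, features(inputs)))
    print(round(pred, 2), "vs target", target)
```

Real systems differ mainly in scale: millions of parameters, many more layers, and adjustments flowing back through every layer rather than just the last one.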

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog should you have any insights on the matter.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And the evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems artificial intelligence (AI) systems have made inroads into the diagnosis of eye diseases. The story got the ‘Fast Company’ treatment (exciting new tech, learn all about it), as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system, which can recommend the correct referral decision for more than 50 eye diseases as accurately as experts, has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research than can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.
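Before moving on, the two-network design described in the press release can be sketched in miniature. This is purely my illustration of the data flow (segment the scan into named tissue features, then map those features to a referral tier with a confidence percentage, which clinicians can scrutinise); the feature names, thresholds and numbers below are invented, and the real system uses deep convolutional networks trained on thousands of OCT scans, not lookup rules.

```python
# Hypothetical sketch of the two-stage design: stage 1 stands in for the
# segmentation network (raw scan -> detected disease features), stage 2 for
# the classification network (features -> referral tier + confidence).
# Decoupling the stages is what lets stage 2 survive a scanner upgrade:
# only stage 1 needs retraining for a new device.

URGENT_FEATURES = {"choroidal neovascularisation", "macular haemorrhage"}
ROUTINE_FEATURES = {"drusen"}

def segment(scan):
    """Stage 1 stand-in: return the set of disease features 'seen' in a scan.
    A real segmentation network would emit a per-voxel tissue map."""
    return set(scan.get("features", []))

def recommend(features):
    """Stage 2 stand-in: map detected features to a referral tier plus a
    confidence score, mimicking the percentage shown to clinicians."""
    if features & URGENT_FEATURES:
        return ("urgent referral", 0.96)
    if features & ROUTINE_FEATURES:
        return ("routine referral", 0.88)
    return ("observation only", 0.90)

scan = {"features": ["drusen", "choroidal neovascularisation"]}
decision, confidence = recommend(segment(scan))
print(decision, f"{confidence:.0%}")  # urgent referral 96%
```

Again, this is a toy; its only point is to show why reporting both the detected features and a confidence figure gives clinicians something concrete to check before acting on the recommendation.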

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence

It seems the Royal Bank of Canada (RBC or Royal Bank) wants to weigh in on, and influence, what new technologies will bring us and how they will affect our working lives. (I will be offering my critiques of the whole thing.)

Launch yourself into the future (if you’re a youth)

“I’m not planning on being replaced by a robot.” That’s the first line of text you’ll see if you go to the Royal Bank of Canada’s new Future Launch web space and latest marketing campaign and investment.

This whole endeavour is aimed at ‘youth’ and represents a $500M investment. Of course, that money will be invested over a 10-year period which works out to $50M per year and doesn’t seem quite so munificent given how much money Canadian banks make (from a March 1, 2017 article by Don Pittis for the Canadian Broadcasting Corporation [CBC] news website),

Yesterday [February 28, 2017] the Bank of Montreal [BMO] said it had made about $1.5 billion in three months.

That may be hard to put in context until you hear that it is an increase in profit of nearly 40 per cent from the same period last year and dramatically higher than stock watchers had been expecting.

Not all the banks have done as well as BMO this time. The Royal Bank’s profits were up 24 per cent at $3 billion. [emphasis mine] CIBC [Canadian Imperial Bank of Commerce] profits were up 13 per cent. TD [Toronto Dominion] releases its numbers tomorrow.

Those numbers would put RBC on track to a profit of roughly $12B in 2017. This means $500M represents approximately 4.2% of a single year’s profits; disbursed over a 10-year period, the investment works out to roughly 0.42% of annual profits per year, or less than half of one percent. Paradoxically, it’s a lot of money and it’s not that much money.
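For the record, the back-of-envelope arithmetic takes only a few lines (a throwaway sketch, using the $3B quarterly profit figure quoted in the CBC piece above):

```python
# Back-of-envelope check on the Future Launch commitment vs. RBC profits.
quarterly_profit = 3_000_000_000        # RBC Q1 2017 profit, per CBC
annual_profit = quarterly_profit * 4    # ~ $12B for the year
commitment = 500_000_000                # Future Launch total, over 10 years

share_of_one_year = commitment / annual_profit          # whole commitment vs. one year
share_per_year = (commitment / 10) / annual_profit      # $50M/year vs. one year

print(f"{share_of_one_year:.1%} of a single year's profit")   # 4.2%
print(f"{share_per_year:.2%} of annual profit, per year")     # 0.42%
```

Which is why the commitment can be described, simultaneously, as a lot of money and not much money at all.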

Advertising awareness

First, there was some advertising (in Vancouver at least),

[downloaded from http://flinflononline.com/local-news/356505]

You’ll notice she has what could be described as a ‘halo’. Is she an angel or, perhaps, she’s an RBC angel? After all, yellow and gold are closely associated as colours and RBC sports a partially yellow logo. As well, the model is wearing a blue denim jacket, RBC’s other logo colour.

Her ‘halo’ is intact but those bands of colour bend a bit and could be described as ‘rainbow-like’ bringing to mind ‘pots of gold’ at the end of the rainbow.  Free association is great fun and allows people to ascribe multiple and/or overlapping ideas and stories to the advertising. For example, people who might not approve of imagery that hearkens to religious art might have an easier time with rainbows and pots of gold. At any rate, none of the elements in images/ads are likely to be happy accidents or coincidence. They are intended to evoke certain associations, e.g., anyone associated with RBC will be blessed with riches.

The timing is deliberate, too, just before Easter 2018 (April 1), suggesting to some us, that even when the robots arrive destroying the past, youth will rise up (resurrection) for a new future. Or, if you prefer, Passover and its attendant themes of being spared and moving to the Promised Land.

Enough with the semiotic analysis and onto campaign details.

Humans Wanted: an RBC report

It seems the precursor to Future Launch is an RBC report, ‘Humans Wanted’, which itself is the outcome of still earlier work such as this Brookfield Institute for Innovation + Entrepreneurship (BII+E) report, Future-proof: Preparing young Canadians for the future of work, March 2017 (authors: Creig Lamb and Sarah Doyle), which features a quote from RBC’s President and CEO (Chief Executive Officer) David McKay,

“Canada’s future prosperity and success will rely on us harnessing the innovation of our entire talent pool. A huge part of our success will depend on how well we integrate this next generation of Canadians into the workforce. Their confidence, optimism and inspiration could be the key to helping us reimagine traditional business models, products and ways of working.”  David McKay, President and CEO, RBC

There are a number of major trends that have the potential to shape the future of work, from climate change and resource scarcity to demographic shifts resulting from an aging population and immigration. This report focuses on the need to prepare Canada’s youth for a future where a great number of jobs will be rapidly created, altered or made obsolete by technology.

Successive waves of technological advancements have rocked global economies for centuries, reconfiguring the labour force and giving rise to new economic opportunities with each wave. Modern advances, including artificial intelligence and robotics, once again have the potential to transform the economy, perhaps more rapidly and more dramatically than ever before. As past pillars of Canada’s economic growth become less reliable, harnessing technology and innovation will become increasingly important in driving productivity and growth. 1, 2, 3

… (p. 2 print; p. 4 PDF)

The Brookfield Institute (at Ryerson University in Toronto, Ontario, Canada) report is worth reading if for no other reason than its Endnotes. Unlike the RBC materials, you can find the source for the information in the Brookfield report.

After Brookfield, there was the RBC Future Launch Youth Forums 2017: What We Learned  document (October 13, 2017 according to ‘View Page Info’),

In this rapidly changing world, there’s a new reality when it comes to work. A degree or diploma no longer guarantees a job, and some of the positions, skills and trades of today won’t exist – or be relevant – in the future.

Through an unprecedented 10-year, $500 million commitment, RBC Future Launch™ is focused on driving real change and preparing today’s young people for the future world of work, helping them access the skills, job experience and networks that will enable their success.

At the beginning of this 10-year journey RBC® wanted to go beyond research and expert reports to better understand the regional issues facing youth across Canada and to hear directly from young people and organizations that work with them. From November 2016 to May 2017, the RBC Future Launch team held 15 youth forums across the country, bringing together over 430 partners, including young people, to uncover ideas and talk through solutions to address the workforce gaps Canada’s youth face today.

Finally, a March 26, 2018 RBC news release announces the RBC report: ‘Humans Wanted – How Canadian youth can thrive in the age of disruption’,

Automation to impact at least 50% of Canadian jobs in the next decade: RBC research

Human intelligence and intuition critical for young people and jobs of the future

  • Being ‘human’ will ensure resiliency in an era of disruption and artificial intelligence
  • Skills mobility – the ability to move from one job to another – will become a new competitive advantage

TORONTO, March 26, 2018 – A new RBC research paper, Humans Wanted – How Canadian youth can thrive in the age of disruption, has revealed that 50% of Canadian jobs will be disrupted by automation in the next 10 years.

As a result of this disruption, Canada’s Gen Mobile – young people who are currently transitioning from education to employment – are unprepared for the rapidly changing workplace. With 4 million Canadian youth entering the workforce over the next decade, and the shift from a jobs economy to a skills economy, the research indicates young people will need a portfolio of “human skills” to remain competitive and resilient in the labour market.

“Canada is at a historic cross-roads – we have the largest generation of young people coming into the workforce at the very same time technology is starting to impact most jobs in the country,” said Dave McKay, President and CEO, RBC. “Canada is on the brink of a skills revolution and we have a responsibility to prepare young people for the opportunities and ambiguities of the future.”

“There is a changing demand for skills,” said John Stackhouse, Senior Vice-President, RBC. “According to our findings, if employers and the next generation of employees focus on foundational ‘human skills’, they’ll be better able to navigate a new age of career mobility as technology continues to reshape every aspect of the world around us.”

Key Findings:

  • Canada’s economy is on target to add 2.4 million jobs over the next four years, virtually all of which will require a different mix of skills.
  • A growing demand for “human skills” will grow across all job sectors and include: critical thinking, co-ordination, social perceptiveness, active listening and complex problem solving.
  • Rather than a nation of coders, digital literacy – the ability to understand digital items, digital technologies or the Internet fluently – will be necessary for all new jobs.
  • Canada’s education system, training programs and labour market initiatives are inadequately designed to help Canadian youth navigate the new skills economy, resulting in roughly half a million 15-29 year olds who are unemployed and another quarter of a million who are working part-time involuntarily.
  • Canadian employers are generally not prepared, through hiring, training or retraining, to recruit and develop the skills needed to ensure their organizations remain competitive in the digital economy.

“As digital and machine technology advances, the next generation of Canadians will need to be more adaptive, creative and collaborative, adding and refining skills to keep pace with a world of work undergoing profound change,” said McKay. “Canada’s future prosperity depends on getting a few big things right and that’s why we’ve introduced RBC Future Launch.”

RBC Future Launch is a decade-long commitment to help Canadian youth prepare for the jobs of tomorrow. RBC is committed to acting as a catalyst for change, bringing government, educators, public sector and not-for-profits together to co-create solutions to help young people better prepare for the future of the work through “human skills” development, networking and work experience.

Top recommendations from the report include:

  • A national review of post-secondary education programs to assess their focus on “human skills” including global competencies
  • A national target of 100% work-integrated learning, to ensure every undergraduate student has the opportunity for an apprenticeship, internship, co-op placement or other meaningful experiential placement
  • Standardization of labour market information across all provinces and regions, and a partnership with the private sector to move skills and jobs information to real-time, interactive platforms
  • The introduction of a national initiative to help employers measure foundational skills and incorporate them in recruiting, hiring and training practices

Join the conversation with Dave McKay and John Stackhouse on Wednesday, March 28 [2018] at 9:00 a.m. to 10:00 a.m. EDT at RBC Disruptors on Facebook Live.

Click here to read: Humans Wanted – How Canadian youth can thrive in the age of disruption.

About the Report
RBC Economics amassed a database of 300 occupations and drilled into the skills required to perform them now and projected into the future. The study groups the Canadian economy into six major clusters based on skillsets as opposed to traditional classifications and sectors. This cluster model is designed to illustrate the ease of transition between dissimilar jobs as well as the relevance of current skills to jobs of the future.

Six Clusters
Doers: Emphasis on basic skills
Transition: Greenhouse worker to crane operator
High Probability of Disruption

Crafters: Medium technical skills; low in management skills
Transition: Farmer to plumber
Very High Probability of Disruption

Technicians: High in technical skills
Transition: Car mechanic to electrician
Moderate Probability of Disruption

Facilitators: Emphasis on emotional intelligence
Transition: Dental assistant to graphic designer
Moderate Probability of Disruption

Providers: High in Analytical Skills
Transition: Real estate agent to police officer
Low Probability of Disruption

Solvers: Emphasis on management skills and critical thinking
Transition: Mathematician to software engineer
Minimal Probability of Disruption

About RBC
Royal Bank of Canada is a global financial institution with a purpose-driven, principles-led approach to delivering leading performance. Our success comes from the 81,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to our 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.‎

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at http://www.rbc.com/community-sustainability/.

– 30 – 

The report features a lot of bulleted points, airy text (large fonts and lots of space between the lines), inoffensive graphics, and human interest stories illustrating the points made elsewhere in the text.

There is no bibliography or any form of note telling you where to find the sources for the information in the report. The 2.4M jobs mentioned in the news release are also mentioned in the report on p. 16 (PDF) and are credited in the main body of the text to the EDSC. I’m not up-to-date on my abbreviations but I’m pretty sure it does not stand for East Doncaster Secondary College or East Duplin Soccer Club. I’m betting it stands for Employment and Social Development Canada. All that led to visiting the EDSC website and trying (unsuccessfully) to find the report or data sheet used to supply the figures RBC quoted in their report and news release.

Also, I’m not sure who came up with, or how they developed, the ‘crafters’, ‘doers’, ‘technicians’, etc. categories.
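For what it’s worth, here is one plausible way such clusters could be built; this is purely my speculation, not RBC’s method, and the occupations and skill scores below are invented for illustration. Score each occupation on a few skill axes (technical, management, emotional intelligence), then pair or group occupations whose skill profiles sit closest together, which is exactly the kind of ‘easy transition’ the report’s cluster model claims to illustrate:

```python
# Speculative sketch: occupations as skill vectors, with 'easy transitions'
# found by nearest-neighbour distance. All scores are invented (0-1 scale:
# technical, management, emotional intelligence); RBC doesn't publish its method.
import math

occupations = {
    "greenhouse worker": (0.2, 0.1, 0.3),
    "crane operator":    (0.3, 0.1, 0.2),
    "car mechanic":      (0.8, 0.2, 0.2),
    "electrician":       (0.9, 0.2, 0.3),
    "dental assistant":  (0.4, 0.2, 0.9),
    "graphic designer":  (0.5, 0.3, 0.8),
}

def nearest_neighbour(name):
    """Closest occupation by skill profile: the candidate 'transition'."""
    here = occupations[name]
    others = {k: v for k, v in occupations.items() if k != name}
    return min(others, key=lambda k: math.dist(here, others[k]))

print(nearest_neighbour("greenhouse worker"))  # crane operator
print(nearest_neighbour("car mechanic"))       # electrician
```

With invented numbers like these, the sketch happily reproduces the report’s greenhouse worker → crane operator and car mechanic → electrician pairings, which is also a reminder of how much the output of any such model depends on who scored the skills in the first place.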

Here’s more from p. 2 of their report,

CANADA, WE HAVE A PROBLEM. [emphasis mine] We’re hurtling towards the 2020s with perfect hindsight, not seeing what’s clearly before us. The next generation is entering the workforce at a time of profound economic, social and technological change. We know it. [emphasis mine] Canada’s youth know it. And we’re not doing enough about it.

RBC wants to change the conversation, [emphasis mine] to help Canadian youth own the 2020s — and beyond. RBC Future Launch is our 10-year commitment to that cause, to help young people prepare for and navigate a new world of work that, we believe, will fundamentally reshape Canada. For the better. If we get a few big things right.

This report, based on a year-long research project, is designed to help that conversation. Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.

We discovered a quiet crisis — of recent graduates who are overqualified for the jobs they’re in, of unemployed youth who weren’t trained for the jobs that are out there, and young Canadians everywhere who feel they aren’t ready for the future of work.

Sarcasm ahead

There’s nothing like starting your remarks with a paraphrased quote from a US movie about the Apollo 13 spacecraft crisis as in, “Houston, we have a problem.” I’ve always preferred Trudeau (senior) and his comment about ‘keeping our noses out of the nation’s bedrooms’. It’s not applicable but it’s more amusing and a Canadian quote to boot.

So, we know we’re having a crisis which we know about but RBC wants to tell us about it anyway (?) and RBC wants to ‘change the conversation’. OK. So how does presenting the RBC Future Launch change the conversation? Especially in light of the fact that the conversation has already been held: “a year-long research project … Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.” Is the proposed change something along the lines of ‘Don’t worry, be happy; RBC has six categories (Doers, Crafters, Technicians, Facilitators, Providers, Solvers) for you’? (Yes, for those who recognized it, I’m referencing Bobby McFerrin’s hit song, Don’t Worry, Be Happy.)

Also, what data did RBC collect and how did they collect it? Could Facebook and other forms of social media have been involved? (My March 29, 2018 posting mentions the latest Facebook data scandal; scroll down about 80% of the way.)

These are the people leading the way and ‘changing the conversation’, as it were, and they can’t present logical, coherent points. What kind of conversation could they possibly have with youth (or anyone else for that matter)?

And, if part of the problem is that employers are not planning for the future, how does Future Launch ‘change that part of the conversation’?

RBC Future Launch

Days after the report’s release, there’s the Future Launch announcement in an RBC March 28, 2018 news release,

TORONTO, March 28, 2017 – In an era of unprecedented economic and technological change, RBC is today unveiling its largest-ever commitment to Canada’s future. RBC Future Launch is a 10-year, $500-million initiative to help young people gain access and opportunity to the skills, job experience and career networks needed for the future world of work.

“Tomorrow’s prosperity will depend on today’s young people and their ability to take on a future that’s equally inspiring and unnerving,” said Dave McKay, RBC president and CEO. “We’re sitting at an intersection of history, as a massive generational shift and unprecedented technological revolution come together. And we need to ensure young Canadians are prepared to help take us forward.”

Future Launch is a core part of RBC’s celebration of Canada 150, and is the result of two years of conversations with young Canadians from coast to coast to coast.

“Young people – Canada’s future – have the confidence, optimism and inspiration to reimagine the way our country works,” McKay said. “They just need access to the capabilities and connections to make the 21st century, and their place in it, all it should be.”

Working together with young people, RBC will bring community leaders, industry experts, governments, educators and employers to help design solutions and harness resources for young Canadians to chart a more prosperous and inclusive future.

Over 10 years, RBC Future Launch will invest in areas that help young people learn skills, experience jobs, share knowledge and build resilience. The initiative will address the following critical gaps:

  • A lack of relevant experience. Too many young Canadians miss critical early opportunities because they’re stuck in a cycle of “no experience, no job.” According to the consulting firm McKinsey & Co., 83 per cent of educators believe youth are prepared for the workforce, but only 34 per cent of employers and 44 per cent of young people agree. RBC will continue to help educators and employers develop quality work-integrated learning programs to build a more dynamic bridge between school and work.
  • A lack of relevant skills. Increasingly, young people entering the workforce require a complex set of technical, entrepreneurial and social skills that cannot be attained solely through a formal education. A 2016 report from the World Economic Forum states that by 2020, more than a third of the desired core skill-sets of most occupations will be different from today — if that job still exists. RBC will help ensure young Canadians gain the skills, from critical thinking to coding to creative design, that will help them integrate into the workplace of today, and be more competitive for the jobs of tomorrow.
  • A lack of knowledge networks. Young people are at a disadvantage in the job market if they don’t have an opportunity to learn from others and discover the realities of jobs they’re considering. Many have told RBC that there isn’t enough information on the spectrum of jobs that are available. From social networks to mentoring programs, RBC will harness the vast knowledge and goodwill of Canadians in guiding young people to the opportunities that exist and will exist, across Canada.
  • A lack of future readiness. Many young Canadians know their future will be defined by disruption. A new report, Future-proof: Preparing young Canadians for the future of work, by the Brookfield Institute for Innovation + Entrepreneurship, found that 42 per cent of the Canadian labour force is at a high risk of being affected by automation in the next 10 to 20 years. Young Canadians are okay with that: they want to be the disruptors and make the future workforce more creative and productive. RBC will help to create opportunities, through our education system, workplaces and communities at large to help young Canadians retool, rethink and rebuild as the age of disruption takes hold.

By helping young people unlock their potential and launch their careers, RBC can assist them with building a stronger future for themselves, and a more prosperous Canada for all. RBC created The Launching Careers Playbook, an interactive, digital resource focused on enabling young people to reach their full potential through three distinct modules: I am starting my career; I manage interns and I create internship programs. The Playbook shares the design principles, practices, and learnings captured from the RBC Career Launch Program over three years, as well as the research and feedback RBC has received from young people and their managers.

More information on RBC Future Launch can be found at www.rbc.com/futurelaunch.

Weirdly, this news release is the only document which gives you sources for some of RBC’s information. If you should be inclined, you can check the original reports as cited in the news release and determine if you agree with the conclusions the RBC people drew from them.

Cynicism ahead

They are planning to change the conversation, are they? I can’t help wondering what return they’re (RBC) expecting to make on their investment ($500M over 10 years). The RBC brand is prominently displayed not only on the launch page but in several of the subtopics listed on the page.

There appears to be some very good and helpful information although much of it leads you to using a bank for one reason or another. For example, if you’re planning to become an entrepreneur (and there is serious pressure from the government of Canada on this generation to become precisely that), then it’s very handy that you have easy access to RBC from any of the Future Launch pages. As well, you can easily apply for a job at or get a loan from RBC after you’ve done some of the exercises on the website and possibly given RBC a lot of data about yourself.

For anyone who believes I’m being harsh about the bank, you might want to check out a March 15, 2017 article by Erica Johnson for the Canadian Broadcasting Corporation’s Go Public website. It highlights just how ruthless Canadian banks can be,

Employees from all five of Canada’s big banks have flooded Go Public with stories of how they feel pressured to upsell, trick and even lie to customers to meet unrealistic sales targets and keep their jobs.

The deluge is fuelling multiple calls for a parliamentary inquiry, even as the banks claim they’re acting in customers’ best interests.

In nearly 1,000 emails, employees from RBC, BMO, CIBC, TD and Scotiabank locations across Canada describe the pressures to hit targets that are monitored weekly, daily and in some cases hourly.

“Management is down your throat all the time,” said a Scotiabank financial adviser. “They want you to hit your numbers and it doesn’t matter how.”

CBC has agreed to protect their identities because the workers are concerned about current and future employment.

An RBC teller from Thunder Bay, Ont., said even when customers don’t need or want anything, “we need to upgrade their Visa card, increase their Visa limits or get them to open up a credit line.”

“It’s not what’s important to our clients anymore,” she said. “The bank wants more and more money. And it’s leading everyone into debt.”

A CIBC teller said, “I am expected to aggressively sell products, especially Visa. Hit those targets, who cares if it’s hurting customers.”

….

Many bank employees described pressure tactics used by managers to try to increase sales.

An RBC certified financial planner in Guelph, Ont., said she’s been threatened with pay cuts and losing her job if she doesn’t upsell enough customers.

“Managers belittle you,” she said. “We get weekly emails that highlight in red the people who are not hitting those sales targets. It’s bullying.”

Some TD Bank employees told CBC’s Go Public they felt they had to break the law to keep their jobs. (Aaron Harris/Reuters)

Employees at several RBC branches in Calgary said there are white boards posted in the staff room that list which financial advisers are meeting their sales targets and which advisers are coming up short.

A CIBC small business associate who quit in January after nine years on the job said her district branch manager wasn’t pleased with her sales results when she was pregnant.

While working in Waterloo, Ont., she says her manager also instructed staff to tell all new international students looking to open a chequing account that they had to open a “student package,” which also included a savings account, credit card and overdraft.

“That is unfair and not the law, but we were told to do it for all of them.”

Go Public requested interviews with the CEOs of the five big banks — BMO, CIBC, RBC, Scotiabank and TD — but all declined.

If you have the time, it’s worth reading Johnson’s article in its entirety as it provides some fascinating insight into Canadian banking practices.

Final comments and an actual ‘conversation’ about the future of work

I’m torn. It’s good to see an attempt to grapple with the extraordinary changes we are likely to see in the not-so-distant future. At the same time, it’s hard to believe that this Future Launch initiative is anything other than a self-interested means of profiting from fears about the future and a massive public relations campaign designed to engender goodwill. Doubly so given the very bad publicity the banks, including RBC, garnered last year (2017), as mentioned in the Johnson article.

Also, RBC and who knows how many other vested interests appear to have gathered data and information which they’ve used to draw any number of conclusions. First, I can’t find any information about what data RBC is gathering, who else might have access to it, and what plans, if any, they have to use it. Second, RBC seems to have predetermined how this ‘future of work’ conversation needs to be changed.

I suggest treading as lightly as possible and keeping in mind other ‘conversations’ are possible. For example, Mike Masnick at Techdirt has an April 3, 2018 posting about a new ‘future of work’ initiative,

For the past few years, there have been plenty of discussions about “the future of work,” but they tend to fall into one of two camps. You have the pessimists, who insist that the coming changes wrought by automation and artificial intelligence will lead to fewer and fewer jobs, as all of the jobs of today are automated out of existence. Then, there are the optimists who point to basically every single past similar prediction of doom and gloom due to innovation, which have always turned out to be incorrect. People in this camp point out that technology is more likely to augment than replace human-based work, and vaguely insist that “the jobs will come.” Whether you fall into one of those two camps — or somewhere in between or somewhere else entirely — one thing I’d hope most people can agree on is that the future of work will be… different.

Separately, we’re also living in an age where it is increasingly clear that those in and around the technology industry must take more responsibility in thinking through the possible consequences of the innovations they’re bringing to life, and exploring ways to minimize the harmful results (and hopefully maximizing the beneficial ones).

That brings us to the project we’re announcing today, Working Futures, which is an attempt to explore what the future of work might really look like in the next ten to fifteen years. We’re doing this project in partnership with two organizations that we’ve worked with multiples times in the past: Scout.ai and R Street.

….

The key point of this project: rather than just worry about the bad stuff or hand-wave around the idea of good stuff magically appearing, we want to really dig in — figure out what new jobs may actually appear, look into what benefits may accrue as well as what harms may be dished out — and see if there are ways to minimize the negative consequences, while pushing the world towards the beneficial consequences.

To do that, we’re kicking off a variation on the classic concept of scenario planning, bringing together a wide variety of individuals with different backgrounds, perspectives and ideas to run through a fun and creative exercise to imagine the future, while staying based in reality. We’re adding in some fun game-like mechanisms to push people to think about where the future might head. We’re also updating the output side of traditional scenario planning by involving science fiction authors, who obviously have a long history of thinking up the future, and who will participate in this process and help to craft short stories out of the scenarios we build, making them entertaining, readable and perhaps a little less “wonky” than the output of more traditional scenario plans.

There you have it; the Royal Bank is changing the conversation and Techdirt is inviting you to join in scenario planning and more.

World Science Festival May 29 – June 3, 2018 in New York City

I haven’t featured the festival since 2014, having forgotten all about it, but I received (via email) an April 30, 2018 news release announcing the latest iteration,

ANNOUNCING WORLD SCIENCE FESTIVAL NEW YORK CITY

MAY 29 THROUGH JUNE 3, 2018

OVER 70 INSPIRING SCIENCE-THEMED EVENTS EXPLORE THE VERY EDGE OF KNOWLEDGE

Over six extraordinary days in New York City, from May 29 through June 3, 2018, the world’s leading scientists will explore the very edge of knowledge and share their insights with the public. Festival goers of all ages can experience vibrant discussions and debates, evocative performances and films, world-changing research updates, thought-provoking town hall gatherings and fireside chats, hands-on experiments and interactive outdoor explorations. It’s an action adventure for your mind!

See the full list of programs here:
https://www.worldsciencefestival.com/festival/world-science-festival-2018/

This year will highlight some of the incredible achievements of Women in Science, celebrating and exploring their impact on the history and future of scientific discovery. Perennial favorites will also return in full force, including WSF main stage Big Ideas programs, the Flame Challenge, Cool Jobs, and FREE outdoor events.

The World Science Festival makes the esoteric understandable and the familiar fascinating. It has drawn more than 2.5 million participants since its launch in 2008, with millions more experiencing the programs online.

THE 2018 WORLD SCIENCE FESTIVAL IS NOT TO BE MISSED, SO MARK YOUR CALENDAR AND SAVE THE DATES!

Here are a few items from the 2018 Festival’s program page,

Thursday, May 31, 2018

6:00 pm – 9:00 pm

American Museum of Natural History

Host: Faith Salie

How deep is the ocean? Why do whales sing? How far is 20,000 leagues—and what is a league anyway? Raise a glass and take a deep dive into the foamy waters of oceanic arcana under the blue whale in the Museum’s Hall of Ocean Life. Comedian and journalist Faith Salie will regale you with a pub-style night of trivia questions, physical challenges, and hilarity to celebrate the Museum’s newest temporary exhibition, Unseen Oceans. Don’t worry. When the going gets tough, we won’t let you drown. Teams of top scientists—and even a surprise guest or two—will be standing by to assist you. Program includes one free drink and private access to the special exhibition Unseen Oceans. Special exhibition access is available to ticket holders beginning one hour before the program, from 6–7pm.

Learn More

Buy Tickets

Thursday, May 31, 2018

8:00 pm – 9:30 pm

Gerald W. Lynch Theater at John Jay College

Participants: Alvaro Pascual-Leone, Nim Tottenham, Carla Shatz, and others

What if your brain at 77 were as plastic as it was at 7? What if you could learn Mandarin with the ease of a toddler or play Rachmaninoff without breaking a sweat? A growing understanding of neuroplasticity suggests these fantasies could one day become reality. Neuroplasticity may also be the key to solving diseases like Alzheimer’s, depression, and autism. This program will guide you through the intricate neural pathways inside our skulls, as leading neuroscientists discuss their most recent findings and both the tantalizing possibilities and pitfalls for our future cognitive selves.

The Big Ideas Series is supported in part by the John Templeton Foundation. 

Learn More

Buy Tickets

Friday, June 1, 2018

8:00 pm – 9:30 pm

NYU Skirball Center for the Performing Arts

Participants: Yann LeCun, Susan Schneider, Max Tegmark, and others

“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Elon Musk called A.I. “a fundamental risk to the existence of civilization.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, to wondrous results—where do we draw moral and computational lines? Leading specialists in A.I, neuroscience, and philosophy will tackle the very questions that may define the future of humanity.

The Big Ideas Series is supported in part by the John Templeton Foundation. 

Learn More

Buy Tickets

Friday, June 1, 2018

8:00 pm – 9:30 pm

Gerald W. Lynch Theater at John Jay College

Participants: Marcela Carena, Janet Conrad, Michael Doser, Hitoshi Murayama, Neil Turok

“If I had a world of my own,” said the Mad Hatter, “nothing would be what it is, because everything would be what it isn’t. And contrary wise, what is, it wouldn’t be.” Nonsensical as this may sound, it comes close to describing an interesting paradox: You exist. You shouldn’t. Stars and galaxies and planets exist. They shouldn’t. The nascent universe contained equal parts matter and antimatter that should have instantly obliterated each other, turning the Big Bang into the Big Fizzle. And yet, here we are: flesh, blood, stars, moons, sky. Why? Come join us as we dive deep down the rabbit hole of solving the mystery of the missing antimatter.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

10:00 am – 11:00 am

Museum of the City of New York

Participants: Kubi Ackerman

What makes a city a city? How do you build buildings, plan streets, and design parks with humans and their needs in mind? Join architect and Future Lab Project Director, Kubi Ackerman, on an exploration in which you’ll venture outside to examine New York City anew, seeing it through the eyes of a visionary museum architect, and then head to the Future City Lab’s awesome interactive space where you will design your own park. This is a student-only program for kids currently enrolled in the 4th grade – 8th grade. Parents/Guardians should drop off their children for this event.

Supported by the Bezos Family Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

11:00 am – 12:30 pm

NYU Global Center, Grand Hall

Kerouac called it “the only truth.” Shakespeare called it “the food of love.” Maya Angelou called it “my refuge.” And now scientists are finally discovering what these thinkers, musicians, or even any of us with a Spotify account and a set of headphones could have told you on instinct: music lights up multiple corners of the brain, strengthening our neural networks, firing up memory and emotion, and showing us what it means to be human. In fact, music is as essential to being human as language and may even predate it. Can music also repair broken networks, restore memory, and strengthen the brain? Join us as we speak with neuroscientists and other experts in the fields of music and the brain as we pluck the notes of these fascinating phenomena.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

3:00 pm – 4:00 pm

NYU Skirball Center for the Performing Arts

Moderator: “Science Bob” Pflugfelder

Participants: William Clark, Matt Lanier, Michael Meacham, Casie Parish Fisher, Mike Ressler

Most people think of scientists as people who work in funny-smelling labs filled with strange equipment. But there are lots of scientists whose jobs often take them out of the lab, into the world, and beyond. Come join some of the coolest of them in Cool Jobs. You’ll get to meet a forensic scientist, a venomous snake-loving herpetologist, a NASA engineer who lands spacecrafts on Mars, and inventors who are changing the future of sports.

Learn More

Buy Tickets

Saturday, June 2, 2018

4:00 pm – 5:30 pm

NYU Global Center, Grand Hall

“We can rebuild him. We have the technology,” began the opening sequence of the hugely popular 70’s TV show, “The Six Million Dollar Man.” Forty-five years later, how close are we, in reality, to that sci-fi fantasy? More thornily, now that artificial intelligence may soon pass human intelligence, and the merging of human with machine is potentially on the table, what will it then mean to “be human”? Join us for an important discussion with scientists, technologists and ethicists about the path toward superhumanism and the quest for immortality.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

4:00 pm – 5:30 pm

Gerald W. Lynch Theater at John Jay College

Participants Brett Frischmann, Tim Hwang, Aviv Ovadya, Meredith Whittaker

“Move fast and break things,” went the Silicon Valley rallying cry, and for a long time we cheered along. Born in dorm rooms and garages, implemented by iconoclasts in hoodies, Big Tech, in its infancy, spouted noble goals of bringing us closer. But now, in its adolescence, it threatens to tear us apart. Some worry about an “Infocalypse”: a dystopian disruption so deep and dire we will no longer trust anything we see, hear, or read. Is this pessimistic vision of the future real or hyperbole? Is it time for tech to slow down, grow up, and stop breaking things? Big names in Big Tech will offer big thoughts on this massive societal shift, its terrifying pitfalls, and practical solutions both for ourselves and for future generations.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

This looks like an exciting lineup and there’s a lot more for you to see on the 2018 Festival’s program page. You may also want to take a look at the list of participants, which features some expected specialty speakers, an architect, a mathematician, a neuroscientist, and some unexpected names such as Kareem Abdul-Jabbar, whom I know as a basketball player and, currently, a contestant on Dancing with the Stars. It brings to mind that Walt Whitman quote, “I am large, I contain multitudes” (from the Song of Myself Wikipedia entry).

If you’re going, there are free events, and note that a few of the events are already sold out.

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather, by their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.”

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended). There’s even a preview/trailer.

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future although we don’t need to start ‘battening down the hatches’ yet.
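As an aside, the ‘predictive text’ technique Botnik describes can be sketched, very loosely, as a model that learns which words tend to follow which in a corpus, then offers suggestions for human writers to assemble. Here is a minimal illustration in Python; this is not Botnik’s actual Voicebox software, and the tiny corpus and function names are my own:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; Voicebox was reportedly trained on the entire
# collected works of the Brothers Grimm (this sample is illustrative only).
corpus = (
    "once upon a time there was a golden horse "
    "once upon a time there was a beautiful princess "
    "the princess danced for joy in the village"
)

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def suggest(model, word, k=3, seed=0):
    """Offer up to k next-word suggestions, like a predictive keyboard."""
    rng = random.Random(seed)
    candidates = list(model.get(word, []))  # copy so the model isn't mutated
    rng.shuffle(candidates)
    return candidates[:k]

model = build_model(corpus)
print(suggest(model, "princess"))  # every suggestion was observed in the corpus
```

A real system would train on a far larger corpus and rank suggestions by frequency; the human writer, as the article notes, still does the actual composing.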

You can find Calm here.

You can find Botnik here and Botnik Studios here.


AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is an AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelors, worked at a flight simulator company, and then eventually obtained his masters and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she leads the newly formed research team since October 2017.  She got her BSc degree in computer science from the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, one of the current darlings of AI, machine learning, was developed at the University of Toronto, which houses the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy and the Vector Institute (more about that in my March 31, 2017 posting).

Enough with my musing: For those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as to the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about the parent company in the UK (the most I dug up about their Montréal office was this job posting); and, finally, Reimagine.AI is here.

Less is more—a superconducting synapse

It seems the US National Institute of Standards and Technology (NIST) is more deeply invested in developing artificial brains than I had realized (See: April 17, 2018 posting). A January 26, 2018 NIST news release on EurekAlert describes the organization’s latest foray into the field,

Researchers at the National Institute of Standards and Technology (NIST) have built a superconducting switch that “learns” like a biological system and could connect processors and store memories in future computers operating like the human brain.

The NIST switch, described in Science Advances, is called a synapse, like its biological counterpart, and it supplies a missing piece for so-called neuromorphic computers. Envisioned as a new type of artificial intelligence, such computers could boost perception and decision-making for applications such as self-driving cars and cancer diagnosis.

A synapse is a connection or switch between two brain cells. NIST’s artificial synapse–a squat metallic cylinder 10 micrometers in diameter–is like the real thing because it can process incoming electrical spikes to customize spiking output signals. This processing is based on a flexible internal design that can be tuned by experience or its environment. The more firing between cells or processors, the stronger the connection. Both the real and artificial synapses can thus maintain old circuits and create new ones. Even better than the real thing, the NIST synapse can fire much faster than the human brain–1 billion times per second, compared to a brain cell’s 50 times per second–using just a whiff of energy, about one ten-thousandth as much as a human synapse. In technical terms, the spiking energy is less than 1 attojoule, lower than the background energy at room temperature and on a par with the chemical energy bonding two atoms in a molecule.
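For the numerically inclined, the speed and energy figures quoted above are easy to sanity-check. This is just back-of-envelope arithmetic on the numbers from the news release, nothing more:

```python
# Back-of-envelope check of the figures quoted in the news release
# (all values are taken from the release, not independently measured).
NIST_RATE_HZ = 1e9        # NIST synapse firing rate: 1 billion spikes/s
BRAIN_RATE_HZ = 50        # typical brain cell firing rate: ~50 spikes/s
NIST_ENERGY_J = 1e-18     # spiking energy: less than 1 attojoule
HUMAN_FACTOR = 1e4        # human synapse uses roughly 10,000x more energy

speed_advantage = NIST_RATE_HZ / BRAIN_RATE_HZ
human_energy_j = NIST_ENERGY_J * HUMAN_FACTOR

print(f"Speed advantage: {speed_advantage:.0e}x")                # 2e+07x
print(f"Implied human synapse energy: ~{human_energy_j:.0e} J")  # ~1e-14 J
```

In other words, the quoted rates work out to a roughly 20-million-fold speed advantage, and the implied human synapse energy lands around 10 femtojoules.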

“The NIST synapse has lower energy needs than the human synapse, and we don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.

The new synapse would be used in neuromorphic computers made of superconducting components, which can transmit electricity without resistance, and therefore, would be more efficient than other designs based on semiconductors or software. Data would be transmitted, processed and stored in units of magnetic flux. Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses–a crucial piece–have been missing.

The brain is especially powerful for tasks like context recognition because it processes data both in sequence and simultaneously and stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.

The NIST synapse is a Josephson junction, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced. The synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters of manganese in a silicon matrix.

The nanoclusters–about 20,000 per square micrometer–act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner.

“These are customized Josephson junctions,” Schneider said. “We can control the number of nanoclusters pointing in the same direction, which affects the superconducting properties of the junction.”

The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering, that is, the number of nanoclusters pointing in the same direction. This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes.

The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
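The logic of the device, as described above, lends itself to a toy model. To be clear, this is my own illustrative sketch of the behaviour the release describes (pulses in a field raise the magnetic ordering and lower the critical current; pulses without a field reverse it), not the device physics from the paper:

```python
# Toy model (illustration only, not the physics from the paper):
# critical current falls as the fraction of aligned nanoclusters
# ("order") rises; pulses raise or lower that order depending on
# whether a magnetic field is applied.
class ToySynapse:
    def __init__(self, ic_max=1.0, ic_min=0.2):
        self.order = 0.0          # fraction of aligned nanoclusters, 0..1
        self.ic_max = ic_max      # critical current when fully disordered
        self.ic_min = ic_min      # critical current when fully aligned

    def critical_current(self):
        # interpolate between the disordered and fully aligned limits
        return self.ic_max - (self.ic_max - self.ic_min) * self.order

    def pulse(self, with_field, step=0.1):
        # a pulse in a field increases ordering; without a field it decreases
        delta = step if with_field else -step
        self.order = min(1.0, max(0.0, self.order + delta))

s = ToySynapse()
for _ in range(5):
    s.pulse(with_field=True)
print(round(s.critical_current(), 2))   # ordering raised -> lower Ic (0.6)
for _ in range(5):
    s.pulse(with_field=False)
print(round(s.critical_current(), 2))   # ordering lowered -> Ic recovers (1.0)
```

The reversibility is the point: the same device can be "trained" up and down, which is what makes it behave like a tunable synaptic weight.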

Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.

Crucially, the synapses can be stacked in three dimensions (3-D) to make large systems that could be used for computing. NIST researchers created a circuit model to simulate how such a system would operate.

The NIST synapse’s combination of small size, superfast spiking signals, low energy needs and 3-D stacking capability could provide the means for a far more complex neuromorphic system than has been demonstrated with other technologies, according to the paper.

NIST has prepared an animation illustrating the research,

Caption: This is an animation of how NIST’s artificial synapse works. Credit: Sean Kelley/NIST

Here’s a link to and a citation for the paper,

Ultralow power artificial synapses using nanotextured magnetic Josephson junctions by Michael L. Schneider, Christine A. Donnelly, Stephen E. Russek, Burm Baek, Matthew R. Pufall, Peter F. Hopkins, Paul D. Dresselhaus, Samuel P. Benz, and William H. Rippard. Science Advances 26 Jan 2018: Vol. 4, no. 1, e1701329 DOI: 10.1126/sciadv.1701329

This paper is open access.

Samuel K. Moore in a January 26, 2018 posting on the Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers] website) describes the research and adds a few technical explanations such as this about the Josephson junction,

In a magnetic Josephson junction, that “weak link” is magnetic. The higher the magnetic field, the lower the critical current needed to produce voltage spikes. In the device Schneider and his colleagues designed, the magnetic field is caused by 20,000 or so nanometer-scale clusters of manganese embedded in silicon. …

Moore also provides some additional links including this one to his November 29, 2017 posting where he describes four new approaches to computing including quantum computing and neuromorphic (brain-like) computing.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
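The binary-versus-analog contrast in that paragraph can be made concrete with a few lines of code. This is purely illustrative (my example, not MIT's): a digital gate sees strictly 0/1 inputs, while an analog "neuron" accumulates a gradient of synaptic weights before deciding whether to fire:

```python
# Sketch of the binary-vs-analog contrast (illustrative only).
def digital_and(a, b):
    # conventional digital logic: inputs and output are strictly 0 or 1
    return int(bool(a) and bool(b))

def analog_neuron(inputs, weights, threshold=1.0):
    # neuromorphic style: weights are continuous "strengths"; the
    # neuron fires only when the weighted sum crosses a threshold
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(digital_and(1, 1))                            # 1
print(analog_neuron([1, 1, 0], [0.6, 0.5, 0.9]))    # 1 (0.6 + 0.5 >= 1.0)
print(analog_neuron([1, 0, 0], [0.6, 0.5, 0.9]))    # 0 (0.6 < 1.0)
```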

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chua (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwritten recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
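To get a feel for why the uniformity numbers (about 4 percent between synapses, 1 percent between cycles) matter for that kind of simulation, here's my own toy version of the idea; it is not the authors' code, and the input, weights, and noise model are all made up for illustration:

```python
import random

# Toy version of the simulation idea (my sketch, not the authors' code):
# a three-layer network whose weights are perturbed by roughly the
# reported ~4% device-to-device variation, to see whether the
# classification survives the non-ideal synapses.
random.seed(0)

def forward(x, w1, w2):
    # input -> hidden -> output, with a simple ReLU hidden layer
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, row))) for row in w1]
    return [sum(hi * w for hi, w in zip(hidden, row)) for row in w2]

def perturb(weights, spread):
    # model synapse variation as multiplicative Gaussian noise
    return [[w * random.gauss(1.0, spread) for w in row] for row in weights]

x = [0.5, 1.0, 0.2]                        # a made-up input "image"
w1 = [[0.4, -0.2, 0.1], [0.3, 0.8, -0.5]]  # ideal input->hidden weights
w2 = [[0.9, -0.3], [-0.6, 0.7]]            # ideal hidden->output weights

ideal = forward(x, w1, w2)
noisy = forward(x, perturb(w1, 0.04), perturb(w2, 0.04))
winner_ideal = ideal.index(max(ideal))
winner_noisy = noisy.index(max(noisy))
print(winner_ideal == winner_noisy)   # 4% variation rarely flips the class
```

The takeaway is the same one Kim's team is making: if each hardware synapse deviates only a few percent from its ideal weight, the network's decisions mostly survive; the much larger variation of amorphous-material synapses is what wrecks accuracy.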

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it).

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses (Hedy among them) who found, and still find, that work is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it all comes down to the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically) and from which areas of specialization most of their answers came. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-70 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
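The report doesn't spell out the Growth Index formula, but from its description (a country's growth in research output, normalized by the world growth rate) a plausible reading looks like this. The function and the output counts below are my assumptions, chosen only so the result echoes the reported scores:

```python
# Plausible reading of the Growth Index (GI) as the report describes it
# (an assumption on my part -- the report does not give the formula):
# a country's 2003->2014 output growth divided by world output growth.
def growth_index(country_2003, country_2014, world_2003, world_2014):
    country_growth = country_2014 / country_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Made-up publication counts: a country that tripled its output while
# the world doubled gets a GI of 1.5, i.e., 50% above the world average.
gi = growth_index(100, 300, 1000, 2000)
print(round(gi, 2))   # 1.5
```

A GI above 1.0 means faster-than-world growth; Canada's reported 0.88 means our output grew, but more slowly than the world average.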

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history) and, later, to GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period; when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, it didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only kind of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
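The report doesn't give the exact formula behind those flow numbers, but its wording (a deficit of Canadian-owned versus Canadian-invented patents) suggests a simple ratio. The function and the counts below are my own illustrative assumptions, picked only to reproduce the reported −0.26:

```python
# Illustrative reading of "patent flow" (an assumption on my part --
# the report does not spell out the formula): the relative surplus or
# deficit of Canadian-owned patents versus Canadian-invented patents.
def patent_flow(owned_by_canadians, invented_by_canadians):
    return (owned_by_canadians - invented_by_canadians) / invented_by_canadians

# Made-up counts that reproduce the reported 2014 figure:
print(round(patent_flow(7400, 10000), 2))   # -0.26, a 26% ownership deficit
```

On this reading, a flow of 0.0 would mean Canadian institutions own exactly as many patents as Canadians invent, and the positive 0.6 reported for Digital Communication in 2003 would mean a substantial ownership surplus.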

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier they managed to sneak in a few items.  One of the strategies they used was putting special inserts into text boxes including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here, and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has somehow been folded into the term artificial intelligence, although sometimes it’s the reverse, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of treating the terms as synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention, but they seem to set them aside and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating, in-depth look at future possibilities.
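For readers who’d rather not “play computer” in their heads, the procedure at the heart of the Watts-Strogatz paper is short enough to sketch in code. Here’s a minimal Python sketch of the standard small-world construction (start with a ring lattice, then rewire each edge with some probability); this is my own illustration of the published algorithm, not Victor’s interactive redesign, and the function and parameter names are mine,

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Sketch of the Watts-Strogatz small-world model.

    n: number of nodes; k: each node starts linked to its k nearest
    ring neighbours (k should be even and much smaller than n);
    p: probability of rewiring each edge. Returns a dict mapping
    each node to the set of its neighbours.
    """
    rng = random.Random(seed)

    # Step 1: build a ring lattice -- node i connects to
    # i+1 .. i+k/2 (mod n), giving every node degree k.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)

    # Step 2: visit each original "forward" edge once and, with
    # probability p, rewire its far end to a random node,
    # avoiding self-loops and duplicate edges.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            old = (i + j) % n
            if rng.random() < p and old in adj[i]:
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].discard(old)
                adj[old].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj
```

With p = 0 you get the pure ring lattice; small p values produce the “small-world” regime the paper is famous for, where a handful of shortcuts sharply reduce path lengths while most local clustering survives.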

Subregional R&D

I didn’t find this quite as compelling as the last time, perhaps because there’s less information; I believe the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that British Columbia (BC) has leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos to their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work with and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that their signature work laid the foundation for work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.