Category Archives: robots

Dancing with a robot

Dancing with Baryshnibot. Alice Williamson, Courtesy Merritt Moore

Dancing robots usually perform to pop music, but every once in a while there’s a move toward classical music and ballet; see, for example, my June 8, 2011 posting, ‘Robot swan dances to Tchaikovsky’s Swan Lake’. Unlike the dancing robot in the picture above, that robot swan danced alone. (You can still see the robot’s Swan Lake performance in the video embedded in the 2011 posting.)

I don’t usually associate dance magazines with robots, but Chava Pearl Lansky’s Nov. 18, 2020 article about dancer/physicist Merritt Moore and her work with Baryshnibot appears in the ballet magazine Pointe (Note: Links have been removed),

When the world went into lockdown last March [2020], most dancers despaired. But not Merritt Moore. The Los Angeles native, who lives in London and has danced with Norwegian National Ballet, English National Ballet and Boston Ballet, holds a PhD in atomic and laser physics from the University of Oxford. A few weeks into the coronavirus pandemic, she came up with a solution for having to train and work alone: robots.

Moore had just come out of a six-week residency at Harvard ArtLab focused on the intersection between dance and robotics. “I knew I needed something to look forward to, and thought how bizarre I’d just been working with robots,” she says. “Who knew they’d be my only potential dance partners for a really long time?” She reached out to Universal Robotics and asked them to collaborate, and they agreed to send her a robot to experiment with.

Baryshnibot is an industrial robot normally used for automation and manufacturing. “It does not look impressive at all,” says Moore. “But there’s so much potential for different movement.” Creating dances for a robot, she says, is like an elaborate puzzle: “I have to figure out how to make this six-jointed rod emulate the dance moves of a head, two arms, a body and two legs.”

Moore started with the basics. She’d learn a simple TikTok dance, and then map the movements into a computer pad attached to the robot. “The 15-second routine will take me five hours-plus to program,” she says. Despite the arduous process, she’s built up to more advanced choreography, and is trying on different dance styles, from ballet to hip hop to salsa. For her newest pas de deux, titled Merritt + Robot, Moore worked with director Conor Gorman and cinematographer Howard Mills to beautifully capture her work with Baryshnibot on film. …

You can find Moore’s and Baryshnibot’s performance video embedded in the Nov. 18, 2020 article.

Reading a virus like a book

Teaching grammar and syntax to artificial intelligence (AI) algorithms (specifically, natural language processing [NLP] algorithms) has helped researchers understand and predict viral mutations more quickly. That speed is especially useful at a time when the Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) seems to be mutating into more easily transmissible variants.

Will Douglas Heaven’s Jan. 14, 2021 article for the Massachusetts Institute of Technology’s MIT Technology Review describes the work that links AI, grammar, and mutating viruses (Note: Links have been removed),

Galileo once observed that nature is written in math. Biology might be written in words. Natural-language processing (NLP) algorithms are now able to generate protein sequences and predict virus mutations, including key changes that help the coronavirus evade the immune system.

The key insight making this possible is that many properties of biological systems can be interpreted in terms of words and sentences. “We’re learning the language of evolution,” says Bonnie Berger, a computational biologist at the Massachusetts Institute of Technology [MIT].

In the last few years, a handful of researchers—including teams from geneticist George Church’s [Professor of Health Sciences and Technology at Harvard University and MIT, etc.] lab and Salesforce [emphasis mine]—have shown that protein sequences and genetic codes can be modeled using NLP techniques.

In a study published in Science today, Berger and her colleagues pull several of these strands together and use NLP to predict mutations that allow viruses to avoid being detected by antibodies in the human immune system, a process known as viral immune escape. The basic idea is that the interpretation of a virus by an immune system is analogous to the interpretation of a sentence by a human.

Berger’s team uses two different linguistic concepts: grammar and semantics (or meaning). The genetic or evolutionary fitness of a virus—characteristics such as how good it is at infecting a host—can be interpreted in terms of grammatical correctness. A successful, infectious virus is grammatically correct; an unsuccessful one is not.

Similarly, mutations of a virus can be interpreted in terms of semantics. Mutations that make a virus appear different to things in its environment—such as changes in its surface proteins that make it invisible to certain antibodies—have altered its meaning. Viruses with different mutations can have different meanings, and a virus with a different meaning may need different antibodies to read it.

Instead of millions of sentences, they trained the NLP model on thousands of genetic sequences taken from three different viruses: 45,000 unique sequences for a strain of influenza, 60,000 for a strain of HIV, and between 3,000 and 4,000 for a strain of Sars-Cov-2, the virus that causes covid-19. “There’s less data for the coronavirus because there’s been less surveillance,” says Brian Hie, a graduate student at MIT, who built the models.

The overall aim of the approach is to identify mutations that might let a virus escape an immune system without making it less infectious—that is, mutations that change a virus’s meaning without making it grammatically incorrect.

But it’s also just the beginning. Treating genetic mutations as changes in meaning could be applied in different ways across biology. “A good analogy can go a long way,” says Bryson [Bryan Bryson, a biologist at MIT].
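To make the grammar/semantics analogy more concrete, here’s a toy sketch of the scoring idea. This is my illustration, not the researchers’ code: the actual study trains a neural language model on tens of thousands of real viral protein sequences, whereas the stand-in ‘model’ below is entirely hypothetical. What it demonstrates is ranking candidate mutations by combining grammaticality (how plausible the mutated sequence looks to the model) with semantic change (how far the mutation moves the sequence in the model’s embedding space).

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

class ToyProteinLM:
    """Stand-in for a trained protein language model (entirely hypothetical)."""

    def log_prob(self, seq):
        # "Grammaticality": how plausible the full sequence looks to the model.
        rng = random.Random(hash(seq) % (2**32))
        return -sum(rng.random() for _ in seq)

    def embed(self, seq):
        # "Semantic" embedding: a vector standing in for the sequence's meaning.
        rng = random.Random(hash(seq) % (2**32))
        return [rng.random() for _ in range(8)]

def semantic_change(model, wild_type, mutant):
    # Distance between embeddings: how much the mutation changed the "meaning".
    a, b = model.embed(wild_type), model.embed(mutant)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_escape_candidates(model, wild_type):
    """Rank single-residue mutations: high semantic change, high grammaticality."""
    candidates = []
    for pos, original in enumerate(wild_type):
        for aa in AMINO_ACIDS:
            if aa == original:
                continue
            mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
            score = semantic_change(model, wild_type, mutant) + model.log_prob(mutant)
            candidates.append((score, pos, aa))
    return sorted(candidates, reverse=True)

model = ToyProteinLM()
for score, pos, aa in rank_escape_candidates(model, "MFVFLVLLPL")[:3]:
    print(f"position {pos}: mutate to {aa} (score {score:.2f})")
```

The intuition carries over directly: a mutation that scores high on both measures is one that changes the virus’s ‘meaning’ (potentially evading antibodies) while remaining ‘grammatical’ (still a viable, infectious virus).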

If you have time, I recommend reading Heaven’s Jan. 14, 2021 article in its entirety as it’s well written with clear explanations. As for the article’s mentions of George Church and Salesforce, the former is to be expected while the latter is a surprise (to me, at least; I speak for no one else).

I find it fascinating that a company which describes itself (from What is Salesforce?) as providing “… customer relationship management, or CRM. It gives all your departments — including marketing, sales, commerce, and service — a shared view of your customers … ” seems to be conducting investigations into one or more areas of biology.

For those who’d like to dive into the science as described in Heaven’s article, here’s a link to and a citation for the paper,

Learning the language of viral evolution and escape by Brian Hie, Ellen D. Zhong, Bonnie Berger, and Bryan Bryson. Science 15 Jan 2021: Vol. 371, Issue 6526, pp. 284-288. DOI: 10.1126/science.abd7331

This paper appears to be open access (or it is, at least for now).

There is also a preprint version available on bioRxiv, which is an open access repository.

Neuromorphic computing with memristors is capable of replicating bio-neural systems

There’s nothing especially new in this latest paper on neuromorphic computing and memristors; however, it does a very good job of describing how these new computers might work. From a Nov. 30, 2020 news item on phys.org (Note: A link has been removed),

In a paper published in Nano, researchers study the role of memristors in neuromorphic computing. This novel fundamental electronic component supports the cloning of bio-neural systems with low cost and power.

Contemporary computing systems are unable to deal with critical challenges of size reduction and computing speed in the big data era. The Von Neumann bottleneck is referred to as a hindrance in data transfer through the bus connecting processor and memory cell. This gives an opportunity to create alternative architectures based on a biological neuron model. Neuromorphic computing is one of such alternative architectures that mimic neuro-biological brain architectures.

A November ??, 2020 World Scientific (Publishing) press release (also on EurekAlert and published on Nov. 27, 2020), which originated the news item, continues with this fine explanation,

The humanoid neural brain system comprises approximately 100 billion neurons and numerous synapses of connectivity. An efficient circuit device is therefore essential for the construction of a neural network that mimics the human brain. The development of a basic electrical component, the memristor, with several distinctive features such as scalability, in-memory processing and CMOS compatibility, has significantly facilitated the implementation of neural network hardware.

The memristor was introduced as a “memory-like resistor” where the background of the applied inputs would alter the resistance status of the device. It is a capable electronic component that can memorise the current in order to effectively reduce the size of the device and increase processing speed in neural networks. Parallel calculations, as in the human nervous system, are made with the support of memristor devices in a novel computing architecture.

System instability and uncertainty have been described as current problems for most memory-based applications. This is the opposite of the biological process. Despite noise, nonlinearity, variability and volatility, biological systems work well. It is still unclear, however, that the effectiveness of biological systems actually depends on these obstacles. Neural modeling is sometimes avoided because it is not easy to model and study. The possibility of exploiting these properties is therefore, of course, a critical path to success in the achievement of artificial and biological systems.
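The defining trick of the memristor, resistance that depends on the history of the current through it, can be made concrete in a few lines of code. Below is a toy simulation of mine, assuming the linear ion-drift model that appears frequently in the memristor literature; the parameter values are purely illustrative and the paper surveys far more sophisticated device models.

```python
import math

# Toy memristor simulation using the linear ion-drift model (illustrative
# parameters only; real devices and the models surveyed in the paper are
# more complex).
R_ON, R_OFF = 100.0, 16000.0  # resistance when fully doped / undoped (ohms)
D = 10e-9                     # device thickness (m)
MU_V = 1e-14                  # dopant drift mobility (m^2 s^-1 V^-1)

w = 0.5 * D   # state variable: width of the doped region
dt = 1e-6     # simulation time step (s)

for step in range(5000):
    t = step * dt
    v = 1.0 * math.sin(2 * math.pi * 1000 * t)  # 1 V, 1 kHz sinusoidal drive
    x = w / D
    resistance = R_ON * x + R_OFF * (1 - x)     # resistance depends on state
    i = v / resistance
    # Current through the device shifts the doped boundary, so the device
    # "remembers" the history of what has flowed through it.
    w += MU_V * (R_ON / D) * i * dt
    w = min(max(w, 0.0), D)  # clamp the state to physical bounds

print(f"final resistance: {resistance:.1f} ohms")
```

The line that updates w is the whole story: past current changes future resistance, which is the property neuromorphic designs exploit to emulate synaptic weights.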

Here’s a link to and a citation for the paper (Note: I usually include the link as part of the paper’s title but couldn’t do it this time),

Memristors: Understanding, Utilization and Upgradation for Neuromorphic Computing [https://www.worldscientific.com/doi/abs/10.1142/S1793292020300054] by Mohanbabu Bharathi, Zhiwei Wang, Bingrui Guo, Babu Balraj, Qiuhong Li, Jianwei Shuai and Donghui Guo. Nano Vol. 15, No. 11, 2030005 (2020) DOI: https://doi.org/10.1142/S1793292020300054 Published: 12 November 2020

This paper is open access.

Artificial Intelligence (AI), musical creativity conference, art creation, ISEA 2020 (Why Sentience?) recap, and more

I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).

The 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020), being held Jan. 7 – 15, 2021

This first excerpt features a conference that’s currently taking place,

Musical Metacreation Tutorial at IJCAI – PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence or IJCAI-PRICAI2020 is being held in 2021!]

As part of the International Joint Conference on Artificial Intelligence (IJCAI – PRICAI 2020, January 7-15), Philippe Pasquier will lead a tutorial on Musical Metacreation. This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges.

The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japan Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:20 pm PST]) and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.

Register for IJCAI – PRICAI 2020

The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.

Two calls for papers

These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,

2nd Conference on AI Music Creativity (MuMe + CSMC)

The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.

The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.

AIMC 2021 Info & CFP

AIART 2021

The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021, aiming to bring forward cutting-edge technologies and the most recent advances in the area of AI art in terms of enabling creation, analysis, and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.

AIART is inviting high-quality papers presenting or addressing issues related to AI art, in a wide range of topics. The submission due date is January 31, 2021, and you can learn about the wide range of topics accepted below:

AIART 2021 Info & CFP

Toying with music

SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,

MMM : Multi-Track Music Machine

One of the latest projects at the Metacreation Lab is MMM: a generative system based on the Transformer architecture, capable of generating multi-track music, developed by Jeff Enns and Philippe Pasquier.

Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.

To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.

Explore MMM: Multi-Track Music Machine
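For anyone wondering what ‘auto-regressive’ means in practice, here’s a bare-bones sketch of the sampling loop such systems use: each new token is drawn from a distribution conditioned on everything generated so far. This is a generic illustration of mine; MMM’s actual Transformer, token vocabulary, and MIDI handling are not shown, and the stand-in ‘model’ below is hypothetical.

```python
import random

# Bare-bones autoregressive sampling loop. A real system like MMM uses a
# trained Transformer over a vocabulary of musical-event tokens; both the
# vocabulary and the "model" below are hypothetical stand-ins.
VOCAB = ["NOTE_ON_60", "NOTE_OFF_60", "NOTE_ON_64", "NOTE_OFF_64",
         "TIME_SHIFT", "END"]

def next_token_distribution(context):
    """Stand-in for a Transformer's predicted distribution over next tokens."""
    rng = random.Random(len(context))  # placeholder: depends only on length
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [wt / total for wt in weights]

def generate(prompt, max_len=20):
    """Generate a track token by token, each conditioned on all prior tokens."""
    tokens = list(prompt)
    while len(tokens) < max_len:
        probs = next_token_distribution(tokens)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        if token == "END":
            break
        tokens.append(token)
    return tokens

# Conditioning on an existing fragment (here, a single note-on event) mirrors
# how inputs from one or several tracks can condition generation of new ones.
print(generate(["NOTE_ON_60"]))
```

The conditioning step is what makes the ‘resampling’ described above possible: feed the model your own MIDI as the prompt and it continues, or layers on top of, what’s already there.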

Why Sentience?

Finally, for anyone who was wondering what happened at the 2020 International Symposium on Electronic Arts (ISEA 2020), held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,

ISEA2020 Recap // Why Sentience? 

As we look back at one of the most unprecedented years, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.

ISEA2020 // Why Sentience? Recap Video

The Metacreation Lab’s researchers explored some of these guiding questions at ISEA2020 with two papers presented at the symposium: Chatterbox: an interactive system of gibberish agents and Liminal Scape, An Interactive Visual Installation with Expressive AI. These papers, and the full proceedings from ISEA2020 can now be accessed below. 

ISEA2020 Proceedings

The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.

The proceedings took my system a while to download (there are approximately 700 pp.). By the way, here’s another link to the proceedings or rather to the archives for the 2020 and previous years’ ISEA proceedings.

Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the various level dynamics,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to and a citation for the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)
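As an aside for readers who want a feel for the kind of unsupervised analysis Ingrams describes, here’s a minimal sketch that clusters a handful of made-up comments into topics. I’ve used latent Dirichlet allocation (LDA) via scikit-learn as one common choice; the paper’s actual pipeline may well differ.

```python
# Minimal topic-clustering sketch in the spirit of the Ingrams study:
# unsupervised topic modelling over free-text public comments. The comments
# here are invented and LDA is my choice of method; the paper's actual
# pipeline may differ.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "The scanners invade privacy and should be optional",
    "Full body imaging makes flying safer",
    "Privacy concerns outweigh the security benefits",
    "Security screening delays are already too long",
]

# Turn the comments into a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

# Fit a two-topic LDA model and report the most salient words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-4:]]
    print(f"topic {idx}: {', '.join(top_terms)}")
```

On thousands of real comments, the salient words per cluster are what would help policymakers see, at a glance, the themes running through an open comment process.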

An AI governance publication from the US’s Wilson Center

Within one week of the release of the special issue of Information Polity on AI and government, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well- specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision- makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find open communication. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (a position with virtually no governmental infrastructure, as opposed to the Minister of Innovation, Science and Economic Development, who is responsible for many departments and agencies). The current minister is Joyce Murray, whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so, but other government departments/ministries also have some responsibilities for AI, and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board, or even if you are, this December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022.

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect to his story), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer of Canada despite searches, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System; now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque or might find significantly less than they were entitled to or might find huge increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, there is no information that I could find after March 2019 about Canada, its government, and plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they have called AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of Building an AI World, I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high minded, but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier stage conversations.

I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have an impact and help us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this for an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the government minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons might be learned. By February 15, 2018, the pay problems backlog amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen.

Do take a look at the timeline, there’s more to it than what I’ve written here and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating though how often a failure to listen presages far deeper problems with a project.

Both Conservative and Liberal governments contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.

Do we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
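To make the ‘exponential impact’ concrete, here’s a back-of-envelope sketch in Python. The numbers are invented purely for illustration (unchecked doubling every five days from 100 seeded cases) and aren’t drawn from any of the countries mentioned,

# Toy illustration of why acting early matters in an outbreak.
# Assumption (invented): cases double every 5 days until action is taken.
doubling_days = 5
cases_day0 = 100

def cases(day):
    return cases_day0 * 2 ** (day / doubling_days)

for act_on_day in (10, 17, 24):
    print(f"act on day {act_on_day}: ~{cases(act_on_day):,.0f} cases already seeded")

Under those toy assumptions, each extra week of delay multiplies the seeded cases by roughly 2.6, and two extra weeks by roughly 7, which is the kind of compounding the epidemiologists are describing.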

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance: in both cases, senior bureaucrats took over and, in a state of blissful ignorance, made a series of disastrous decisions, bolstered by politicians who seem neither to understand nor to care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials). Pay special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, my confidence doesn’t much matter, as AI implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray for the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to uphold ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made, but we also make a lot of good decisions.

A computer simulation inside a computer simulation?

Stumbling across an entry from the National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory of Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality; A journal of self-discovery, healing, growth, and growing pains,

Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.

—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]

Now, on to what provoked the memory of that phrase.

The first computer simulation: “Agence”

Here’s a description of “Agence” and its creators from an August 11, 2020 Canada National Film Board (NFB) news release,

Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.

Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.

About the experience

Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.

About the creators

Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.

Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB). 

About Transitional Forms

Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.

About the NFB

David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.

The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,

Now for the second computer simulation (the feint within the feint).

Are we living in a computer simulation?

According to some thinkers in the field, the chances are about 50/50 that we are simulated beings living inside a computer simulation, which makes “Agence” a particularly piquant experience.

An October 13, 2020 article, ‘Do We Live in a Simulation? Chances are about 50 – 50’, by Anil Ananthaswamy for Scientific American poses the question with an answer that’s unexpectedly uncertain (Note: Links have been removed),

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)

In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.
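If you want to see how ‘pretty much even’ can fall out of an argument like this, here’s a toy Bayesian sketch in Python. The two-hypothesis structure and the ratio k are my own illustrative assumptions, not Kipping’s published model,

# Toy Bayesian sketch of the simulation odds (illustrative only;
# not David Kipping's actual analysis). Two hypotheses, equal priors:
#   H_phys: conscious simulations are never created
#   H_sim:  they are, and simulated minds outnumber real ones k to 1
prior_phys = prior_sim = 0.5
k = 1_000_000  # hypothetical ratio of simulated to real minds under H_sim

p_base_reality = prior_phys * 1.0 + prior_sim * (1 / (1 + k))
print(f"P(we are simulated) = {1 - p_base_reality:.7f}")  # just under 0.5

As long as base reality contributes at least a few non-simulated observers, the odds of being simulated stay a hair under 50–50, which is roughly the punchline of the analysis Ananthaswamy describes.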

Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.

For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to go in thinking about the implications (the feint within the feint within the feint). Still, it seems the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe as a way to have a laugh at our own absurdity.

Wilson Center and artificial intelligence (a Dec. 3, 2020 event, an internship, and more [including some Canadian content])

The Wilson Center (also known as the Woodrow Wilson International Center for Scholars) in Washington, DC is hosting a live webcast tomorrow, Dec. 3, 2020, and issuing a call for applications for an internship (deadline: Dec. 18, 2020); all of it concerns artificial intelligence (AI).

Assessing the AI Agenda: a Dec. 3, 2020 event

This looks like there could be some very interesting discussion about policy and AI, which could be applicable to other countries as well as the US. From a Dec. 2, 2020 Wilson Center announcement (received via email),

Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress

Thursday
Dec. 3, 2020
11:00am – 12:30pm ET

Artificial intelligence (AI) technologies occupy a growing share of the legislative agenda and pose a number of policy opportunities and challenges. Please join The Wilson Center’s Science and Technology Innovation Program (STIP) for a conversation with Senate and House staff from the AI Caucuses, as they discuss current policy proposals on artificial intelligence and what to expect — including oversight measures–in the next Congress. The public event will take place on Thursday, December 3 [2020] from 11am to 12:30pm EDT, and will be hosted virtually on the Wilson Center’s website. RSVP today.

Speakers:

  • Sam Mulopulos, Legislative Assistant, Sen. Rob Portman (R-OH)
  • Sean Duggan, Military Legislative Assistant, Sen. Martin Heinrich (D-NM)
  • Dahlia Sokolov, Staff Director, Subcommittee on Research and Technology, House Committee on Science, Space, and Technology
  • Mike Richards, Deputy Chief of Staff, Rep. Pete Olson (R-TX)

Moderator:

Meg King, Director, Science and Technology Innovation Program, The Wilson Center

We hope you will join us for this critical conversation. To watch, please RSVP and bookmark the webpage. Tune in at the start of the event (you may need to refresh once the event begins) on December 3. Questions about this event can be directed to the Science and Technology Program through email at stip@wilsoncenter.org or Twitter @WilsonSTIP using the hashtag #AICaucus.

Wilson Center’s AI Lab

This initiative brings to mind some of the science programmes that the UK government hosts for members of Parliament. From the Wilson Center’s Artificial Intelligence Lab webpage,

Artificial Intelligence issues occupy a growing share of the Legislative and Executive Branch agendas; every day, Congressional aides advise their Members, and Executive Branch staff encounter policy challenges pertaining to the transformative set of technologies collectively known as artificial intelligence. It is critically important that both lawmakers and government officials be well-versed in the complex subjects at hand.

What the Congressional and Executive Branch Labs Offer

Similar to the Wilson Center’s other technology training programs (e.g. the Congressional Cybersecurity Lab and the Foreign Policy Fellowship Program), the core of the Lab is a six-week seminar series that introduces participants to foundational topics in AI: what is machine learning; how do neural networks work; what are the current and future applications of autonomous intelligent systems; who are currently the main players in AI; and what will AI mean for the nation’s national security. Each seminar is led by top technologists and scholars drawn from the private, public, and non-profit sectors and a critical component of the Lab is an interactive exercise, in which participants are given an opportunity to take a hands-on role on computers to work through some of the major questions surrounding artificial intelligence. Due to COVID-19, these sessions are offered virtually. When health guidance permits, these sessions will return in-person at the Wilson Center.

Who Should Apply

The Wilson Center invites mid- to senior-level Congressional and Executive Branch staff to participate in the Lab; the program is also open to exceptional rising leaders with a keen interest in AI. Applicants should possess a strong understanding of the legislative or Executive Branch governing process and aspire to a career shaping national security policy.

….

Side trip: Science Meets (Canadian) Parliament

Briefly, here’s a bit about a Canadian programme, ‘Science Meets Parliament’, run by the Canadian Science Policy Centre (CSPC), a not-for-profit organization, and the Canadian Office of the Chief Science Advisor (OCSA), a position within the federal government. Here’s a description of the programme from the Science Meets Parliament application webpage,

The objective of this initiative is to strengthen the connections between Canada’s scientific and political communities, enable a two-way dialogue, and promote mutual understanding. This initiative aims to help scientists become familiar with policy making at the political level, and for parliamentarians to explore using scientific evidence in policy making. [emphases mine] This initiative is not meant to be an advocacy exercise, and will not include any discussion of science funding or other forms of advocacy.

The Science Meets Parliament model is adapted from the successful Australian program held annually since 1999. Similar initiatives exist in the EU, the UK and Spain.

CSPC’s program aims to benefit the parliamentarians, the scientific community and, indirectly, the Canadian public.

This seems to be a training programme designed to teach scientists how to influence policy and to teach politicians to base their decisions on scientific evidence or, perhaps, to lean on the scientific experts they met in ‘Science Meets Parliament’?

I hope they add some critical thinking to this programme so that politicians can make assessments of the advice they’re being given. Scientists have their blind spots too.

Here’s more from the Science Meets Parliament application webpage, about the latest edition,

CSPC and OCSA are pleased to offer this program in 2021 to help strengthen the connection between the science and policy communities. The program provides an excellent opportunity for researchers to learn about the inclusion of scientific evidence in policy making in Parliament.

The application deadline is January 4th, 2021

APPLYING FOR SCIENCE MEETS PARLIAMENT 2021 – ENGLISH

APPLYING FOR SCIENCE MEETS PARLIAMENT 2021 – FRENCH

You can find out more about benefits, eligibility, etc. on the application page.

Paid Graduate Research Internship: AI & Facial Recognition

Getting back to the Wilson Center, there’s this opportunity (from a Dec. 1, 2020 notice received by email),

New policy is on the horizon for facial recognition technologies (FRT). Many current proposals, including The Facial Recognition and Biometric Technology Moratorium Act of 2020 and The Ethical Use of Artificial Intelligence Act, either target the use of FRT in areas such as criminal justice or propose general moratoria until guidelines can be put in place. But these approaches are limited by their focus on negative impacts. Effective planning requires a proactive approach that considers broader opportunities as well as limitations and includes consumers, along with federal, state and local government uses.

More research is required to get us there. The Wilson Center seeks to better understand a wide range of opportunities and limitations, with a focus on one critically underrepresented group: consumers. The Science and Technology Innovation Program (STIP) is seeking an intern for Spring 2021 to support a new research project on understanding FRT from the consumer perspective.

A successful candidate will:

  • Have a demonstrated track record of work on policy and ethical issues related to Artificial Intelligence (AI) generally, Facial Recognition specifically, or other emerging technologies.
  • Be able to work remotely.
  • Be enrolled in a degree program, recently graduated (within the last year) and/or have been accepted to enter an advanced degree program within the next year.

Interested applicants should submit:

  • Cover letter explaining your general interest in STIP and specific interest in this topic, including dates and availability.
  • CV / Resume
  • Two brief writing samples (formal and/or informal), ideally demonstrating your work in science and technology research.

Applications are due Friday, December 18th [2020]. Please email all application materials as a single PDF to Erin Rohn, erin.rohn@wilsoncenter.org. Questions on this role can be directed to Anne Bowser, anne.bowser@wilsoncenter.org.

Good luck!

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing (engineering). (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
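For anyone wondering where a figure like that comes from, it matches the textbook count of distinct closed tours in a symmetric travelling-salesman problem with 15 stops, (15 − 1)!/2; an open, one-way route through 15 ordered stops would instead give 15! ≈ 1.3 trillion. A quick check in Python (my own illustration; the news release doesn’t show its arithmetic),

import math

# Distinct closed tours through n stops, ignoring direction: (n - 1)! / 2
n = 15
tours = math.factorial(n - 1) // 2
print(f"{tours:,}")  # 43,589,145,600 -- roughly the 43 billion quoted above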

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”
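To get a feel for how a single threshold-switching element can fire spikes, here’s a minimal relaxation-oscillator sketch in Python. It uses the classic Pearson–Anson arrangement (a capacitor charging through a series resistor into a hysteretic switch); every parameter value is invented, and the real niobium dioxide device switches through the thermal feedback described above rather than at a fixed voltage threshold,

# Toy relaxation oscillator: a hysteretic insulator/conductor switch
# (stand-in for the NbO2 device) fed through a resistor-capacitor stage.
# All values are invented for illustration.
dt, steps = 1e-6, 60_000              # 1 microsecond steps, 60 ms total
v_supply = 5.0
r_series, cap = 10e3, 100e-9          # 10 kOhm series resistor, 100 nF
r_ins, r_cond = 1e6, 100.0            # device resistance in each state
v_on, v_off = 3.0, 1.0                # hysteretic switching thresholds
v_c, conducting = 0.0, False
trace = []

for _ in range(steps):
    r_dev = r_cond if conducting else r_ins
    i_dev = v_c / r_dev               # current spike appears when conducting
    v_c += ((v_supply - v_c) / r_series - i_dev) * dt / cap
    if not conducting and v_c >= v_on:
        conducting = True             # insulator -> conductor
    elif conducting and v_c <= v_off:
        conducting = False            # conductor -> insulator
    trace.append(i_dev)

spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 1e-4 <= b)
print(f"current spikes fired in 60 ms: {spikes}")

Each time the switch flips to its conducting state, the capacitor dumps its charge as a brief current pulse, loosely analogous to a neuron’s action potential; varying the supply voltage changes the firing pattern, which is the knob the researchers describe turning.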

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.
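The ‘settling’ behaviour is easiest to see in a toy Hopfield-style network, the textbook model for this kind of attractor computation. The sketch below (my own Python illustration, not the authors’ 24-device circuit or their genome task) programs the connection strengths for two stored patterns and lets a corrupted input relax into the nearest one,

import numpy as np

# Hopfield-style settling: store two patterns as attractors with a
# Hebbian rule, then update a corrupted state until it relaxes.
rng = np.random.default_rng(0)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(w, 0)                # no self-connections

state = patterns[0].copy()
state[:2] *= -1                       # corrupt two units
for _ in range(10):                   # asynchronous updates until settled
    for i in rng.permutation(len(state)):
        state[i] = 1 if w[i] @ state >= 0 else -1

print(state, "recovered pattern 0:", np.array_equal(state, patterns[0]))

Here the stored patterns stand in for valid assemblies; settling into one of them is the computation.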

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

A robot that sucks up oil spills

I was surprised to find out that between 1989, when the Exxon Valdez oil spill fouled coastlines in Alaska and northern British Columbia, and 2010, when the BP (British Petroleum) oil spill fouled the Gulf of Mexico along with the coastlines of several bordering US states and of Mexico, there had been virtually no improvement in environmental remediation technologies for oil spills (see my June 4, 2010 posting).

This summer we’ve had two major oil spills: one in the Russian Arctic (as noted in my August 14, 2020 posting; scroll down to the subhead ‘As for the Russian Arctic oil spill’) and one in the Indian Ocean near Mauritius, close to a coral reef and marine protected areas (see this August 13, 2020 news item on the Canadian Broadcasting Corporation [CBC] news online website).

No word yet on whether or not remediation techniques have improved, but this August 6, 2020 article by Adele Peters for Fast Company highlights a new robotic approach to cleaning marine oil spills,

A decade after a BP drilling rig exploded in the Gulf of Mexico, sending an estimated 168 million gallons of oil gushing into the water over the course of months, local wildlife are still struggling to recover. Many of the people who worked to clean up the spill are still experiencing health effects. At the time, the “cleanup” strategy involved setting oil slicks on fire and spraying mass quantities of a chemical meant to disperse it, both of which helped get rid of the oil, but also worsened pollution [emphasis mine].

A new robot designed to clean oil spills, now in development, demonstrates how future spills could be handled differently. The robot navigates autonomously on the ocean surface, running on solar power. When oil sensors on the device detect a spill, it triggers a pump that pushes oil and water inside, where a custom nanomaterial sucks up the oil and releases clean water.

Kabra [Tejas Sanjay Kabra, a graduate student at North Carolina State University] 3D-printed a small prototype of the robot, which he tested in a lab, a swimming pool, and then the open ocean. (The small version, about two feet across, can collect 20 gallons of oil at a time; the same device can be scaled up to much larger sizes). He now hopes to bring the product to market as quickly as possible, as major oil spills continue to occur—such as the spill in Russia in June that sent more than 20,000 metric tons of diesel into a pristine part of the Arctic.

Peters’s article provides more details and features an embedded video.

Kabra calls his technology SoilioS (Spilled OIL recovery by Isis & Oleophilic Sponge), and he entered it in the 2020 James Dyson Awards. The undated James Dyson Award news release announcing the 2020 national winners does not include Kabra’s entry. Mind you, over 1,700 inventors entered the 2020 competition.

I hope Kabra perseveres as his robot project looks quite interesting for a number of reasons as can be seen in his entry submission (from the James Dyson Award website),

Initially, I started with a literature review on various Nanomaterials made from tree leaves with specific properties of Hydrophobicity and oleophilicity. Then I narrowed down my research on four different types of leaves i.e., Holy basil, betel, subabul, and mango. Nanoparticles from these leaves were made by green synthesis method and SEM, EDX and XRD tests were conducted. From these tests, I found that the efficiency of material made from the subabul tree was max (82.5%). In order to carry out surface cleaning at sea, different robot designs were studied. Initially, the robot was built in a box structure with arms. The arms contained Nano-capillary; however, the prototype was bulky and inefficient. A new model was devised to reduce the weight as well as increase the efficiency of absorbing the oil spill. The new robot was designed to be in a meta-stable state. The curves of the robot are designed in such a way that it gives stability as well as hold all the components. The top part of the robot is a hollow dome to improve the stability in water. The robot is 3D printed to reduce weight. The 3D printed robot was tested in a pool. Further, work is going on to build a 222 feet robot to test with hardware suitable for sea.

Here’s what SoilioS looks like,

[downloaded from https://www.jamesdysonaward.org/en-US/2020/project/soilios/]

Kabra described what makes his technology different from the current state of the art, as well as his future plans (from the James Dyson Award website),

The current technology uses carbon Nano-particle, and some other uses plastic PVC with a chemical adhesive, which is harmful to the environment. On the other hand, SoilioS uses Nano-material made from tree leaves. The invented technology absorbs the oil and stores inside the container with a recovery rate of 80%. The recovered oil can be used for further application; however, on the other hand, the current products burn the oil [emphasis mine] at the cleaning site itself without any recovery rate, thereby increasing pollution. The durability of the invented technology is 8-10 years, and the Nanomaterial used for cleaning the oil spill is reusable for 180 cycles. On the other hand, the durability of the current technology is up to 3-5 years, and the material used is non-reusable. The cost of the invented product is only $5 and on the other hand, the existing technology costs up to $750.

I aim to develop, manufacture, and practically test the robot prototype in the sea so that it can be used to solve oil spill issues and can save billions of dollars. I hope this device will help the environment in a lot of ways and eventually decrease the side effects caused due to oil spills such as leukemia and dying marine life. Currently, I am testing the product on different grades of oil to improve its efficiency further and improving its scope of the application so that it can also be used in industries and household purposes.

I wish Kabra good luck as he works to bring his technology to market.