
Artificial Intelligence (AI), musical creativity conference, art creation, ISEA 2020 (Why Sentience?) recap, and more

I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).

29th International Joint Conference on Artificial Intelligence and 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020), being held Jan. 7 – 15, 2021

This first excerpt features a conference that’s currently taking place,

Musical Metacreation Tutorial at IJCAI – PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence or IJCAI-PRICAI2020 is being held in 2021!]

As part of the International Joint Conference on Artificial Intelligence (IJCAI – PRICAI 2020, January 7-15), Philippe Pasquier will lead a tutorial on Musical Metacreation. This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges.

The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japan Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:20 pm PST on the 7th]) and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.

Register for IJCAI – PRICAI 2020

The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.

Two calls for papers

These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,

2nd Conference on AI Music Creativity (MuMe + CSMC)

The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.

The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.

AIMC 2021 Info & CFP

AIART 2021

The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021. Its aim is to bring forward cutting-edge technologies and the most recent advances in AI art in terms of enabling creation, analysis, and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.

AIART is inviting high-quality papers presenting or addressing issues related to AI art, across a wide range of topics. The submission due date is January 31, 2021, and you can learn about the topics accepted below:

AIART 2021 Info & CFP

Toying with music

SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,

MMM : Multi-Track Music Machine

One of the latest projects at the Metacreation Lab is MMM: a generative system based on the Transformer architecture and capable of producing multi-track music, developed by Jeff Enns and Philippe Pasquier.

Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.
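For the curious, here’s a toy sketch of what ‘auto-regressive, track-conditioned’ generation means in practice. This is emphatically not the Lab’s code: the function names, the MIDI pitch numbers, and the frequency-count ‘model’ are all my own illustration, standing in for MMM’s trained Transformer. The one idea it does share with the real system is that each new note is sampled one at a time, conditioned on everything generated so far plus an existing input track.

```python
import random

def generate_track(seed_track, length=8, vocab=range(60, 72), rng=None):
    """Auto-regressively sample a new track, one note at a time.

    Each new note is conditioned on the notes generated so far plus the
    existing (conditioning) track. A real system like MMM replaces the
    toy frequency-count "model" below with a trained Transformer.
    """
    rng = rng or random.Random(0)       # fixed seed so the sketch is repeatable
    context = list(seed_track)          # condition on an existing track
    new_track = []
    for _ in range(length):
        # Toy "model": bias sampling toward pitches already in the context,
        # standing in for a learned autoregressive distribution.
        weights = [1 + context.count(p) for p in vocab]
        note = rng.choices(list(vocab), weights=weights)[0]
        new_track.append(note)
        context.append(note)            # feed the sample back in (auto-regression)
    return new_track

melody = [60, 62, 64, 65, 67, 65, 64, 62]   # a C-major MIDI-pitch input track
print(generate_track(melody))
```

Swapping the weight calculation for a neural network’s predicted note probabilities would give you the general shape of the real thing; conditioning on ‘one or several tracks’ just means concatenating more material into the context.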

To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.

Explore MMM: Multi-Track Music Machine

Why Sentience?

Finally, for anyone who was wondering what happened at the 2020 International Symposium on Electronic Arts (ISEA 2020) held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,

ISEA2020 Recap // Why Sentience? 

As we look back at an unprecedented year, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.

ISEA2020 // Why Sentience? Recap Video

The Metacreation Lab’s researchers explored some of these guiding questions at ISEA2020 with two papers presented at the symposium: Chatterbox: an interactive system of gibberish agents and Liminal Scape, An Interactive Visual Installation with Expressive AI. These papers, and the full proceedings from ISEA2020 can now be accessed below. 

ISEA2020 Proceedings

The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.

The proceedings took my system a while to download (the document runs approximately 700 pages). By the way, here’s another link to the proceedings, or rather to the archives for the 2020 and previous years’ ISEA proceedings.

Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the various level dynamics,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to, and a citation for, the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)
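Ingrams’ article describes applying unsupervised machine learning to thousands of public comments so that salient topic clusters surface without anyone labelling the data. To make that idea concrete, here is a miniature, purely illustrative version of that kind of analysis — nothing like the actual model in the paper (which analyzed real TSA comments), just standard-library Python grouping comments by vocabulary overlap and reporting each cluster’s most frequent terms. All names and the sample comments are my own invention.

```python
from collections import Counter
import math

def tokenize(text):
    """Crude word extraction: lowercase, strip punctuation, drop short words."""
    return [w.strip(".,").lower() for w in text.split() if len(w) > 3]

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster_comments(comments, k=2, rounds=5):
    """Toy k-means-style clustering: group comments by vocabulary overlap,
    then report each cluster's most frequent terms as its 'topic'."""
    vecs = [Counter(tokenize(c)) for c in comments]
    centroids = vecs[:k]                       # naive init: first k comments
    for _ in range(rounds):
        groups = [[] for _ in range(k)]
        for v in vecs:                         # assign each comment to nearest centroid
            best = max(range(k), key=lambda i: cosine(v, centroids[i]))
            groups[best].append(v)
        # new centroid = summed word counts of the group (keep old if empty)
        centroids = [sum(g, Counter()) or centroids[i]
                     for i, g in enumerate(groups)]
    return [[w for w, _ in c.most_common(3)] for c in centroids]

comments = [
    "Scanners invade traveler privacy.",
    "Privacy concerns about the body scanners.",
    "Radiation health risk from scanners.",
    "Health effects of scanner radiation.",
]
print(cluster_comments(comments, k=2))
```

A real study would use proper topic models on tens of thousands of comments, but even this sketch shows the appeal for transparency: the clusters summarize what the public said without a bureaucrat deciding the categories in advance.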

An AI governance publication from the US’s Wilson Center

Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well- specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision- makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries and, above all, it is not easy to find. Whether that’s by design, or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections “Our guiding principles” and “Directive on Automated Decision-Making” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think Canadian Digital Service (CDS) might also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, they are friendly folks at CDS but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board, or even if you are, a December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022.

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect) stepped down in September 2019 to join a startup company according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer of Canada despite searches, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a start hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay system, and now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.

The Phoenix Pay System debacle (things are looking up): a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation where government employees had no certainty about their paycheques is getting better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found their paycheques might show the correct amount, significantly less than they were entitled to, or huge unearned increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, I could find no information posted after March 2019 on a Canadian government website about Canada, its government, and its plans for AI, especially the responsible management/governance of AI, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they call AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of Building an AI World, I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of the responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know, and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high minded, but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier stage conversations.

I’m sure we’ll be consulted at some point, but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this as an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the responsible minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons might be learned. By February 15, 2018, the backlog of pay problems amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline’ for the Ottawa Citizen

Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.

Both Conservative and Liberal governments contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.

Do we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau. Note: there are some commercials. Then pay special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those striving to uphold ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also do make a lot of good decisions.

Wilson Center and artificial intelligence (a Dec. 3, 2020 event, an internship, and more [including some Canadian content])

The Wilson Center (also known as the Woodrow Wilson International Center for Scholars) in Washington, DC is hosting a live webcast tomorrow, Dec. 3, 2020, and has issued a call for applications for an internship (deadline: Dec. 18, 2020); all of it concerns artificial intelligence (AI).

Assessing the AI Agenda: a Dec. 3, 2020 event

This looks like there could be some very interesting discussion about policy and AI, which could be applicable to other countries as well as the US. From a Dec. 2, 2020 Wilson Center announcement (received via email),

Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress

Thursday
Dec. 3, 2020
11:00am – 12:30pm ET

Artificial intelligence (AI) technologies occupy a growing share of the legislative agenda and pose a number of policy opportunities and challenges. Please join The Wilson Center’s Science and Technology Innovation Program (STIP) for a conversation with Senate and House staff from the AI Caucuses, as they discuss current policy proposals on artificial intelligence and what to expect — including oversight measures–in the next Congress. The public event will take place on Thursday, December 3 [2020] from 11am to 12:30pm EDT, and will be hosted virtually on the Wilson Center’s website. RSVP today.

Speakers:

  • Sam Mulopulos, Legislative Assistant, Sen. Rob Portman (R-OH)
  • Sean Duggan, Military Legislative Assistant, Sen. Martin Heinrich (D-NM)
  • Dahlia Sokolov, Staff Director, Subcommittee on Research and Technology, House Committee on Science, Space, and Technology
  • Mike Richards, Deputy Chief of Staff, Rep. Pete Olson (R-TX)

Moderator:

Meg King, Director, Science and Technology Innovation Program, The Wilson Center

We hope you will join us for this critical conversation. To watch, please RSVP and bookmark the webpage. Tune in at the start of the event (you may need to refresh once the event begins) on December 3. Questions about this event can be directed to the Science and Technology Program through email at stip@wilsoncenter.org or Twitter @WilsonSTIP using the hashtag #AICaucus.

Wilson Center’s AI Lab

This initiative brings to mind some of the science programmes that the UK government hosts for the members of Parliament. From the Wilson Center’s Artificial Intelligence Lab webpage,

Artificial Intelligence issues occupy a growing share of the Legislative and Executive Branch agendas; every day, Congressional aides advise their Members and Executive Branch staff encounter policy challenges pertaining to the transformative set of technologies collectively known as artificial intelligence. It is critically important that both lawmakers and government officials be well-versed in the complex subjects at hand.

What the Congressional and Executive Branch Labs Offer

Similar to the Wilson Center’s other technology training programs (e.g. the Congressional Cybersecurity Lab and the Foreign Policy Fellowship Program), the core of the Lab is a six-week seminar series that introduces participants to foundational topics in AI: what is machine learning; how do neural networks work; what are the current and future applications of autonomous intelligent systems; who are currently the main players in AI; and what will AI mean for the nation’s national security. Each seminar is led by top technologists and scholars drawn from the private, public, and non-profit sectors and a critical component of the Lab is an interactive exercise, in which participants are given an opportunity to take a hands-on role on computers to work through some of the major questions surrounding artificial intelligence. Due to COVID-19, these sessions are offered virtually. When health guidance permits, these sessions will return in-person at the Wilson Center.

Who Should Apply

The Wilson Center invites mid- to senior-level Congressional and Executive Branch staff to participate in the Lab; the program is also open to exceptional rising leaders with a keen interest in AI. Applicants should possess a strong understanding of the legislative or Executive Branch governing process and aspire to a career shaping national security policy.

….

Side trip: Science Meets (Canadian) Parliament

Briefly, here’s a bit about a Canadian programme, ‘Science Meets Parliament’, from the Canadian Science Policy Centre (CSPC), a not-for-profit, and the Canadian Office of the Chief Science Advisor (OCSA), a position within the Canadian federal government. Here’s a description of the programme from the Science Meets Parliament application webpage,

The objective of this initiative is to strengthen the connections between Canada’s scientific and political communities, enable a two-way dialogue, and promote mutual understanding. This initiative aims to help scientists become familiar with policy making at the political level, and for parliamentarians to explore using scientific evidence in policy making. [emphases mine] This initiative is not meant to be an advocacy exercise, and will not include any discussion of science funding or other forms of advocacy.

The Science Meets Parliament model is adapted from the successful Australian program held annually since 1999. Similar initiatives exist in the EU, the UK and Spain.

CSPC’s program aims to benefit the parliamentarians, the scientific community and, indirectly, the Canadian public.

This seems to be a training programme designed to teach scientists how to influence policy and to teach politicians to base their decisions on scientific evidence or, perhaps, to lean on the scientific experts they met in ‘Science Meets Parliament’?

I hope they add some critical thinking to this programme so that politicians can make assessments of the advice they’re being given. Scientists have their blind spots too.

Here’s more from the Science Meets Parliament application webpage, about the latest edition,

CSPC and OCSA are pleased to offer this program in 2021 to help strengthen the connection between the science and policy communities. The program provides an excellent opportunity for researchers to learn about the inclusion of scientific evidence in policy making in Parliament.

The application deadline is January 4th, 2021

APPLYING FOR SCIENCE MEETS PARLIAMENT 2021 – ENGLISH

APPLYING FOR SCIENCE MEETS PARLIAMENT 2021 – FRENCH

You can find out more about benefits, eligibility, etc. on the application page.

Paid Graduate Research Internship: AI & Facial Recognition

Getting back to the Wilson Center, there’s this opportunity (from a Dec. 1, 2020 notice received by email),

New policy is on the horizon for facial recognition technologies (FRT). Many current proposals, including The Facial Recognition and Biometric Technology Moratorium Act of 2020 and The Ethical Use of Artificial Intelligence Act, either target the use of FRT in areas such as criminal justice or propose general moratoria until guidelines can be put in place. But these approaches are limited by their focus on negative impacts. Effective planning requires a proactive approach that considers broader opportunities as well as limitations and includes consumers, along with federal, state and local government uses.

More research is required to get us there. The Wilson Center seeks to better understand a wide range of opportunities and limitations, with a focus on one critically underrepresented group: consumers. The Science and Technology Innovation Program (STIP) is seeking an intern for Spring 2021 to support a new research project on understanding FRT from the consumer perspective.

A successful candidate will:

  • Have a demonstrated track record of work on policy and ethical issues related to Artificial Intelligence (AI) generally, Facial Recognition specifically, or other emerging technologies.
  • Be able to work remotely.
  • Be enrolled in a degree program, recently graduated (within the last year) and/or have been accepted to enter an advanced degree program within the next year.

Interested applicants should submit:

  • Cover letter explaining your general interest in STIP and specific interest in this topic, including dates and availability.
  • CV / Resume
  • Two brief writing samples (formal and/or informal), ideally demonstrating your work in science and technology research.

Applications are due Friday, December 18th [2020]. Please email all application materials as a single PDF to Erin Rohn, erin.rohn@wilsoncenter.org. Questions on this role can be directed to Anne Bowser, anne.bowser@wilsoncenter.org.

Good luck!

Toronto’s ArtSci Salon and its Kaleidoscopic Imaginations on Oct 27, 2020 – 7:30 pm (EDT)

The ArtSci Salon is getting quite active these days. Here’s the latest from an Oct. 22, 2020 ArtSci Salon announcement (received via email), which can also be viewed on their Kaleidoscope event page,

Kaleidoscopic Imaginations

Performing togetherness in empty spaces

An experimental collaboration between the ArtSci Salon, the Digital Dramaturgy Lab_squared/ DDL2 and Sensorium: Centre for Digital Arts and Technology, York University (Toronto, Ontario, Canada)

Tuesday, October 27, 2020

7:30 pm [EDT]

Join our evening of live-streamed, multi-media performances, following a kaleidoscopic dramaturgy of complexity discourses as inspired by computational complexity theory gatherings.

We are presenting installations, site-specific artistic interventions and media experiments, featuring networked audio and video, dance and performances as we repopulate spaces – The Fields Institute and surroundings – forced to lie empty due to the pandemic. Respecting physical distance and new sanitation and safety rules can be challenging, but it can also open up new ideas and opportunities.

NOTE: DDL2 contributions to this event are sourced or inspired by their recent kaleidoscopic performance “Rattling the Curve – Paradoxical ECODATA performances of A/I (artistic intelligence), and facial recognition of humans and trees”

Virtual space/live streaming concept and design: DDL2  Antje Budde, Karyn McCallum and Don Sinclair

Virtual space and streaming pilot: Don Sinclair

Here are specific programme details (from the announcement),

  1. Signing the Virus – Video (2 min.)
    Collaborators: DDL2 Antje Budde, Felipe Cervera, Grace Whiskin
  2. Niimi II – Performance and outdoor video projection (15 min.)
    (Niimi means in Anishinaabemowin: s/he dances) Collaborators: DDL2 Candy Blair, Antje Budde, Jill Carter, Lars Crosby, Nina Czegledy, Dave Kemp
  3. Oracle Jane (Scene 2) – A partial playreading on the politics of AI (30 min.)
    Playwright: DDL2 Oracle. Collaborators: DDL2 Antje Budde, Frans Robinow, George Bwannika Seremba, Amy Wong and AI ethics consultant Vicki Zhang
  4. Vriksha/Tree – Dance video and outdoor projection (8 min.)
    Collaborators: DDL2 Antje Budde, Lars Crosby, Astad Deboo, Dave Kemp, Amit Kumar
  5. Facial Recognition – Performing a Plate Camera from a Distance (3 min.)
    Collaborators: DDL2 Antje Budde, Jill Carter, Felipe Cervera, Nina Czegledy, Karyn McCallum, Lars Crosby, Martin Kulinna, Montgomery C. Martin, George Bwanika Seremba, Don Sinclair, Heike Sommer
  6. Cutting Edge – Growing Data (6 min.)
    DDL2 A performance by Antje Budde
  7. “void * ambience” – Architectural and instrumental acoustics, projection mapping. Concept: Sensorium: The Centre for Digital Art and Technology, York University. Collaborators: Michael Palumbo, Ilze Briede [Kavi], Debashis Sinha, Joel Ong

This performance is part of a series (from the announcement),

These three performances are part of Boundary-Crossings: Multiscalar Entanglements in Art, Science and Society, a public Outreach program supported by the Fiends [sic] Institute for Research in Mathematical Science. Boundary Crossings is a series exploring how the notion of boundaries can be transcended and dissolved in the arts and the humanities, the biological and the mathematical sciences, as well as human geography and political economy. Boundaries are used to establish delimitations among disciplines; to discriminate between the human and the non-human (body and technologies, body and bacteria); and to indicate physical and/or artificial boundaries, separating geographical areas and nation states. Our goal is to cross these boundaries by proposing new narratives to show how the distinctions, and the barriers that science, technology, society and the state have created can in fact be re-interpreted as porous and woven together.

This event is curated and produced by ArtSci Salon; Digital Dramaturgy Lab_squared/ DDL2; Sensorium: Centre for Digital Arts and Technology, York University; and Ryerson University; it is supported by The Fields Institute for Research in Mathematical Sciences

Streaming Link 

Finally, the announcement includes biographical information about all of the ‘boundary-crossers’,

Candy Blair (Tkaron:to/Toronto)
Candy Blair/Otsίkh:èta (they/them) is a mixed First Nations/European, 2-spirit interdisciplinary visual and performing artist from Tio’tía:ke – where the group split (“Montreal”) in Québec.

While continuing their work as an artist they also finished their Creative Arts, Literature, and Languages program at Marianopolis College (cégep), their 1st year in the Theatre program at York University, and their 3rd year Acting Conservatory Program at the Centre For Indigenous Theatre in Tsí Tkaròn:to – where the trees stand in water (“Toronto”).

Some of Candy’s notable performances are Jill Carter’s Encounters at the Edge of the Woods, exploring a range of issues with colonization; Ange Loft’s project Talking Treaties, discussing the treaties of the “Toronto” purchase; Cheri Maracle’s The Story of Six Nations, exploring Six Nations’ origin story through dance/combat choreography; and several other performances exploring various topics around Indigenous language, land, and cultural restoration through various mediums such as dance, modelling, painting, theatre, directing, song, etc. As an activist and soon-to-be entrepreneur, Candy also enjoys teaching workshops promoting Indigenous resurgence such as Indigenous hand drumming, food sovereignty, beading, medicine knowledge, etc.

Working with collectives like Weave and Mend, they were responsible for the design, land purification, and installation process of the four medicine plots and a community space with their three other members. Candy aspires to continue exploring ways of decolonization through healthy traditional practices from their mixed background and the arts, in the hopes of eventually supporting Indigenous relations worldwide.

Antje Budde
Antje Budde is a conceptual, queer-feminist, interdisciplinary experimental scholar-artist and an Associate Professor of Theatre Studies, Cultural Communication and Modern Chinese Studies at the Centre for Drama, Theatre and Performance Studies, University of Toronto. Antje has created multi-disciplinary artistic works in Germany, China and Canada and works tri-lingually in German, English and Mandarin. She is the founder of a number of queerly feminist performing art projects including most recently the (DDL)2 or (Digital Dramaturgy Lab)Squared – a platform for experimental explorations of digital culture, creative labor, integration of arts and science, and technology in performance. She is interested in the intersections of natural sciences, the arts, engineering and computer science.

Roberta Buiani
Roberta Buiani (MA; PhD York University) is the Artistic Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences (Toronto). Her artistic work has travelled to art festivals (Transmediale; Hemispheric Institute Encuentro; Brazil), community centres and galleries (the Free Gallery Toronto; Immigrant Movement International, Queens; Myseum of Toronto), and science institutions (RPI; the Fields Institute). Her writing has appeared in Space and Culture, Cultural Studies, and The Canadian Journal of Communication, among others. With the ArtSci Salon she has launched a series of experiments in “squatting academia,” re-populating abandoned spaces and cabinets across university campuses with SciArt installations.

Currently, she is a research associate at the Centre for Feminist Research and a Scholar in Residence at Sensorium: Centre for Digital Arts and Technology at York University [Toronto, Ontario, Canada].

Jill Carter (Tkaron:to/ Toronto)
Jill (Anishinaabe/Ashkenazi) is a theatre practitioner and researcher, currently cross-appointed to the Centre for Drama, Theatre and Performance Studies; the Transitional Year Programme; and Indigenous Studies at the University of Toronto. She works with many members of Tkaron:to’s Indigenous theatre community to support the development of new works and to disseminate artistic objectives, process, and outcomes through community-driven research projects. Her scholarly research, creative projects, and activism are built upon ongoing relationships with Indigenous Elders, Artists and Activists, positioning her as witness to, participant in, and disseminator of oral histories that speak to the application of Indigenous aesthetic principles and traditional knowledge systems to contemporary performance. The research questions she pursues revolve around the mechanics of story creation, the processes of delivery, and the manufacture of affect.

More recently, she has concentrated upon Indigenous pedagogical models for the rehearsal studio and the lecture hall; the application of Indigenous [insurgent] research methods within performance studies; the politics of land acknowledgements; and land-based dramaturgies/activations/interventions.

Jill also works as a researcher and tour guide with First Story Toronto; facilitates Land Acknowledgement, Devising, and Land-based Dramaturgy Workshops for theatre makers in this city; and performs with the Talking Treaties Collective (Jumblies Theatre, Toronto).

In September 2019, Jill directed Encounters at the Edge of the Woods. This was a devised show, featuring Indigenous and Settler voices, and it opened Hart House Theatre’s 100th season; it is the first instance of Indigenous presence on Hart House Theatre’s stage in its 100 years of existence as the cradle for Canadian theatre.

Nina Czegledy
Nina Czegledy (Toronto) is an artist, curator, and educator who works internationally on collaborative art, science & technology projects. The changing perception of the human body and its environment, as well as paradigm shifts in the arts, inform her projects. She has exhibited and published widely, won awards for her artwork, and has initiated, led, and participated in workshops, forums, and festivals at international events worldwide.

Astad Deboo (Mumbai, India)
Astad Deboo is a contemporary dancer and choreographer who employs his training in the Indian classical dance forms of Kathak and Kathakali to create a dance form that is unique to him. He has become a pioneer of modern dance in India. Astad describes his style as “contemporary in vocabulary and traditional in restraints.” Throughout his long and illustrious career, he has worked with various prominent performers such as Pina Bausch, Alison Becker Chase, and Pink Floyd, and has performed in many parts of the world. He has been awarded the Sangeet Natak Akademi Award (1996) and the Padma Shri (2007), awarded by the Government of India. In January 2005, along with 12 young women with hearing impairment supported by the Astad Deboo Dance Foundation, he performed at the 20th annual Deaflympics in Melbourne, Australia. Astad has a long record of working with disadvantaged youth.

Ilze Briede [Kavi]
Ilze Briede [artist name: Kavi] is a Latvian/Canadian artist and researcher with broad and diverse interests. Her artistic practice, a hybrid of video, image and object making, investigates the phenomenon of perception and the constraints and boundaries between the senses and knowing. Kavi is currently pursuing a PhD degree in Digital Media at York University with a research focus on computational creativity and generative art. She sees computer-generated systems and algorithms as a potentiality for co-creation and collaboration between human and machine. Kavi has previously worked and exhibited with Fashion Art Toronto, Kensington Market Art Fair, Toronto Burlesque Festival, Nuit Blanche, Sidewalk Toronto and the Toronto Symphony Orchestra.

Dave Kemp
Dave Kemp is a visual artist whose practice looks at the intersections and interactions between art, science and technology: particularly at how these fields shape our perception and understanding of the world. His artworks have been exhibited widely at venues such as the McIntosh Gallery, The Agnes Etherington Art Centre, the Art Gallery of Mississauga, The Ontario Science Centre, York Quay Gallery, Interaccess, Modern Fuel Artist-Run Centre, and as part of the Switch video festival in Nenagh, Ireland. His works are also included in the permanent collections of the Agnes Etherington Art Centre and the Canada Council Art Bank.

Stephen Morris
Stephen Morris is Professor of experimental non-linear physics in the Department of Physics at the University of Toronto. He is the scientific director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences. He often collaborates with artists and has himself performed and produced art involving his own scientific instruments and experiments in non-linear physics and pattern formation.

Michael Palumbo
Michael Palumbo (MA, BFA) is an electroacoustic music improviser, coder, and researcher. His PhD research spans distributed creativity and version control systems, and is expressed through “git show”, a distributed electroacoustic music composition and design experiment, and “Mischmasch”, a collaborative modular synthesizer in virtual reality. He studies with Dr. Doug Van Nort as a researcher in the Distributed
Performance and Sensorial Immersion Lab, and Dr. Graham Wakefield at the Alice Lab for Computational Worldmaking. His works have been presented internationally, including at ISEA, AES, NIME, Expo ’74, TIES, and the Network Music Festival. He performs regularly with a modular synthesizer, runs the Exit Points electroacoustic improvisation series, and is an enthusiastic gardener and yoga practitioner.

Joel Ong (PhD, Digital Arts and Experimental Media (DXARTS), University of Washington)

Joel Ong is a media artist whose works connect scientific and artistic approaches to the environment, particularly with respect to sound and physical space.  Professor Ong’s work explores the way objects and spaces can function as repositories of ‘frozen sound’, and in elucidating these, he is interested in creating what systems theorist Jack Burnham (1968) refers to as “art (that) does not reside in material entities, but in relations between people and between people and the components of their environment”.

A serial collaborator, Professor Ong is invested in the broader scope of Art-Science collaborations and is engaged constantly in the discourses and processes that facilitate viewing these two polemical disciplines on similar ground.  His graduate interdisciplinary work in nanotechnology and sound was conducted at SymbioticA, the Center of Excellence for Biological Arts at the University of Western Australia and supervised by BioArt pioneers and TCA (The Tissue Culture and Art Project) artists Dr Ionat Zurr and Oron Catts.

George Bwanika Seremba
George Bwanika Seremba is an actor, playwright, and scholar. He was born in Uganda. George holds an M.Phil. and a Ph.D. in Theatre Studies from Trinity College Dublin. In 1980, having barely survived a botched execution by the Military Intelligence, he fled into exile, resettling in Canada (1983). He has performed in numerous plays, including his own “Come Good Rain,” which was awarded a Dora award (1993). In addition, he published a number of edited play collections, including “Beyond the Pale: Dramatic Writing from First Nations Writers & Writers of Colour” (1996), co-edited with Yvette Nolan and Betty Quan.

George was nominated for the Irish Times Best Actor award for his role in Athol Fugard’s “Master Harold and the Boys” at Dublin’s Calypso Theatre. In addition to theatre, he has performed in several movies and on television. His doctoral thesis (2008), entitled “Robert Serumaga and the Golden Age of Uganda’s Theatre (1968-1978): (Solipsism, Activism, Innovation),” will be published as a monograph by CSP (U.K.) in 2021.

Don Sinclair (Toronto)
Don is Associate Professor in the Department of Computational Arts at York University. His creative research areas include interactive performance, projections for dance, sound art, web and data art, cycling art, sustainability, and choral singing most often using code and programming. Don is particularly interested in processes of artistic creation that integrate digital creative coding-based practices with performance in dance and theatre. As well, he is an enthusiastic cyclist.

Debashis Sinha
Driven by a deep commitment to the primacy of sound in creative expression, Debashis Sinha has realized projects in radiophonic art, music, sound art, audiovisual performance, theatre, dance, and music across Canada and internationally. Sound design and composition credits include numerous works for Peggy Baker Dance Projects and productions with Canada’s premier theatre companies including The Stratford Festival, Soulpepper, Volcano Theatre, Young People’s Theatre, Project Humanity, The Theatre Centre, Nightwood Theatre, Why Not Theatre, MTC Warehouse and Necessary Angel. His live sound practice on the concert stage has led to appearances at MUTEK Montreal, MUTEK Japan, the Guelph Jazz Festival, the Banff Centre, The Music Gallery, and other venues. Sinha teaches sound design at York University and the National Theatre School, and is currently working on a multi-part audio/performance work incorporating machine learning and AI funded by the Canada Council for the Arts.

Vicki (Jingjing) Zhang (Toronto)
Vicki Zhang is a faculty member at University of Toronto’s statistics department. She is the author of Uncalculated Risks (Canadian Scholar’s Press, 2014). She is also a playwright, whose plays have been produced or stage read in various festivals and venues in Canada including Toronto’s New Ideas Festival, Winnipeg’s FemFest, Hamilton Fringe Festival, Ergo Pink Fest, InspiraTO festival, Toronto’s Festival of Original Theatre (FOOT), Asper Center for Theatre and Film, Canadian Museum for Human Rights, Cultural Pluralism in the Arts Movement Ontario (CPAMO), and the Canadian Play Thing. She has also written essays and short fiction for Rookie Magazine and Thread.

If you can’t attend this Oct. 27, 2020 event, there’s still the Oct. 29, 2020 Boundary-Crossings event: Beauty Kit (see my Oct. 12, 2020 posting for more).

As for Kaleidoscopic Imaginations, you can access the streaming link on Oct. 27, 2020 at 7:30 pm EDT (4:30 pm PDT).

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we prevent “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics (from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi, first here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid alsyariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz alnafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a tv series, ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
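The two-bits-per-nucleotide mapping Prof. Heckel describes can be sketched in a few lines of Python. This is only an illustration of the mapping itself (00→A, 01→C, 10→G, 11→T); the actual pipeline used to encode the episode is, of course, far more involved.

```python
# Illustrative sketch of the 2-bit-per-nucleotide mapping described above.
BIT_PAIRS = {"00": "A", "01": "C", "10": "G", "11": "T"}
NUCLEOTIDES = {dna: bits for bits, dna in BIT_PAIRS.items()}

def bits_to_dna(bits: str) -> str:
    """Encode an even-length bit string as a DNA sequence."""
    if len(bits) % 2:
        raise ValueError("bit string must have even length")
    return "".join(BIT_PAIRS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    """Decode a DNA sequence back into its bit string."""
    return "".join(NUCLEOTIDES[base] for base in seq)

print(bits_to_dna("01011100"))  # the interview's example: CCTA
print(dna_to_bits("CCTA"))      # round-trips back to 01011100
```

Running the interview’s example, the bit sequence 01 01 11 00 does indeed come out as CCTA.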

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
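The redundancy idea behind channel coding can be illustrated with the simplest possible scheme, a triple-repetition code with majority-vote decoding. To be clear, this toy example is my own illustration, not Prof. Heckel’s algorithm, which is far more efficient; but the principle is the same: add redundancy so that corrupted symbols can still be recovered.

```python
# Toy channel code: repeat each bit three times; decode by majority vote.
from collections import Counter

def encode(bits: str, copies: int = 3) -> str:
    """Add redundancy by repeating every bit `copies` times."""
    return "".join(b * copies for b in bits)

def decode(received: str, copies: int = 3) -> str:
    """Recover the original bits by majority vote within each block."""
    out = []
    for i in range(0, len(received), copies):
        block = received[i:i + copies]
        out.append(Counter(block).most_common(1)[0][0])
    return "".join(out)

coded = encode("1011")        # "111000111111"
corrupted = "101000110111"    # two symbols flipped in "transmission"
print(decode(corrupted))      # majority vote still recovers "1011"
```

Even with two corrupted symbols, the decoder restores the original data, at the cost of tripling its size. Practical codes achieve the same robustness with far less redundancy, which is exactly the efficiency trade-off Heckel describes below.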

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a trillionth of a gram – of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
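Those two density figures are easy to sanity-check with back-of-the-envelope arithmetic. The calculation below is my own, assuming decimal prefixes (a picogram is 10^-12 grams, an exabyte is 10^18 bytes),

```python
# Back-of-the-envelope check of the densities quoted above,
# assuming decimal prefixes (1 MB = 1e6 bytes, 1 EB = 1e18 bytes).
MB, EB, PICOGRAM = 1e6, 1e18, 1e-12  # bytes, bytes, grams

achieved = (100 * MB) / PICOGRAM   # bytes per gram for the stored episode
print(achieved / EB)               # -> 100.0 exabytes per gram

theoretical = 200 * EB             # stated theoretical limit, bytes per gram
print(theoretical / achieved)      # -> 2.0, i.e. half the limit was reached
```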

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Especially activities in the field of artificial intelligence, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
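The excitation-lowers-the-threshold behaviour Cuniberti describes can be caricatured in a few lines of code. This toy model is entirely my own invention (every number in it is made up, not a device parameter); it only shows the qualitative idea that repeated excitation makes the element easier to switch, while the slow ion migration lets it gradually relax back,

```python
# Toy caricature of the hysteresis-based "learning" described above:
# repeated excitation lowers the effective switching threshold, and slow
# ion motion means it only drifts back gradually. All numbers invented.

class ToyNeurotransistor:
    def __init__(self, threshold=1.0, potentiation=0.1, relaxation=0.02):
        self.threshold = threshold
        self.potentiation = potentiation   # threshold drop per firing
        self.relaxation = relaxation       # slow drift back per idle step

    def excite(self, stimulus):
        fired = stimulus >= self.threshold
        if fired:  # firing strengthens the "connection"
            self.threshold = max(0.2, self.threshold - self.potentiation)
        return fired

    def idle(self):  # without excitation, the ions slowly return
        self.threshold = min(1.0, self.threshold + self.relaxation)

t = ToyNeurotransistor()
print(t.excite(0.95))   # below threshold at first -> False
for _ in range(5):
    t.excite(1.0)       # repeated excitation lowers the threshold
print(t.excite(0.95))   # the same stimulus now opens it -> True
```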

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Brain-inspired computer with optimized neural networks

Caption: Left to right: The experiment was performed on a prototype of the BrainScales-2 chip; Schematic representation of a neural network; Results for simple and complex tasks. Credit: Heidelberg University

I don’t often stumble across research from the European Union’s flagship Human Brain Project. So, this is a delightful occurrence especially with my interest in neuromorphic computing. From a July 22, 2020 Human Brain Project press release (also on EurekAlert),

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state where systems can quickly change their overall characteristics in fundamental ways, transitioning e.g. between order and chaos or stability and instability. Therefore, the critical state is widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI [artificial intelligence] applications.

Researchers from the HBP [Human Brain Project] partner Heidelberg University and the Max-Planck-Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks with varying complexity at – and away from critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip. It is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted in the chip by changing the input strength, and then demonstrated a clear relation between criticality and task-performance. The assumption that criticality is beneficial for every task was not confirmed: whereas the information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory intensive tasks profited from it, while simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed very recently at the Max Planck Institute. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable in tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. Thereby tasks of varying complexity can be solved optimally within that space.

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computation properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics.

“As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.

Here’s a link to and a citation for the paper,

Control of criticality and computation in spiking neuromorphic networks with plasticity by Benjamin Cramer, David Stöckel, Markus Kreft, Michael Wibral, Johannes Schemmel, Karlheinz Meier & Viola Priesemann. Nature Communications volume 11, Article number: 2853 (2020) DOI: https://doi.org/10.1038/s41467-020-16548-3 Published: 05 June 2020

This paper is open access.

News from the Canadian Light Source (CLS), Canadian Science Policy Conference (CSPC) 2020, the International Symposium on Electronic Arts (ISEA) 2020, and HotPopRobot

I have some news about conserving art; early bird registration deadlines for two events, and, finally, an announcement about contest winners.

Canadian Light Source (CLS) and modern art

Rita Letendre. Victoire [Victory], 1961. Oil on canvas, Overall: 202.6 × 268 cm. Art Gallery of Ontario. Gift of Jessie and Percy Waxer, 1974, donated by the Ontario Heritage Foundation, 1988. © Rita Letendre L74.8. Photography by Ian Lefebvre

This is one of three pieces by Rita Letendre that underwent chemical mapping according to an August 5, 2020 CLS news release by Victoria Martinez (also received via email),

Research undertaken at the Canadian Light Source (CLS) at the University of Saskatchewan was key to understanding how to conserve experimental oil paintings by Rita Letendre, one of Canada’s most respected living abstract artists.

The work done at the CLS was part of a collaborative research project between the Art Gallery of Ontario (AGO) and the Canadian Conservation Institute (CCI) that came out of a recent retrospective Rita Letendre: Fire & Light at the AGO. During close examination, Meaghan Monaghan, paintings conservator from the Michael and Sonja Koerner Centre for Conservation, observed that several of Letendre’s oil paintings from the fifties and sixties had suffered significant degradation, most prominently, uneven gloss and patchiness, snowy crystalline structures coating the surface known as efflorescence, and cracking and lifting of the paint in several areas.

Kate Helwig, Senior Conservation Scientist at the Canadian Conservation Institute, says these problems are typical of mid-20th century oil paintings. “We focused on three of Rita Letendre’s paintings in the AGO collection, which made for a really nice case study of her work and also fits into the larger question of why oil paintings from that period tend to have degradation issues.”

Growing evidence indicates that paintings from this period have experienced these problems due to the combination of the experimental techniques many artists employed and the additives paint manufacturers had begun to use.

In order to determine more precisely how these factors affected Letendre’s paintings, the research team members applied a variety of analytical techniques, using microscopic samples taken from key points in the works.

“The work done at the CLS was particularly important because it allowed us to map the distribution of materials throughout a paint layer such as an impasto stroke,” Helwig said. The team used Mid-IR chemical mapping at the facility, which provides a map of different molecules in a small sample.

For example, chemical mapping at the CLS allowed the team to understand the distribution of the paint additive aluminum stearate throughout the paint layers of the painting Méduse. This painting showed areas of soft, incompletely dried paint, likely due to the high concentration and incomplete mixing of this additive. 

The painting Victoire had a crumbling base paint layer in some areas and cracking and efflorescence at the surface in others. Infrared mapping at the CLS allowed the team to determine that excess free fatty acids in the paint were linked to both problems; where the fatty acids were found at the base, they formed zinc “soaps” which led to crumbling and cracking, and where they had moved to the surface, they had crystallized, causing the snowflake-like efflorescence.

AGO curators and conservators interviewed Letendre to determine what was important to her in preserving and conserving her works, and she highlighted how important an even gloss across the surface was to her artworks, and the philosophical importance of the colour black in her paintings. These priorities guided conservation efforts, while the insights gained through scientific research will help maintain the works in the long term.

In order to restore the black paint to its intended even finish for display, conservator Meaghan Monaghan removed the white crystallization from the surface of Victoire, but it is possible that it could begin to recur. Understanding the processes that lead to this degradation will be an important tool to keep Letendre’s works in good condition.

“The world of modern paint research is complicated; each painting is unique, which is why it’s important to combine theoretical work on model paint systems with this kind of case study on actual works of art” said Helwig. The team hopes to collaborate on studying a larger cross section of Letendre’s paintings in oil and acrylic in the future to add to the body of knowledge.

Here’s a link to and a citation for the paper,

Rita Letendre’s Oil Paintings from the 1960s: The Effect of Artist’s Materials on Degradation Phenomena by Kate Helwig, Meaghan Monaghan, Jennifer Poulin, Eric J. Henderson & Maeve Moriarty. Studies in Conservation (2020): 1-15 DOI: https://doi.org/10.1080/00393630.2020.1773055 Published online: 06 Jun 2020

This paper is behind a paywall.

Canadian Science Policy Conference (CSPC) 2020

The latest news from the CSPC 2020 (November 16–20, with preconference events from Nov. 1–14) organizers is that registration is open and early birds have a deadline of September 27, 2020 (from an August 6, 2020 CSPC 2020 announcement received via email),

It’s time! Registration for the 12th Canadian Science Policy Conference (CSPC 2020) is open now. Early Bird registration is valid until Sept. 27th [2020].

CSPC 2020 is coming to your offices and homes:

Register for full access to 3 weeks of programming of the biggest science and innovation policy forum of 2020 under the overarching theme: New Decade, New Realities: Hindsight, Insight, Foresight.

2500+ Participants

300+ Speakers from five continents

65+ Panel sessions, 15 pre-conference sessions and symposiums

50+ On demand videos and interviews with the most prominent figures of science and innovation policy 

20+ Partner-hosted functions

15+ Networking sessions

15 Open mic sessions to discuss specific topics

The virtual conference features an exclusive array of offerings:

3D Lounge and Exhibit area

Advance access to the Science Policy Magazine, featuring insightful reflections from the frontier of science and policy innovation

Many more

Don’t miss this unique opportunity to engage in the most important discussions of science and innovation policy with insights from around the globe, just from your office, home desk, or your mobile phone.

Benefit from significantly reduced registration fees for an online conference with an option for discount for multiple ticket purchases

Register now to benefit from the Early Bird rate!

The preliminary programme can be found here. This year there will be some discussion of a Canadian synthetic biology roadmap, presentations on various Indigenous concerns (mostly health), a climate challenge presentation focusing on Mexico and social vulnerability, and another on parallels between climate challenges and COVID-19. There are many presentations focused on COVID-19 and/or health.

There doesn’t seem to be much focus on cyber security and, given that we just lost two ice caps (see Brandon Spektor’s August 1, 2020 article [Two Canadian ice caps have completely vanished from the Arctic, NASA imagery shows] on the Live Science website), it’s surprising that there are no presentations concerning the Arctic.

International Symposium on Electronic Arts (ISEA) 2020

According to my latest information, the early bird rate for ISEA 2020 (Oct. 13–18) ends on August 13, 2020. (My June 22, 2020 posting describes their plans for the online event.)

You can find registration information here.

Margaux Davoine has written up a teaser for the 2020 edition of ISEA in the form of an August 6, 2020 interview with Yan Breuleux. I’ve excerpted one bit,

Finally, thinking about this year’s theme [Why Sentience?], there might be something a bit ironic about exploring the notion of sentience (historically reserved for biological life, and quite a small subsection of it) through digital media and electronic arts. There’s been much work done in the past 25 years to loosen the boundaries between such distinctions: how do you imagine ISEA2020 helping in that?

The similarities shared between humans, animals, and machines are fundamental in cybernetic sciences. According to the founder of cybernetics Norbert Wiener, the main tenets of the information paradigm – the notion of feedback – can be applied to humans, animals as well as the material world. Famously, the AA predictor (as analysed by Peter Galison in 1994) can be read as a first attempt at human-machine fusion (otherwise known as a cyborg).

The infamous Turing test also tends to blur the lines between humans and machines, between language and informational systems. Second-order cybernetics are often associated with biologists Francisco Varela and Humberto Maturana. The very notion of autopoiesis (a system capable of maintaining a certain level of stability in an unstable environment) relates back to the concept of homeostasis formulated by Willam Ross [William Ross Ashby] in 1952. Moreover, the concept of “ecosystems” emanates directly from the field of second-order cybernetics, providing researchers with a clearer picture of the interdependencies between living and non-living organisms. In light of these theories, the absence of boundaries between animals, humans, and machines constitutes the foundation of the technosciences paradigm. New media, technological arts, virtual arts, etc., partake in the dialogue between humans and machines, and thus contribute to the prolongation of this paradigm. Frank Popper nearly called his book “Techno Art” instead of “Virtual Art”, in reference to technosciences (his editor suggested the name change). For artists in the technological arts community, Jakob von Uexkull’s notion of “human-animal milieu” is an essential reference. Also present in Simondon’s reflections on human environments (both natural and artificial), the notion of “milieu” is quite important in the discourses about art and the environment. Concordia University’s artistic community chose the concept of “milieu” as the rallying point of its research laboratories.

ISEA2020’s theme resonates particularly well with the recent eruption of processing and artificial intelligence technologies. For me, Sentience is a purely human and animal idea: machines can only simulate our ways of thinking and feeling. Partly in an effort to explore the illusion of sentience in computers, Louis-Philippe Rondeau, Benoît Melançon and I have established the Mimesis laboratory at NAD University. Processing and AI technologies are especially useful in the creation of “digital doubles”, “Vactors”, real-time avatar generation, Deep Fakes and new forms of personalised interactions.

I adhere to the epistemological position that the living world is immeasurable. Through their ability to simulate, machines can merely reduce complex logics to a point of understandability. The utopian notion of empathetic computers is an idea mostly explored by popular science-fiction movies. Nonetheless, research into computer sentience allows us to devise possible applications, explore notions of embodiment and agency, and thereby develop new forms of interaction. Beyond my own point of view, the idea that machines can somehow feel emotions gives artists and researchers the opportunity to experiment with certain findings from the fields of the cognitive sciences, computer sciences and interactive design. For example, in 2002 I was particularly struck by an immersive installation at the Universal Exhibition in Neuchâtel, Switzerland, titled Ada: Intelligence Space. The installation comprised an artificial environment controlled by a computer, which interacted with the audience on the basis of artificial emotion. The system encouraged visitors to participate by intelligently analysing their movements and sounds. Another example, Louis-Philippe Demers’ Blind Robot (2012), demonstrates how artists can be both critical of, and amazed by, these new forms of knowledge. Additionally, the 2016 BIAN (Biennale internationale d’art numérique), organized by ELEKTRA (Alain Thibault), explored the various ways these concepts were appropriated in installation and interactive art. The way I see it, current works of digital art operate as boundary objects. The varied usages and interpretations of a particular work of art allow it to be analyzed from nearly every angle or field of study. Thus, philosophers can ask themselves: how does a computer come to understand what being human really is?

I have yet to attend conferences or exchange with researchers on that subject. Given the sheer number of presentation proposals sent to ISEA2020, though, I have no doubt that the symposium will be the ideal context to reflect on the concept of Sentience and the many issues raised therein.

For the last bit of news.

HotPopRobot, one of six global winners of 2020 NASA SpaceApps COVID-19 challenge

I last wrote about HotPopRobot’s (Artash and Arushi with a little support from their parents) response to the 2020 NASA (US National Aeronautics and Space Administration) SpaceApps challenge in my July 1, 2020 post, Toronto COVID-19 Lockdown Musical: a data sonification project from HotPopRobot. (You’ll find a video of the project embedded in the post.)

Here’s more news from HotPopRobot’s August 4, 2020 posting (Note: Links have been removed),

Artash (14 years) and Arushi (10 years). Toronto.

We are excited to become the global winners of the 2020 NASA SpaceApps COVID-19 Challenge from among 2,000 teams from 150 countries. The six Global Winners will be invited to visit a NASA Rocket Launch site to view a spacecraft launch along with the SpaceApps Organizing team once travel is deemed safe. They will also receive an invitation to present their projects to NASA, ESA [European Space Agency], JAXA [Japan Aerospace Exploration Agency], CNES [Centre National D’Etudes Spatiales; France], and CSA [Canadian Space Agency] personnel. https://covid19.spaceappschallenge.org/awards

15,000 participants joined together to submit over 1,400 projects for the COVID-19 Global Challenge that was held on 30-31 May 2020. 40 teams made it to the Global Finalists. Amongst them, 6 teams became the global winners!

The 2020 SpaceApps was an international collaboration between NASA, Canadian Space Agency, ESA, JAXA, CSA,[sic] and CNES focused on solving global challenges. During a period of 48 hours, participants from around the world were required to create virtual teams and solve any of the 12 challenges related to the COVID-19 pandemic posted on the SpaceApps website. More details about the 2020 SpaceApps COVID-19 Challenge:  https://sa-2019.s3.amazonaws.com/media/documents/Space_Apps_FAQ_COVID_.pdf

We have been participating in NASA Space Challenge for the last seven years since 2014. We were only 8 years and 5 years respectively when we participated in our very first SpaceApps 2014.

We have grown up learning more about space, tackling global challenges, making hardware and software projects, participating in meetings, networking with mentors and teams across the globe, and giving presentations through the annual NASA Space Apps Challenges. This is one challenge we look forward to every year.

It has been a fun and exciting journey meeting so many people and astronauts and visiting several fascinating places on the way! We hope more kids, youths, and families are inspired by our space journey. Space is for all and is yours to discover!

If you have the time, I recommend reading HotPopRobot’s August 4, 2020 posting in its entirety.

China’s neuromorphic chips: Darwin and Tianjic

I believe that China has more than two neuromorphic chips. The two being featured here are the ones for which I was easily able to find information.

The Darwin chip

The first information (that I stumbled across) about China and a neuromorphic chip (Darwin) was in a December 22, 2015 Science China Press news release on EurekAlert,

Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and has been broadly applied in application domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. Spiking Neural Network (SNN) is a type of biologically-inspired ANN that performs information processing based on discrete-time spikes. It is more biologically realistic than classic ANNs, and can potentially achieve a much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated by standard CMOS technology.

With the rapid development of the Internet-of-Things and intelligent hardware systems, a variety of intelligent devices are pervasive in today’s society, providing many services and convenience to people’s lives, but they also raise challenges of running complex intelligent algorithms on small devices. Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, with a target application domain of resource-constrained, low-power small embedded devices. It has been fabricated by 180nm standard CMOS process, supporting a maximum of 2048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via USB port.

The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, hence it can be used to implement different functionalities as configured by the user. Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains. As a prototype application in Brain-Computer Interfaces, Figure 2 [not included here] describes an application example of recognizing the user’s motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in the virtual environment. Different from conventional EEG signal analysis algorithms, the input and output to Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
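The readout rule at the end of that description (pick the output neuron with the highest firing rate) takes only a few lines to express. The spike trains below are hypothetical examples of my own; this is not the Darwin NPU's actual interface,

```python
# Winner-take-all readout: the output neuron that fired most often in
# the decision window wins. Spike trains here are hypothetical lists of
# spike times, invented for illustration.

def classify(output_spike_trains, labels):
    """Return the label of the output neuron with the highest firing rate."""
    rates = [len(train) for train in output_spike_trains]
    return labels[rates.index(max(rates))]

# e.g. two output neurons for a left/right motor-imagery decision
left_spikes = [1.2, 3.4, 5.0, 7.1]   # 4 spikes in the window
right_spikes = [2.0, 6.5]            # 2 spikes
print(classify([left_spikes, right_spikes], ["left", "right"]))  # -> left
```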

The most recent development for this chip was announced in a September 2, 2019 Zhejiang University press release (Note: Links have been removed),

The second generation of the Darwin Neural Processing Unit (Darwin NPU 2) as well as its corresponding toolchain and micro-operating system was released in Hangzhou recently. This research was led by Zhejiang University, with Hangzhou Dianzi University and Huawei Central Research Institute participating in the development and algorithms of the chip. The Darwin NPU 2 can be primarily applied to smart Internet of Things (IoT). It can support up to 150,000 neurons and has achieved the largest-scale neurons on a nationwide basis.

The Darwin NPU 2 is fabricated by standard 55nm CMOS technology. Every “neuromorphic” chip is made up of 576 kernels, each of which can support 256 neurons. It contains over 10 million synapses which can construct a powerful brain-inspired computing system.

“A brain-inspired chip can work like the neurons inside a human brain and it is remarkably unique in image recognition, visual and audio comprehension and naturalistic language processing,” said MA De, an associate professor at the College of Computer Science and Technology on the research team.

“In comparison with traditional chips, brain-inspired chips are more adept at processing ambiguous data, say, perception tasks. Another prominent advantage is their low energy consumption. In the process of information transmission, only those neurons that receive and process spikes will be activated while other neurons will stay dormant. In this case, energy consumption can be extremely low,” said Dr. ZHU Xiaolei at the School of Microelectronics.

To cater to the demands for voice business, Huawei Central Research Institute designed an efficient spiking neural network algorithm in accordance with the defining feature of the Darwin NPU 2 architecture, thereby increasing computing speeds and improving recognition accuracy tremendously.

Scientists have developed a host of applications, including gesture recognition, image recognition, voice recognition and decoding of electroencephalogram (EEG) signals, on the Darwin NPU 2 and reduced energy consumption by at least two orders of magnitude.

In comparison with the first generation of the Darwin NPU which was developed in 2015, the Darwin NPU 2 has escalated the number of neurons by two orders of magnitude from 2048 neurons and augmented the flexibility and plasticity of the chip configuration, thus expanding the potential for applications appreciably. The improvement in the brain-inspired chip will bring in its wake the revolution of computer technology and artificial intelligence. At present, the brain-inspired chip adopts a relatively simplified neuron model, but neurons in a real brain are far more sophisticated and many biological mechanisms have yet to be explored by neuroscientists and biologists. It is expected that in the not-too-distant future, a fascinating improvement on the Darwin NPU 2 will come over the horizon.

I haven’t been able to find a recent (i.e., post-2017) research paper featuring Darwin, but there is another chip, and research on that one was published in July 2019. First, the news.

The Tianjic chip

A July 31, 2019 article in the New York Times by Cade Metz describes the research and offers what seems to be a jaundiced perspective about the field of neuromorphic computing (Note: A link has been removed),

As corporate giants like Ford, G.M. and Waymo struggle to get their self-driving cars on the road, a team of researchers in China is rethinking autonomous transportation using a souped-up bicycle.

This bike can roll over a bump on its own, staying perfectly upright. When the man walking just behind it says “left,” it turns left, angling back in the direction it came.

It also has eyes: It can follow someone jogging several yards ahead, turning each time the person turns. And if it encounters an obstacle, it can swerve to the side, keeping its balance and continuing its pursuit.

… Chinese researchers who built the bike believe it demonstrates the future of computer hardware. It navigates the world with help from what is called a neuromorphic chip, modeled after the human brain.

Here’s a video, released by the researchers, demonstrating the chip’s abilities,

Now back to Metz’s July 31, 2019 article (Note: A link has been removed),

The short video did not show the limitations of the bicycle (which presumably tips over occasionally), and even the researchers who built the bike admitted in an email to The Times that the skills on display could be duplicated with existing computer hardware. But in handling all these skills with a neuromorphic processor, the project highlighted the wider effort to achieve new levels of artificial intelligence with novel kinds of chips.

This effort spans myriad start-up companies and academic labs, as well as big-name tech companies like Google, Intel and IBM. And as the Nature paper demonstrates, the movement is gaining significant momentum in China, a country with little experience designing its own computer processors, but which has invested heavily in the idea of an “A.I. chip.”

If you can get past what seems to be a patronizing attitude, there are some good explanations and cogent criticisms in the piece (Metz’s July 31, 2019 article, Note: Links have been removed),

… it faces significant limitations.

A neural network doesn’t really learn on the fly. Engineers train a neural network for a particular task before sending it out into the real world, and it can’t learn without enormous numbers of examples. OpenAI, a San Francisco artificial intelligence lab, recently built a system that could beat the world’s best players at a complex video game called Dota 2. But the system first spent months playing the game against itself, burning through millions of dollars in computing power.

Researchers aim to build systems that can learn skills in a manner similar to the way people do. And that could require new kinds of computer hardware. Dozens of companies and academic labs are now developing chips specifically for training and operating A.I. systems. The most ambitious projects are the neuromorphic processors, including the Tianjic chip under development at Tsinghua University in China.

Such chips are designed to imitate the network of neurons in the brain, not unlike a neural network but with even greater fidelity, at least in theory.

Neuromorphic chips typically include hundreds of thousands of faux neurons, and rather than just processing 1s and 0s, these neurons operate by trading tiny bursts of electrical signals, “firing” or “spiking” only when input signals reach critical thresholds, as biological neurons do.
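The fire-on-threshold behaviour described in that paragraph is commonly modelled as a leaky integrate-and-fire neuron. Here is a minimal sketch of that idea; the parameter values are illustrative and not taken from any particular chip.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward zero between inputs, accumulates incoming current, and emits a
# spike (then resets) only when the potential crosses a threshold --
# the "firing only when input signals reach critical thresholds"
# behaviour described above. Parameter values are illustrative only.

def lif_run(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current      # leaky integration of the input
        if v >= threshold:          # fire only at the critical threshold
            spikes.append(t)
            v = 0.0                 # reset after the spike
    return spikes

# Weak inputs leak away and never reach threshold; stronger inputs
# accumulate until the neuron fires.
print(lif_run([0.05] * 10))           # []
print(lif_run([0.4, 0.4, 0.4, 0.4]))  # [2]
```

This sparsity is also the source of the energy advantage mentioned earlier: a neuron that never crosses threshold produces no spikes and so triggers no downstream activity.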

Tiernan Ray’s August 3, 2019 article about the chip for ZDNet.com offers some thoughtful criticism with a side dish of snark (Note: Links have been removed),

Nature magazine’s cover story [July 31, 2019] is about a Chinese chip [the Tianjic chip] that can run traditional deep learning code and also perform “neuromorphic” operations in the same circuitry. The work’s value seems obscured by a lot of hype about “artificial general intelligence” that has no real justification.

The term “artificial general intelligence,” or AGI, doesn’t actually refer to anything, at this point, it is merely a placeholder, a kind of Rorschach Test for people to fill the void with whatever notions they have of what it would mean for a machine to “think” like a person.

Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point, a research paper featured on the cover of this week’s Nature magazine about a new kind of computer chip developed by researchers at China’s Tsinghua University that could “accelerate the development of AGI,” they claim.

The chip is a strange hybrid of approaches, and is intriguing, but the work leaves unanswered many questions about how it’s made, and how it achieves what researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.

“This paper is an example of the good work that China is doing in AI,” says Linley Gwennap, longtime chip-industry observer and principal analyst with chip analysis firm The Linley Group. “But this particular idea isn’t going to take over the world.”

The premise of the paper, “Towards artificial general intelligence with hybrid Tianjic chip architecture,” is that to achieve AGI, computer chips need to change. That’s an idea supported by fervent activity these days in the land of computer chips, with lots of new chip designs being proposed specifically for machine learning.

The Tsinghua authors specifically propose that the mainstream machine learning of today needs to be merged in the same chip with what’s called “neuromorphic computing.” Neuromorphic computing, first conceived by Caltech professor Carver Mead in the early ’80s, has been an obsession for firms including IBM for years, with little practical result.

[Missing details about the chip] … For example, the part is said to have “reconfigurable” circuits, but how the circuits are to be reconfigured is never specified. It could be so-called “field programmable gate array,” or FPGA, technology or something else. Code for the project is not provided by the authors as it often is for such research; the authors offer to provide the code “on reasonable request.”

More important is the fact the chip may have a hard time stacking up to a lot of competing chips out there, says analyst Gwennap. …

“What the paper calls ANN and SNN are two very different means of solving similar problems, kind of like rotating (helicopter) and fixed wing (airplane) are for aviation,” says Gwennap. “Ultimately, I expect ANN [?] and SNN [spiking neural network] to serve different end applications, but I don’t see a need to combine them in a single chip; you just end up with a chip that is OK for two things but not great for anything.”

But you also end up generating a lot of buzz, and given the tension between the U.S. and China over all things tech, and especially A.I., the notion China is stealing a march on the U.S. in artificial general intelligence — whatever that may be — is a summer sizzler of a headline.

ANN could be either artificial neural network or something mentioned earlier in Ray’s article, a shortened version of CANN [continuous attractor neural network].

Shelly Fan’s August 7, 2019 article for the SingularityHub is almost as enthusiastic about the work as the podcasters for Nature magazine were (a little more about that later),

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.

BTW, Fan is a neuroscientist (from her SingularityHub profile page),

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF [University of California at San Francisco] to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, “Will AI Replace Us?” (Thames & Hudson) will be out April 2019.

Onto Nature. Here’s a link to and a citation for the paper,

Towards artificial general intelligence with hybrid Tianjic chip architecture by Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, Feng Chen, Ning Deng, Si Wu, Yu Wang, Yujie Wu, Zheyu Yang, Cheng Ma, Guoqi Li, Wentao Han, Huanglong Li, Huaqiang Wu, Rong Zhao, Yuan Xie & Luping Shi. Nature volume 572, pages 106–111 (2019) DOI: https://doi.org/10.1038/s41586-019-1424-8 Published: 31 July 2019 Issue Date: 01 August 2019

This paper is behind a paywall.

The July 31, 2019 Nature podcast includes a segment about the Tianjic chip research from China; it starts at the 9 min. 13 sec. mark (AI hardware), or you can scroll down about 55% of the way through the page to the transcript of the interview with Luke Fleet, the Nature editor who dealt with the paper.

Some thoughts

The pundits put me in mind of my own reaction when I heard about phones that could take pictures. I didn’t see the point but, as it turned out, there was a perfectly good reason for combining what had been two separate activities into one device. It was no longer just a telephone and I had completely missed the point.

This too may be the case with the Tianjic chip. I think it’s too early to say whether it represents a new type of chip or a dead end.