While this research team was largely made up of researchers from the University of Ottawa, there were two members associated with the University of Talca (Universidad de Talca; located in Chile), two members associated with the University of Montreal (Université de Montréal), and one member associated with McGill University (located in Montréal).
Combining biomedical finesse and nature-inspired engineering, a uOttawa-led team of scientists have created a jelly-like material that shows great potential for on-the-spot repair to a remarkable range of damaged organs and tissues in the human body.
Cutting-edge research co-led by uOttawa Faculty of Medicine Associate Professor Dr. Emilio I. Alarcón could eventually impact millions of lives with peptide-based hydrogels that will close skin wounds, deliver therapeutics to damaged heart muscle, as well as reshape and heal injured corneas.
“We are using peptides to fabricate therapeutic solutions. The team is drawing inspiration from nature to develop simple solutions for wound closure and tissue repair,” says Dr. Alarcón, a scientist and director at the BioEngineering and Therapeutic Solutions (BEaTS) group at the University of Ottawa Heart Institute, whose innovative research work is focused on developing new materials with capabilities for tissue regeneration.
Peptides are molecules found in living organisms, and hydrogels are water-based materials with a gelatinous texture that have proven useful in therapeutics.
The approach used in the study – just published in Advanced Functional Materials and co-led by Dr. Erik Suuronen & Dr. Marc Ruel – is unique. Most hydrogels explored in tissue engineering are animal-derived and protein-based materials, but the biomaterial created by the collaborative team is supercharged by engineered peptides. This makes it more clinically translatable.
Dr. Ruel, a full professor in the uOttawa Faculty of Medicine’s Department of Cellular and Molecular Medicine and the endowed chair of research in the Division of Cardiac Surgery at the University of Ottawa Heart Institute, says the study’s insights could be a game changer.
“Despite millennia of evolution, the human response to wound healing still remains imperfect,” Dr. Ruel says. “We see maladapted scarring in everything from skin incisions to eye injuries, to heart repair after a myocardial infarction. Drs. Alarcón, Suuronen, and the rest of our team have focused on this problem for almost two decades. The publication by Dr. Alarcón in Advanced Functional Materials reveals a novel way to make wound healing, organ healing, and even basic scarring after surgery much more therapeutically modulatable and, therefore, optimizable for human health.”
Indeed, the ability to modulate the peptide-based biomaterial is key. The uOttawa-led team’s hydrogels are designed to be customizable, making the durable material adaptable for use in a surprising range of tissues. Essentially, the two-component recipe could be adjusted to ramp up adhesivity or dial down other components depending on the part of the body needing repair.
“We were in fact very surprised by the range of applications our materials can achieve,” says Dr. Alarcón. “Our technology offers an integrated solution that is customizable depending on the targeted tissue.”
Dr. Alarcón says that not only does the study’s data suggest that the therapeutic action of the biomimetic hydrogels is highly effective, but their application is also far simpler and more cost-effective than other regenerative approaches.
The materials are engineered in a low-cost and scalable manner – hugely important qualities for any number of major biomedical applications. The team also devised a rapid-screening system that allowed them to significantly slash the design costs and testing timespans.
“This significant reduction in cost and time not only makes our material more economically viable but also accelerates its potential for clinical use,” Dr. Alarcón says.
What are next steps for the talent-rich research team? They will conduct large animal tests in preparation for tests in human subjects. So far, heart and skin tests were conducted with rodents, and the cornea work was done ex vivo.
Part of the work for this study was funded by the uOttawa Faculty of Medicine’s “Path to Patenting & Pre-Commercialization” (3P), an innovation-focused approach to provide our community’s top-flight researchers with the assistance needed to bring their most promising breakthroughs to the wider world.
Here’s a link to and a citation for the paper,
Multipurpose On-the-Spot Peptide-Based Hydrogels for Skin, Cornea, and Heart Repair by Alex Ross, Xixi Guo, German A. Mercado Salazar, Sergio David Garcia Schejtman, Jinane El-Hage, Maxime Comtois-Bona, Aidan Macadam, Irene Guzman-Soto, Hiroki Takaya, Kevin Hu, Bryan Liu, Ryan Tu, Bilal Siddiqi, Erica Anderson, Marcelo Muñoz, Patricio Briones-Rebolledo, Tianqin Ning, May Griffith, Benjamin Rotstein, Horacio Poblete, Jianyu Li, Marc Ruel, Erik J. Suuronen, Emilio I. Alarcon. Advanced Functional Materials. DOI: https://doi.org/10.1002/adfm.202402564 First published: 23 April 2024
A theoretical possibility has been proven by an international team including researchers from the Université de Montréal (University of Montreal) according to a March 27, 2024 news item on phys.org,
For years, C130 fullertubes—molecules made up of 130 carbon atoms—have existed only in theory. Now, leading an international team of scientists, a UdeM doctoral student in physics has successfully shown them in real life—and even managed to capture some in a photograph.
This feat in the realm of basic research has led Emmanuel Bourret to have a cover-page illustration of his discovery in a prestigious scientific journal, the Journal of the American Chemical Society.
First published online last October [2023], the discovery was made by Bourret as lead scientist of an inter-university team that also included researchers from Purdue University, Virginia Tech and the Oak Ridge National Laboratory, in Tennessee.
A fullertube is basically an assembly of carbon atoms arranged to form a closed tubular cage. It is related to fullerenes, molecules that are represented as cages of interconnected hexagons and pentagons, and come in a wide variety of sizes and shapes.
For example, a C60 fullerene is made up of 60 carbon atoms and is shaped like a soccer ball. It is relatively small, spherical and very abundant. C120 fullerenes are less common. They are longer and shaped like a tube capped at both ends with the two halves of a C60 fullerene.
Found in soot
The C130 fullertube (or C130-D5h, its full scientific name) is more elongated than the C120 and even rarer. To isolate it, Bourret and his team generated an electric arc between two graphite electrodes to produce soot containing fullerene and fullertube molecules. The electronic structure of these molecules was then calculated using density functional theory (DFT).
“Drawing on principles of quantum mechanics, DFT enables us to calculate electronic structures and predict the properties of a molecule using the fundamental rules of physics,” explained Bourret’s thesis supervisor, UdeM physics professor Michel Côté, a researcher at the university’s Institut Courtois.
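For technically inclined readers, here is a minimal sketch of what a DFT calculation looks like in practice, using the open-source Python package pyscf on a tiny test molecule (water). This is purely illustrative; the C130 work would have required far larger calculations and may well have used different software and settings.

```python
from pyscf import gto, dft

# Tiny illustrative DFT run on water; a C130 fullertube would need far more compute.
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="def2-svp")

mf = dft.RKS(mol)      # restricted Kohn-Sham DFT
mf.xc = "pbe"          # a common exchange-correlation functional
energy = mf.kernel()   # run the self-consistent field calculation

homo = mol.nelectron // 2 - 1
print("Total energy (Hartree):", energy)
print("HOMO-LUMO gap (Hartree):", mf.mo_energy[homo + 1] - mf.mo_energy[homo])
```

The same workflow, scaled up, is what lets researchers predict a molecule’s electronic properties before (or alongside) isolating it in the lab.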
Using special software, Bourret was able to describe the structure of the C130 molecule: it is a tube with two hemispheres at the ends, making it look like a microscopic capsule. It measures just under 2 nanometres long by 1 nm wide (a nanometre is one billionth of a metre).
“The structure of the tube is basically made up of atoms arranged in hexagons,” said Bourret. “At the two ends, these hexagons are linked by pentagons, giving them their rounded shape.”
Bourret began doing theoretical work on fullertubes in 2014 under his then-supervisor Jiri Patera, an UdeM mathematics professor. After Patera passed away in January 2022, Bourret then approached Côté, who became his new supervisor.
Existence shown in 2020
Two years before that, Bourret had read an article by Purdue University at Fort Wayne professor Steven Stevenson, who described the experimental isolation of certain fullertubes, demonstrating their existence but not identifying all of them.
Under Côté’s guidance, Bourret set to work advancing knowledge on the topic.
“Emmanuel had a strong background in abstract mathematics,” Côté recalled, “and he added an interesting dimension to my research group, which focuses on more computational approaches.”
Are any possible future applications in the offing?
“It’s hard to say at this stage, but one possibility might be the production of hydrogen,” said Côté. “Currently, what’s used is a catalyst made of platinum and rubidium, both of which are rare and expensive. Replacing them with carbon structures such as C130 would make it possible to produce hydrogen in a ‘greener’ way.”
Last year, Bourret’s groundbreaking work earned him an invitation to deliver a paper at the annual meeting of the U.S. Electrochemical Society (ECS), in Boston. This May [2024], he’ll chair a panel on fullerenes and fullertubes at the ECS annual meeting in San Francisco.
Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).
A very software approach?
This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,
In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.
…
The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.
…
The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),
At a glance
The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.
Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.
Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.
The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.
Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.
…
While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.
The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:
*The ban of AI systems posing unacceptable risks will apply six months after the entry into force
*Codes of practice will apply nine months after entry into force
*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
…
This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”
… The AI Act is expected to come into effect in late 2025 or early 2026.[109]
I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” information about legislative efforts is also included, although you might find that my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress or lack thereof.
A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,
Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.
A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.
“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
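A quick back-of-the-envelope check of those figures (my arithmetic, not the report’s) shows the two claims are roughly consistent:

```python
import math

years = 13                              # "thirteen years ago," i.e. since about 2010
print(2 ** (2 * years))                 # doubling every 6 months: 2^26, about 67 million-fold

claimed_growth = 350e6                  # "350 million times more compute"
doublings = math.log2(claimed_growth)   # about 28.4 doublings
print(12 * years / doublings)           # implied doubling time, about 5.5 months
```

In other words, “350 million times” implies a doubling time of roughly five and a half months, close enough to the report’s “around every six months.”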
Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.
Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.
For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
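The report doesn’t specify what a registry entry would contain, so purely as a thought experiment, here is a sketch of what a single transfer report might look like; every field name is my own invention.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChipTransferRecord:
    chip_id: str               # the suggested unique per-chip identifier
    chip_model: str            # accelerator model being transferred
    reporting_party: str       # producer, seller or reseller filing the report
    recipient: str
    quantity: int
    transfer_date: date
    destination_country: str

# Hypothetical example entry; all values are made up for illustration.
record = ChipTransferRecord("CHIP-000001", "example-accelerator", "ExampleFab",
                            "ExampleCloudCo", 1024, date(2024, 2, 14), "Example Country")
print(record)
```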
“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.
The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.
“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.
*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.
As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.
Two molecular languages at the origin of life have been successfully recreated and mathematically validated, thanks to pioneering work by Canadian scientists at Université de Montréal.
Published this week in the Journal of the American Chemical Society, the breakthrough opens new doors for the development of nanotechnologies, with applications ranging from biosensing and drug delivery to molecular imaging.
Living organisms are made up of billions of nanomachines and nanostructures that communicate to create higher-order entities able to do many essential things, such as moving, thinking, surviving and reproducing.
“The key to life’s emergence relies on the development of molecular languages – also called signalling mechanisms – which ensure that all molecules in living organisms are working together to achieve specific tasks,” said the study’s principal investigator, UdeM bioengineering professor Alexis Vallée-Bélisle.
In yeasts, for example, upon detecting and binding a mating pheromone, billions of molecules will communicate and coordinate their activities to initiate union, said Vallée-Bélisle, holder of a Canada Research Chair in Bioengineering and Bionanotechnology.
“As we enter the era of nanotechnology, many scientists believe that the key to designing and programming more complex and useful artificial nanosystems relies on our ability to understand and better employ molecular languages developed by living organisms,” he said.
Two types of languages
One well-known molecular language is allostery. The mechanism of this language is “lock-and-key”: a molecule binds and modifies the structure of another molecule, directing it to trigger or inhibit an activity.
Another, lesser-known molecular language is multivalency, also known as the chelate effect. It works like a puzzle: as one molecule binds to another, it facilitates (or not) the binding of a third molecule by simply increasing its binding interface.
Although these two languages are observed in all molecular systems of all living organisms, it is only recently that scientists have started to understand their rules and principles – and so use these languages to design and program novel artificial nanotechnologies.
“Given the complexity of natural nanosystems, before now nobody was able to compare the basic rules, advantage or limitations of these two languages on the same system,” said Vallée-Bélisle.
To do so, his doctoral student Dominic Lauzon, first author of the study, had the idea of creating a DNA-based molecular system that could function using both languages. “DNA is like Lego bricks for nanoengineers,” said Lauzon. “It’s a remarkable molecule that offers simple, programmable and easy-to-use chemistry.”
Simple mathematical equations to detect antibodies
The researchers found that simple mathematical equations could well describe both languages, which unravelled the parameters and design rules to program the communication between molecules within a nanosystem.
For example, while the multivalent language enabled control of both the sensitivity and cooperativity of the activation or deactivation of the molecules, the corresponding allosteric translation only enabled control of the sensitivity of the response.
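The paper’s own equations aren’t reproduced in the press release, but the standard textbook form for this kind of switch-like response is the Hill equation, which captures both ideas in two parameters:

\[
\theta([L]) \;=\; \frac{[L]^{\,n}}{K^{n} + [L]^{\,n}}
\]

Here \(K\) sets the sensitivity (the input concentration at which the response is half-maximal) and \(n\) sets the cooperativity (how steep or switch-like the activation is); \(n = 1\) gives a simple graded response, while \(n > 1\) gives the sharper, more cooperative behaviour described above. Again, this is the generic form, not necessarily the exact equations used in the study.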
With this new understanding at hand, the researchers used the language of multivalency to design and engineer a programmable antibody sensor that allows the detection of antibodies over different ranges of concentration.
“As shown with the recent pandemic, our ability to precisely monitor the concentration of antibodies in the general population is a powerful tool to determine the people’s individual and collective immunity,” said Vallée-Bélisle.
In addition to expanding the synthetic toolbox to create the next generation of nanotechnology, the scientists’ discovery also shines a light on why some natural nanosystems may have selected one language over another to communicate chemical information.
I associate the idea of ‘creative destruction’ with economics and Joseph Schumpeter but it is more widespread and has a much longer history (see more at the end of this posting).
Here we have Université de Montréal researchers being inspired by the idea from (what was to me) an unexpected source, according to a February 9, 2023 news item on Nanowerk,
“Every act of creation,” Picasso famously noted, “is first an act of destruction.”
Taking this concept literally, researchers in Canada have now discovered that “breaking” molecular nanomachines basic to life can create new ones that work even better.
Life on Earth is made possible by tens of thousands of nanomachines that have evolved over millions of years. Often made of proteins or nucleic acids, they typically contain thousands of atoms and are about 10,000 times smaller than the width of a human hair.
“These nanomachines control all molecular activities in our body, and problems with their regulation or structure are at the origin of most human diseases,” said the new study’s principal investigator Alexis Vallée-Bélisle, a chemistry professor at Université de Montréal.
Studying the way these nanomachines are built, Vallée-Bélisle, holder of the Canada Research Chair in Bioengineering and Bio-Nanotechnology, noticed that while some are made using a single component or part (often long biopolymers), others use several components that spontaneously assemble.
“Since most of my students spend their lives creating nanomachines, we started to wonder if it is more beneficial to create them using one or more self-assembling molecular components,” said Vallée-Bélisle.
A ‘destructive’ idea
To explore this question, his doctoral student Dominic Lauzon, had the “destructive” idea of breaking up some nanomachines to see if they could be reassembled. To do so, he made artificial DNA-based nanomachines that could be “destroyed” by breaking them up.
“DNA is a remarkable molecule that offers simple, programmable and easy-to-use chemistry,” said Lauzon, the study’s first author. “We believed that DNA-based nanomachines could help answer fundamental questions about the creation and evolution of natural and human-made nanomachines.”
Lauzon and Vallée-Bélisle spent years performing the experimental validations. They were able to demonstrate that nanomachines could easily withstand fragmentation, but more importantly, that such a destructive event allowed for the creation of various novel functionalities, including different sensitivity levels towards variation in component concentration, temperature and mutations.
What the researchers found is that these functionalities could arise simply by controlling the concentration of each individual component. For example, when a nanomachine was cut into three components, it was found to activate more sensitively at high concentrations of those components. In contrast, at low concentrations, nanomachines could be programmed to activate or deactivate at a specific moment in time or simply to inhibit their function.
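To make the concentration idea concrete, here is a toy mass-action model in Python. It is not the DNA system from the paper, just a minimal two-component sketch (A + B forming AB with dissociation constant Kd) showing why the assembly, and therefore the activity, of a multi-component machine depends on how much of each component is present:

```python
import numpy as np

def fraction_assembled(C, Kd=1.0):
    """Fraction of component A bound into the AB complex when [A]_tot = [B]_tot = C."""
    b = 2 * C + Kd
    ab = (b - np.sqrt(b ** 2 - 4 * C ** 2)) / 2   # solves (C - x)^2 = Kd * x
    return ab / C

for C in (0.01, 0.1, 1.0, 10.0, 100.0):           # total concentrations in units of Kd
    print(f"C = {C:6.2f} x Kd  ->  fraction assembled = {fraction_assembled(C):.2f}")
```

At low concentrations almost nothing assembles and at high concentrations nearly everything does; splitting a machine into more pieces makes that dependence steeper still, which is the kind of tunable behaviour the researchers describe.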
“Overall, these novel functionalities were created by simply cutting up, or destroying, the structure of an existing nanomachine,” said Lauzon. “These functionalities could drastically improve human-based nanotechnologies such as sensors, drug carriers and even molecular computers”.
Evolving new functionalities
Just as Picasso typically destroyed dozens of unfinished works to create his famous artworks, and just like muscles need to break down to get stronger, and innovative new companies are born by eliminating older competitors from the market, nanoscale machines can evolve new functionalities by being taken apart.
Unlike common machines like cell phones, televisions and cars, which are made by combining components using screws and bolts, glue, solder or electronics, “nanomachines rely on thousands of weak dynamic intermolecular forces that can spontaneously reform, enabling broken nanomachines to re-assemble,” said Vallée-Bélisle.
In addition to providing nanotechnology researchers with a simple design strategy to create the next generation of nanomachines, the UdeM team’s findings also shed light on how natural molecular nanomachines may have evolved.
“Biologists have recently discovered that about 20 per cent of biological nanomachines may have evolved through the fragmentation of their genes,” said Vallée-Bélisle. “With our results, biologists now have a rational basis for understanding how the fragmentation of these ancestral proteins could have created new molecular functionalities for life on Earth.”
The Wikipedia entry for ‘Creative destruction’ is primarily on economic theory and various philosophies with no mention of Picasso. However, there is a fascinating segue into Eastern mysticism,
Other early usage
…
Hugo Reinert has argued that Sombart’s formulation of the concept was influenced by Eastern mysticism, specifically the image of the Hindu god Shiva, who is presented in the paradoxical aspect of simultaneous destroyer and creator.
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show called “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
Social justice
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
…
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
…
Eeek
As you go through the ‘Imitation Game’ you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Project Description
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and Dall-E-2 and the others?
Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
…
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
…
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
…
That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
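Since GANs keep coming up, here is a deliberately tiny sketch of the idea in PyTorch: a generator learns to mimic a target distribution using only a discriminator’s feedback. It is purely illustrative; the image-generating GANs behind works like “Portrait of Edmond de Belamy” use convolutional networks, large image datasets and far more compute.

```python
import torch
import torch.nn as nn

real_batch = lambda n: torch.randn(n, 1) * 1.25 + 4.0   # "real" data: samples from N(4, 1.25)
noise = lambda n: torch.randn(n, 8)                      # random input to the generator

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    # 1) Train the discriminator to score real samples high and generated ones low.
    loss_D = bce(D(real_batch(64)), ones) + bce(D(G(noise(64)).detach()), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to produce samples the discriminator mistakes for real.
    loss_G = bce(D(G(noise(64))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

samples = G(noise(5000))
print(samples.mean().item(), samples.std().item())   # should drift toward ~4.0 and ~1.25
```

The adversarial back-and-forth between the two networks is the ‘learning’ part; the ‘eventual output’ is simply whatever the trained generator produces from fresh random noise.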
As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
…
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
…
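For anyone wondering what “neural style transfer” actually involves, here is a compact sketch of the standard recipe in PyTorch: optimize an image so its deep features match a content image while its Gram-matrix statistics match a style image. It assumes a recent PyTorch/torchvision install, the file names are placeholders, and it is a sketch of the general technique, not of Oxia Palus’s pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights="DEFAULT").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

prep = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(256),
                           transforms.ToTensor()])
norm = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
load = lambda path: prep(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
content, style = load("content.jpg"), load("style.jpg")   # placeholder file names

def features(x, layers=(1, 6, 11, 20, 29)):     # a handful of VGG19 layers
    x, feats = norm(x), []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
        if i == max(layers):
            break
    return feats

def gram(f):                                     # "style" = correlations between feature maps
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_content = features(content)[-1].detach()
target_grams = [gram(f).detach() for f in features(style)]

img = content.clone().requires_grad_(True)       # start from the content image
opt = torch.optim.Adam([img], lr=0.02)
for step in range(300):
    feats = features(img)
    loss = F.mse_loss(feats[-1], target_content)                     # keep the content
    loss = loss + 1e4 * sum(F.mse_loss(gram(f), g)                   # impose the style
                            for f, g in zip(feats, target_grams))
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)
```

Seen this way, the technique is sophisticated pattern-matching on texture and brushwork statistics rather than the recovery of lost information, which is essentially Drimmer’s point.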
As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
…
If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.
AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.
*OpenMind is a non-profit project (see its About us page) run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company, to disseminate information on robotics and so much more.*
You can’t always get what you want
My friend,
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
US-centric
My friend,
I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities, for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more, given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
…
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]
…
Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].
…
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme and that was in 2017, when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.
Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.
In the end
It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.
July 27, 2022, the VAG held a virtual event with an artist,
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.
…
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.
It’s been a busy week for the Council of Canadian Academies (CCA); I don’t usually get two notices in such close order.
2022 science policy internship
The application deadline is Oct. 18, 2021, you will work remotely, and the stipend for the 2020 internship was $18,500 for six months.
Here’s more from a September 13, 2021 CCA notice (received Sept. 13, 2021 via email),
CCA Accepting Applications for Internship Program
…
The program provides interns with an opportunity to gain experience working at the interface of science and public policy. Interns will participate in the development of assessments by conducting research in support of CCA’s expert panel process.
The internship program is a full-time commitment of six months and will be a remote opportunity due to the Covid-19 pandemic.
Applicants must be recent graduates with a graduate or professional degree, or post-doctoral fellows, with a strong interest in the use of evidence for policy. The application deadline is October 18, 2021. The start date is January 10, 2022. Applications and letters of reference should be addressed to Anita Melnyk at internship@cca-reports.ca.
More information about the CCA Internship Program and the application process can be found here. [Note: The link takes you to a page with information about a 2020 internship opportunity; presumably, the application requirements have not changed.]
Good luck!
Expert Panel on Public Safety in the Digital Age Announced
I have a few comments (see the ‘Concerns and hopes’ subhead) about this future report but first, here’s the announcement of the expert panel that was convened to look into the matter of public safety (received via email September 15, 2021),
CCA Appoints Expert Panel on Public Safety in the Digital Age
…
Access to the internet and digital technologies are essential for people, businesses, and governments to carry out everyday activities. But as more and more activities move online, people and organizations are increasingly vulnerable to serious threats and harms that are enabled by constantly evolving technology. At the request of Public Safety Canada, [emphasis mine] the Council of Canadian Academies (CCA) has formed an Expert Panel to examine leading practices that could help address risks to public safety while respecting human rights and privacy. Jennifer Stoddart, O.C., Strategic Advisor, Privacy and Cybersecurity Group, Fasken Martineau DuMoulin [law firm], will serve as Chair of the Expert Panel.
“The ever-evolving nature of crimes and threats that take place online present a huge challenge for governments and law enforcement,” said Ms. Stoddart. “Safeguarding public safety while protecting civil liberties requires a better understanding of the impacts of advances in digital technology and the challenges they create.”
As Chair, Ms. Stoddart will lead a multidisciplinary group with expertise in cybersecurity, social sciences, criminology, law enforcement, and law and governance. The Panel will answer the following question:
Considering the impact that advances in information and communications technologies have had on a global scale, what do current evidence and knowledge suggest regarding promising and leading practices that could be applied in Canada for investigating, preventing, and countering threats to public safety while respecting human rights and privacy?
“This is an important question, the answer to which will have both immediate and far-reaching implications for the safety and well-being of people living in Canada. Jennifer Stoddart and this expert panel are very well-positioned to answer it,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA.
More information about the assessment can be found here.
The Expert Panel on Public Safety in the Digital Age:
Jennifer Stoddart (Chair), O.C., Strategic Advisor, Privacy and Cybersecurity Group, Fasken Martineau DuMoulin [law firm].
Benoît Dupont, Professor, School of Criminology, and Canada Research Chair in Cybersecurity and Research Chair for the Prevention of Cybercrime, Université de Montréal; Scientific Director, Smart Cybersecurity Network (SERENE-RISC). Note: This is one of Canada’s Networks of Centres of Excellence (NCE)
Richard Frank, Associate Professor, School of Criminology, Simon Fraser University; Director, International CyberCrime Research Centre. Note: This is an SFU/Society for the Policing of Cyberspace (POLCYB) partnership
Colin Gavaghan, Director, New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies, Faculty of Law, University of Otago.
Laura Huey, Professor, Department of Sociology, Western University; Founder, Canadian Society of Evidence Based Policing [Can-SEPB].
Emily Laidlaw, Associate Professor and Canada Research Chair in Cybersecurity Law, Faculty of Law, University of Calgary.
Arash Habibi Lashkari, Associate Professor, Faculty of Computer Science, University of New Brunswick; Research Coordinator, Canadian Institute for Cybersecurity [CIC].
Christian Leuprecht, Class of 1965 Professor in Leadership, Department of Political Science and Economics, Royal Military College; Director, Institute of Intergovernmental Relations, School of Policy Studies, Queen’s University.
Florian Martin-Bariteau, Associate Professor of Law and University Research Chair in Technology and Society, University of Ottawa; Director, Centre for Law, Technology and Society.
Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto.
Jad Saliba, Founder and Chief Technology Officer, Magnet Forensics Inc.
Heidi Tworek, Associate Professor, School of Public Policy and Global Affairs, and Department of History, University of British Columbia.
Oddly, there’s no mention that Jennifer Stoddart (Wikipedia entry) was Canada’s sixth privacy commissioner. Also, Fasken Martineau DuMoulin (her employer) changed its name to Fasken in 2017 (Wikipedia entry). The company currently has offices in Canada, the UK, South Africa, and China (Firm webpage on company website). As for the assessment itself, here’s more background (I believe this is from the CCA’s webpage for the project),
Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors [emphasis mine] to target individuals, businesses, and systems. Ultimately, serious crime facilitated by technology and harmful online activities pose a threat to the safety and well-being of people in Canada and beyond.
Damaging or criminal online activities can be difficult to measure and often go unreported. Law enforcement agencies and other organizations working to address issues such as the sexual exploitation of children, human trafficking, and violent extremism [emphasis mine] must constantly adapt their tools and methods to try and prevent and respond to crimes committed online.
A better understanding of the impacts of these technological advances on public safety and the challenges they create could help to inform approaches to protecting public safety in Canada.
This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.
The Sponsor:
Public Safety Canada
The Question:
Considering the impact that advances in information and communications technologies have had on a global scale, what do current evidence and knowledge suggest regarding promising and leading practices that could be applied in Canada for investigating, preventing, and countering threats to public safety while respecting human rights and privacy?
Three things stand out for me: first, what is public safety?; second, ‘malicious actors’; and third, the examples used for the issues being addressed (more about this in the ‘Concerns and hopes’ subsection, which follows).
What is public safety?
Before launching into any comments, here’s a description of Public Safety Canada (from their About webpage), where you’ll find a hodgepodge,
Public Safety Canada was created in 2003 to ensure coordination across all federal departments and agencies responsible for national security and the safety of Canadians.
Our mandate is to keep Canadians safe from a range of risks such as natural disasters, crime and terrorism.
Our mission is to build a safe and resilient Canada.
…
The Public Safety Portfolio
A cohesive and integrated approach to Canada’s security requires cooperation across government. Together, these agencies have an annual budget of over $9 billion and more than 66,000 employees working in every part of the country.
Public Safety Partner Agencies
The Canada Border Services Agency (CBSA) manages the nation’s borders by enforcing Canadian laws governing trade and travel, as well as international agreements and conventions. CBSA facilitates legitimate cross-border traffic and supports economic development while stopping people and goods that pose a potential threat to Canada.
The Canadian Security Intelligence Service (CSIS) investigates and reports on activities that may pose a threat to the security of Canada. CSIS also provides security assessments, on request, to all federal departments and agencies.
The Correctional Service of Canada (CSC) helps protect society by encouraging offenders to become law-abiding citizens while exercising reasonable, safe, secure and humane control. CSC is responsible for managing offenders sentenced to two years or more in federal correctional institutions and under community supervision.
The Parole Board of Canada (PBC) is an independent body that grants, denies or revokes parole for inmates in federal prisons and provincial inmates in provinces without their own parole board. The PBC helps protect society by facilitating the timely reintegration of offenders into society as law-abiding citizens.
The Royal Canadian Mounted Police (RCMP) enforces Canadian laws, prevents crime and maintains peace, order and security.
…
So, Public Safety includes a spy agency (CSIS), the prison system (the Correctional Service and the Parole Board), the national police force (RCMP), and law enforcement at the borders through the Canada Border Services Agency (CBSA). None of the partner agencies is dedicated to natural disasters, although they’re mentioned in the department’s mandate.
The focus is largely on criminal activity and espionage. On that note, a very senior civilian RCMP intelligence official, Cameron Ortis*, was charged with passing secrets to foreign entities (malicious actors?). (See the September 13, 2021 [updated Sept. 15, 2021] news article by Amanda Connolly, Mercedes Stephenson, Stewart Bell, Sam Cooper & Rachel Browne for CTV news and the Sept. 18, 2019 [updated January 6, 2020] article by Douglas Quan for the National Post for more details.)
There appears to have been at least one other major security breach, this one involving Canada’s only level four laboratory, the Winnipeg-based National Microbiology Lab (NML). (See a June 10, 2021 article by Karen Pauls for Canadian Broadcasting Corporation news online for more details.)
As far as I’m aware, Ortis is still being held with a trial date scheduled for September 2022 (see Catherine Tunney’s April 9, 2021 article for CBC news online) and, to date, there have been no charges laid in the Winnipeg lab case.
Concerns and hopes
Ordinarily I’d note links and relationships between the various expert panel members, but in this case it would be a big surprise if they weren’t linked in some fashion, as the panel seems to be heavily focused on cybersecurity (as per the panel members’ bios), which I imagine is a smallish community in Canada.
As I’ve made clear in the paragraphs leading into the comments, Canada appears to have seriously fumbled the ball where national and international cybersecurity is concerned.
So, getting back to those three things (what is public safety?, ‘malicious actors’, and the examples used for the issues), I’m a bit puzzled.
Public safety, as best I can tell, is just about anything they’d like it to be. ‘Malicious actors’ is a term I’ve seen used to imply a foreign power is behind the actions being held up for scrutiny.
The examples used for the issues being addressed (“sexual exploitation of children, human trafficking, and violent extremism”) hint at a focus on crimes that cross borders and on criminal organizations, as well as like-minded individuals organizing violent and extremist acts, but not specifically at any national or international security concerns.
On a more mundane note, I’m a little surprised that identity theft wasn’t mentioned as an example.
I’m hopeful there will be some examination of emerging technologies such as quantum communication (specifically, encryption issues) and artificial intelligence. I also hope the report will include a discussion about mistakes and overreliance on technology (for a refresher course on what happens when organizations, such as the Canadian federal government, make mistakes in the digital world, search ‘Phoenix payroll system’, a 2016 made-in-Canada and preventable debacle, which to this day is still being fixed).
In the end, I think the only topic that can be safely excluded from the report is climate change otherwise it’s a pretty open mandate as far as can be told from publicly available information.
I noticed the international panel member is from New Zealand (the international component is almost always from the US, UK, northern Europe, and/or the Commonwealth). Given that New Zealand (as well as being part of the Commonwealth) is one of the ‘Five Eyes’ intelligence community, which includes Canada, Australia, the UK, the US, and NZ, I was expecting a cybersecurity expert. If Professor Colin Gavaghan does have that expertise, it’s not obvious on his University of Otago profile page (Note: Links have been removed),
Research interests
Colin is the first director of the New Zealand Law Foundation sponsored Centre for Law and Policy in Emerging Technologies. The Centre examines the legal, ethical and policy issues around new technologies. To date, the Centre has carried out work on biotechnology, nanotechnology, information and communication technologies and artificial intelligence.
In addition to emerging technologies, Colin lectures and writes on medical and criminal law.
Together with colleagues in Computer Science and Philosophy, Colin is the leader of a three-year project exploring the legal, ethical and social implications of artificial intelligence for New Zealand.
Background
Colin regularly advises on matters of technology and regulation. He is first Chair of the NZ Police’s Advisory Panel on Emergent Technologies, and a member of the Digital Council for Aotearoa, which advises the Government on digital technologies. Since 2017, he has been a member (and more recently Deputy Chair) of the Advisory Committee on Assisted Reproductive Technology. He was an expert witness in the High Court case of Seales v Attorney General, and has advised members of parliament on draft legislation.
He is a frustrated writer of science fiction, but compensates with occasional appearances on panels at SF conventions.
…
I appreciate the sense of humour evident in that last line.
Almost breaking news
On Wednesday, September 15, 2021, an announcement was made of a new alliance in the Indo-Pacific region, the Three Eyes (Australia, UK, and US, or AUKUS).
Interestingly all three are part of the Five Eyes intelligence alliance comprised of Australia, Canada, New Zealand, UK, and US. Hmmm … Canada and New Zealand both border the Pacific and last I heard, the UK is still in Europe.
A September 17, 2021 article, “Canada caught off guard by exclusion from security pact” by Robert Fife and Steven Chase for the Globe and Mail (I’m quoting from my paper copy),
The Canadian government was surprised this week by the announcement of a new security pact among the United States, Britain and Australia, one that excluded Canada [and New Zealand too] and is aimed at confronting China’s growing military and political influence in the Indo-Pacific region, according to senior government officials.
Three officials, representing Canada’s Foreign Affairs, Intelligence and Defence departments, told the Globe and Mail that Ottawa was not consulted about the pact, and had no idea the trilateral security announcement was coming until it was made on Wednesday [September 15, 2021] by U.S. President Joe Biden, British Prime Minister Boris Johnson and Australian Prime Minister Scott Morrison.
…
The new trilateral alliance, dubbed AUKUS, after the initials of the three countries, will allow for greater sharing of information in areas such as artificial intelligence and cyber and underwater defence capabilities.
…
Fife and Chase have also written a September 17, 2021 Globe and Mail article titled, “Chinese Major-General worked with fired Winnipeg Lab scientist,”
…
… joint research conducted between Major-General Chen Wei and former Canadian government lab scientist Xiangguo Qiu indicates that co-operation between the Chinese military and scientists at the National Microbiology Laboratory (NML) went much higher than was previously known. The People’s Liberation Army is the military of China’s ruling Communist Party.
…
Given that no one overseeing the Canadian lab (a level 4 facility, which should have meant high security) seems to have known that Wei was a member of the military, and with the Cameron Ortis situation still looming, would you have included Canada in the new pact?
These workshops will inform recommendations to the Government of Canada on how to boost public awareness of and foster trust in AI. The conversations will be grounded in an understanding of the technology, its potential uses, and its associated risks.
Each workshop is approximately 2.5 hrs in length and free to attend. Our goal is to engage more than 1,000 people across Canada, building on the results of a national survey that was conducted in December 2020.
What to expect
Opening plenary session (15 min)
Breakout session with 6-10 participants
BREAK (10 minutes)
Recommendations (40 min)
Closing remarks (8 min)
Closing plenary session (22 min)
Registration
Oddly, there isn’t a registration link from the event page; you have to click on one of two (Regional or Youth) workshop tabs at the top of the page (this is from the Regional Workshops webpage),
Join us for a virtual workshop taking place in your region. Each workshop will include facilitated discussions based on Artificial Intelligence (AI) scenarios and provide an opportunity to share your views on AI.
To register by phone, please call Grace at 416-971-6937. If you require accommodations to participate, please contact events@cifar.ca.
The regions are split into the West (Pacific and Mountain time zones), Central (Central and Ontario time zones), and East (Newfoundland, Atlantic and Quebec time zones). There are French and English sessions in each of the three regions and they have included the North on the regional maps.
Questions
Sadly, the events team at CIFAR did not answer questions (I tried twice), nor did Julian Posada, who is apparently the facilitator for the workshops,
He’s also a PhD candidate at the University of Toronto.
Note Posada has identified Innovation, Science and Economic Development Canada (ISED) as the workshop organizer but it’s not listed on CIFAR’s Open Dialogue: Artificial Intelligence (AI) in Canada event page as an organizer or even one of the partners. There is this (from the event page),
The Government of Canada’s Advisory Council on Artificial Intelligence Public Awareness Working Group includes representatives from: AI Global | AI Network of BC | Amii | Brookfield Institute | Canadian Chamber of Commerce | CIFAR | DeepSense/Dalhousie | Glassbox | Ivado | Kids Code Jeunesse | Let’s Talk Science | Mila | Saskinteractive | Université de Montréal
The partners, represented by logos, are the Government of Canada (as in the Advisory Council?), Algora Lab, Université de Montréal, CIFAR and, for the Youth Workshops, Let’s Talk Science and Kids Code Jeunesse; workshop materials are being provided by the Canadian Commission for UNESCO (United Nations Educational, Scientific and Cultural Organization).
By the third time, I’d reworded a few things and added one or two questions, so here’s the final list as sent to Julian Posada on Thursday, March 18, 2021,
(1) I understand it’s a joint CIFAR/Government of Canada Advisory Council on Artificial Intelligence Public Awareness Working Group workshop series called Open Dialogue: Artificial Intelligence (AI) in Canada, is that correct and the series will be held from March 30 – April 30, 2021?
(2) Are regular folks invited to join in or is this primarily for academics, business people, entrepreneurs, AI researchers, and other cognoscenti?
(3) Will a distinction be made between AI and robots?
(4) Are you facilitating all of the planned workshops? Will you also have assigned leaders for the breakout groups or will that be decided amongst the participants? If leaders are assigned, who are they?
(5) What do you have planned for your workshop(s)? E.g., will participants be presented with various scenarios for discussion in the breakout groups? Or will participants be given specific topics to discuss, such as AI in the military? AI in seniors’ facilities (e.g., social or companion robots for seniors)? Etc.
(6) Are the workshops being conducted over Zoom and is a Zoom account required for participation? Is there an alternative technology being used?
(7) Will AI be used to review and analyze the sessions and data gathered?
(8) Are there security measures in place for the session and for the data, specifically, participants’ personal data given up during registration?
(9) Will participants get a copy of the report afterwards or notified when it’s made available?
Since the workshops start on March 30, 2021 and I’m sure everyone’s busy and not able to spare time for questions, I’ve elected to publish what I can about the workshops despite a few misgivings.
Critique
I’m glad to see this initiative and to note that the North is included. It would be interesting to learn how these workshops have been publicized (I stumbled across them in a retweet of Julian Posada’s announcement on my Twitter feed). However, it’s not vital.
Priorities for the Advisory Council on Artificial Intelligence
Artificial intelligence (AI) represents a set of complex and powerful technologies that will touch or transform every sector and industry. It has the power to help us address some of our most challenging problems in areas like health and the environment, and to introduce new sources of sustainable economic growth. As a digital nation, Canada is taking steps to harness the potential of AI.
As announced by the Minister of Innovation, Science and Economic Development on May 14, 2019, the Advisory Council on Artificial Intelligence will advise the Government of Canada on building Canada’s strengths and global leadership in AI, identifying opportunities to create economic growth that benefits all Canadians, and ensuring that AI advancements reflect Canadian values. The Advisory Council will be a central reference point to draw on leading AI experts from Canadian industry, civil society, academia, and government.
Public Awareness Working Group
Recognizing the importance of a two-way dialogue with the Canadian public on AI, the Advisory Council launched a working group dedicated to public awareness in 2020. The Public Awareness Working Group is looking at mechanisms to boost public awareness and foster trust in AI. It also aims to ground the Canadian discussion in a measured understanding of AI technology, its potential uses, and its associated risks.
Commercialization Working Group
Recognizing that Canada has an imperative to commercialize its AI, and to capitalize on existing Canadian advantages in research and talent, the Advisory Council launched a working group dedicated to commercialization in August 2019 [emphasis mine]. The Commercialization Working Group explored ways to translate Canadian-owned artificial intelligence into economic growth that includes higher business productivity and benefits for Canadians.
The first order of business was commercialization in August 2019 and that’s to be expected given that this is ISED. The Public Awareness Working Group was launched at least four months after.
Priorities, eh?
Is awareness a dialogue?
As they very nicely note on the CIFAR AI dialogue event page, these workshops are going to help the government figure out “how to boost public awareness of and foster trust in AI.” It’s very flattering to be consulted this way.
So to sum this up, the ‘dialogue’ in the regional and youth workshops will be mined for ideas on how to boost public awareness and foster trust. You’re not really just getting an opportunity “to share your views on AI,” are you?
It seems a bit narrow, but then they’ve already conducted a survey in December 2020, which has in all likelihood informed the content for these workshops. Plus, the workshop materials being provided by the Canadian Commission for UNESCO have in all likelihood been used elsewhere and repackaged for the Canadian market.
Hmmm I wouldn’t call this an ‘open dialogue’ since so much has already been done to frame it.
Abattoirs
Many years ago I read a fascinating article about Temple Grandin and her work redesigning abattoirs (slaughterhouses) to make them more humane. I don’t remember much about it but calming the cattle by dampening the noise while distracting them a little by making them move around, rather than directly leading them to their deaths, seemed to be the key elements of the redesign.
This ‘open dialogue’ reminds me of the article. The outcome is predetermined and we’re being distracted in the nicest way possible.
Mining the data?
Nine workshop sessions in total with one hour and 40 minutes (rough estimate) of discussion and recommendations for each session. That’s roughly 15 hours of material from the dialogues and recommendations to analyze.
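For anyone who wants to check my arithmetic, here’s a minimal sketch in Python, assuming each workshop runs the full 2.5 hours and using the agenda published on the event page; the figure of nine sessions is my own tally, not an official number:

# Back-of-the-envelope estimate of the discussion material generated,
# based on the published agenda (all times in minutes).
total_session = 150                       # each workshop is roughly 2.5 hours
plenaries_and_breaks = 15 + 10 + 8 + 22   # opening plenary, break, closing remarks, closing plenary
discussion = total_session - plenaries_and_breaks   # breakout + recommendations, about 95 minutes

sessions = 9                              # my count of regional + youth workshops (assumption)
total_hours = sessions * discussion / 60
print(f"~{discussion} minutes per session, ~{total_hours:.1f} hours in total")
# prints roughly 95 minutes per session and about 14 hours overall, in line with the rough 15-hour figure above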
Remember this question “(7) Will AI be used to review and analyze the sessions and data gathered?”
It’s hard to believe that CIFAR and its partners don’t have a system that could do the job or, at the very least, a system that could learn from the sessions.
Not necessarily evil
While I have a number of misgivings about these ‘dialogues’, I don’t expect that most of the people involved are trying to be nefarious. There are probably some good intentions (you know where those take you, yes?) but the overarching purpose here is commercialization, which is made much easier with universal acceptance (awareness + trust).
To be blunt, a dialogue with a predetermined outcome seems more like a script to me than an open conversation.
This sort of thing has been called a ‘public consultation’ but that term has gotten a bad reputation as it was used to disguise the kind of manipulation that I suspect is going on with this effort.
How they expect to foster trust in circumstances that are not conducive to it is a bit of a mystery to me. Plus, I have to wonder if these organizers or committee members have taken into account the possible aftereffects of one of the great Canadian government debacles, the Phoenix pay system (the description that follows is from its Wikipedia entry),
The Phoenix pay system is a payroll processing system for Canadian federal government employees, provided by IBM in June 2011 using PeopleSoft software, and run by Public Services and Procurement Canada. The Public Service Pay Centre is located in Miramichi, New Brunswick. It was first introduced in 2009 as part of Prime Minister Stephen Harper’s Transformation of Pay Administration Initiative, intended to replace Canada’s 40-year old system with a new, cost-saving “automated, off-the-shelf commercial system.” By July 2018, Phoenix has caused pay problems to close to 80 percent of the federal government’s 290,000 public servants through underpayments, over-payments, and non-payments.[1] The Standing Senate Committee on National Finance, chaired by Senator Percy Mockler, investigated the Phoenix Pay system and submitted their report, “The Phoenix Pay Problem: Working Towards a Solution” on July 31, 2018, in which they called Phoenix a failure and an “international embarrassment”.[1] Instead of saving $70 million a year as planned, the report said that the cost to taxpayers to fix Phoenix’s problems could reach a total of $2.2 billion by 2023. [emphasis mine]
…
The entry leaves out a couple of details. Yes, Harper’s government nurtured this disaster but it was (1) Prime Minister Justin Trudeau and his (2) Liberal government who implemented the system in February 2016. Whoever wrote this entry is very friendly to the Liberals so I don’t think the politicians were quite as uninformed as represented in the entry.
As for the cost to taxpayers, I think $2.2 billion by 2023 is an overly modest estimate. For comparison, Australia’s Queensland Health Authority also had a pay system debacle. It was the same vendor (IBM) and, in 2013, the estimate to fix the problems was $1.2 billion Australian dollars (see this Dec. 11, 2013 article by Robert N. Charette for the IEEE Spectrum or this Aug. 7, 2013 article by Michael Madigan, Sarah Vogler, and Greg Stolz for The Courier Mail).
Note 1: I checked on a currency converter today (March 23, 2021) and $1 CAD = $1.04 AUD.
Note 2: For anyone unfamiliar with the organization, IEEE is the Institute of Electrical and Electronics Engineers.
I’m pretty sure $2.2 billion (which I think is an underestimate) does not include the human costs (anxiety, alcohol abuse, self-harm, suicide, etc.).
Plus
The situation was exacerbated as Catharine Tunney wrote in a February 18, 2020 article for CBC (Canadian Broadcasting Corporation) online (Note: A link has been removed),
More than 69,000 public servants caught up in the Phoenix pay system debacle are now victims of a privacy breach after their personal information was accidentally emailed to the wrong people, says Public Services and Procurement Canada.
The problem-plagued electronic payroll system has improperly paid tens of thousands of public servants since its launch in 2016. Some employees have gone months with little or no pay, while others have been overpaid, sometimes for months at a time.
…
Earlier this month, a report naming 69,087 public servants was accidentally emailed to the wrong federal departments.
The report included the employees’ full names, their personal record identifier numbers, home addresses and overpayment amounts.
More than 161 chief financial officers and 62 heads of HR in 62 departments received the report in error, according to a statement posted to Public Services and Procurement Canada’s website on Monday.
…
Public Services and Procurement Canada isn’t the only department to accidentally breach the confidentiality of workers’ personal information.
According to figures recently tabled in the House of Commons, federal departments or agencies mishandled personal information belonging to 144,000 Canadians over the past two years.
Privacy Commissioner Daniel Therrien has long called out “strong indications of systemic under-reporting” of privacy breaches across government.
Final comments
Overhauling the government payroll system is not the same as introducing new artificial intelligence systems but the problem is that many of the same people in the upper echelons of Canada’s civil service (government employees) were and are instrumental in the deployment of these systems.
“Phoenix pay system an ‘incomprehensible failure,’ Auditor-General says” was the headline for a May 29, 2018 article by Michelle Zilio for the Globe and Mail. I might feel more trust if after the report, there’d been signs that things had changed. However, the government is still highly secretive and we have a ‘dialogue’ with a predetermined outcome (just like the public consultations of yesteryear).
As for M. Posada, the facilitator for one or more of the workshops, he seems relatively new to Canada (scroll down his University of Toronto profile page and click on Degrees),
M.A., Economic Sociology – School for Advanced Studies in the Social Sciences (EHESS) [École des hautes études en sciences sociales in Paris, France]
B.A., Humanities – Sorbonne University [also in Paris]
As I noted in my December 10, 2021 posting about a chapter on science communication in Canada where two of the three authors were from other countries (Brazil and Australia), outsider perspectives can be quite valuable. (Both of those authors spent some time in Canada. At least one of them had taught here.)
In any event, I have to wonder how well he’s been briefed.
After my experience with something called “participatory budgeting” (City of Vancouver, 2019), where citizens were asked to come together and decide how to spend $100,000 of the city budget in our neighbourhood, I can say that at the end of the process I felt used. A surprising number of city employees were involved as ‘members’ of the working groups and, of course, other employees at City Hall had veto power over what was eventually presented to the community for voting.
It could be interesting but I warn against any high expectations if you’re looking for genuine dialogue. You can click through to registration from the Canadian Institute for Advanced Research (CIFAR) AI dialogues event page, choose either the Regional Workshops or Youth Workshops from the tabs at the top of the page.
There are movies, plays, and a multimedia installation experience, all in Vancouver, plus the ‘CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/science/techne/philosophy’ event in Toronto. But first, there’s a Vancouver talk about engaging scientists in the upcoming federal election.
Science in the Age of Misinformation (and the upcoming federal election) in Vancouver
Dr. Katie Gibbs, co-founder and executive director of Evidence for Democracy, will be giving a talk today (Sept. 4, 2019) at the University of British Columbia (UBC; Vancouver). From the Eventbrite webpage for Science in the Age of Misinformation,
Science in the Age of Misinformation, with Katie Gibbs, Evidence for Democracy
In the lead up to the federal election, it is more important than ever to understand the role that researchers play in shaping policy. Join us in this special Policy in Practice event with Dr. Katie Gibbs, Executive Director of Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. A Musqueam land acknowledgement, welcome remarks and moderation of this event will be provided by MPPGA students Joshua Tafel, and Chengkun Lv.
Wednesday, September 4, 2019
12:30 pm – 1:50 pm (Doors will open at noon)
Liu Institute for Global Issues – xʷθəθiqətəm (Place of Many Trees), 1st floor
Pizza will be provided starting at noon on first come, first serve basis. Please RSVP.
What role do researchers play in a political environment that is increasingly polarized and influenced by misinformation? Dr. Katie Gibbs, Executive Director of Evidence for Democracy, will give an overview of the current state of science integrity and science policy in Canada highlighting progress made over the past four years and what this means in a context of growing anti-expert movements in Canada and around the world. Dr. Gibbs will share concrete ways for researchers to engage heading into a critical federal election [emphasis mine], and how they can have lasting policy impact.
Bio: Katie Gibbs is a scientist, organizer and advocate for science and evidence-based policies. While completing her Ph.D. at the University of Ottawa in Biology, she was one of the lead organizers of the ‘Death of Evidence’—one of the largest science rallies in Canadian history. Katie co-founded Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. Her ongoing success in advocating for the restoration of public science in Canada has made Katie a go-to resource for national and international media outlets including Science, The Guardian and the Globe and Mail.
Katie has also been involved in international efforts to increase evidence-based decision-making and advises science integrity movements in other countries and is a member of the Open Government Partnership Multi-stakeholder Forum.
Disclaimer: Please note that by registering via Eventbrite, your information will be stored on the Eventbrite server, which is located outside Canada. If you do not wish to use this service, please email Joelle.Lee@ubc.ca directly to register. Thank you.
Location
Liu Institute for Global Issues – Place of Many Trees
6476 NW Marine Drive
Vancouver, British Columbia V6T 1Z2
Sadly, I was not able to post the information about Dr. Gibbs’s more informal talk last night (Sept. 3, 2019), which was a special event with Café Scientifique, but I do have a link to a website encouraging anyone who wants to help get science on the 2019 federal election agenda: Vote Science. P.S. I’m sorry I wasn’t able to post this in a more timely fashion.
Transmissions; a multimedia installation in Vancouver, September 6 -28, 2019
Lisa Jackson is a filmmaker, but she’s never allowed that job description to limit what she creates or where and how she screens her works.
The Anishinaabe artist’s breakout piece was last year’s haunting virtual-reality animation Biidaaban: First Light. In its eerie world, one that won a Canadian Screen Award, nature has overtaken a near-empty, future Toronto, with trees growing through cracks in the sidewalks, vines enveloping skyscrapers, and people commuting by canoe.
…
All that and more has brought her here, to Transmissions, a 6,000-square-foot, immersive film installation that invites visitors to wander through windy coastal forests, by hauntingly empty glass towers, into soundscapes of ancient languages, and more.
Through the labyrinthine multimedia work at SFU [Simon Fraser University] Woodward’s, Jackson asks big questions—about Earth’s future, about humanity’s relationship to it, and about time and Indigeneity.
Simultaneously, she mashes up not just disciplines like film and sculpture, but concepts of science, storytelling, and linguistics [emphasis mine].
…
“The tag lines I’m working with now are ‘the roots of meaning’ and ‘knitting the world together’,” she explains. “In western society, we tend to hive things off into ‘That’s culture. That’s science.’ But from an Indigenous point of view, it’s all connected.”
Transmissions is split into three parts, with what Jackson describes as a beginning, a middle, and an end. Like Biidaaban, it’s also visually stunning: the artist admits she’s playing with Hollywood spectacle.
Without giving too much away—a big part of the appeal of Jackson’s work is the sense of surprise—Vancouver audiences will first enter a 48-foot-long, six-foot-wide tunnel, surrounded by projections that morph from empty urban streets to a forest and a river. Further engulfing them is a soundscape that features strong winds, while black mirrors along the floor skew perspective and play with what’s above and below ground.
“You feel out of time and space,” says Jackson, who wants to challenge western society’s linear notions of minutes and hours. “I want the audience to have a physical response and an emotional response. To me, that gets closer to the Indigenous understanding. Because the Eurocentric way is more rational, where the intellectual is put ahead of everything else.”
Viewers then enter a room, where the highly collaborative Jackson has worked with artist Alan Storey, who’s helped create Plexiglas towers that look like the ghost high-rises of an abandoned city. (Storey has also designed other components of the installation.) As audience members wander through them on foot, projections make their shadows dance on the structures. Like Biidaaban, the section hints at a postapocalyptic or posthuman world. Jackson operates in an emerging realm of Indigenous futurism.
…
The words “science, storytelling, and linguistics” were emphasized due to a minor problem I have with terminology. Linguistics is defined as the scientific study of language, combining elements from the natural sciences, social sciences, and the humanities. I wish either Jackson or Smith had discussed the scientific element of Transmissions at more length and perhaps reconnected linguistics to science, along with the physics of time and space, as well as storytelling, film, and sculpture. It would have been helpful since it’s my understanding that Transmissions is designed to showcase all of those connections and more in ways that may not be obvious to everyone. On the plus side, perhaps the tour, which is part of this installation experience, includes that information.
The Roots of Meaning
World Premiere
September 6 – 28, 2019
Fei & Milton Wong Experimental Theatre
SFU Woodward’s, 149 West Hastings
Tuesday to Friday, 1pm to 7pm
Saturday and Sunday, 1pm to 5pm
FREE
In partnership with SFU Woodward’s Cultural Programs and produced by Electric Company Theatre and Violator Films.
TRANSMISSIONS is a three-part, 6000 square foot multimedia installation by award-winning Anishinaabe filmmaker and artist Lisa Jackson. It extends her investigation into the connections between land, language, and people, most recently with her virtual reality work Biidaaban: First Light.
Projections, sculpture, and film combine to create urban and natural landscapes that are eerie and beautiful, familiar and foreign, concrete and magical. Past and future collide in a visceral and thought-provoking journey that questions our current moment and opens up the complexity of thought systems embedded in Indigenous languages. Radically different from European languages, they embody sets of relationships to the land, to each other, and to time itself.
Transmissions invites us to untether from our day-to-day world and imagine a possible future. It provides a platform to activate and cross-pollinate knowledge systems, from science to storytelling, ecology to linguistics, art to commerce. To begin conversations, to listen deeply, to engage varied perspectives and expertise, to knit the world together and find our place within the circle of all our relations.
…
Produced in association with McMaster University Socrates Project, Moving Images Distribution and Cobalt Connects Creativity.
….
Admission: Free
Public Tours: Tuesday through Sunday
Reservations accepted from 1pm to 3pm. Reservations are booked in 15 minute increments. Individuals and groups up to 10 welcome.
Please email: sfuw@sfu.ca for more information or to book groups of 10 or more.
Her Story: Canadian Women Scientists (short film subjects); Sept. 13 – 14, 2019
Curiosity Collider, producer of art/science events in Vancouver, is presenting a film series featuring Canadian women scientists, according to an August 27, 2019 press release (received via email),
“Her Story: Canadian Women Scientists,” a film series dedicated to sharing the stories of Canadian women scientists, will premiere on September 13th and 14th at the Annex theatre. Four pairs of local filmmakers and Canadian women scientists collaborated to create 5-6 minute videos; for each film in the series, a scientist tells her own story, interwoven with the story of an inspiring Canadian women scientist who came before her in her field of study.
Produced by Vancouver-based non-profit organization Curiosity Collider, this project was developed to address the lack of storytelling videos showcasing remarkable women scientists and their work available via popular online platforms. “Her Story reveals the lives of women working in science,” said Larissa Blokhuis, curator for Her Story. “This project acts as a beacon to girls and women who want to see themselves in the scientific community. The intergenerational nature of the project highlights the fact that women have always worked in and contributed to science.”
This sentiment was reflected by Samantha Baglot as well, a PhD student in neuroscience who collaborated with filmmaker/science cartoonist Armin Mortazavi in Her Story. “It is empowering to share stories of previous Canadian female scientists… it is empowering for myself as a current female scientist to learn about other stories of success, and gain perspective of how these women fought through various hardships and inequality.”
When asked why seeing better representation of women in scientific work is important, artist/filmmaker Michael Markowsky shared his thoughts. “It’s important for women — and their male allies — to question and push back against these perceived social norms, and to occupy space which rightfully belongs to them.” In fact, his wife just gave birth to their first child, a daughter; “It’s personally very important to me that she has strong female role models to look up to.” His film will feature collaborating scientist Jade Shiller, and Kathleen Conlan – who was named one of Canada’s greatest explorers by Canadian Geographic in 2015.
Other participating filmmakers and collaborating scientists include: Leslie Kennah (Filmmaker), Kimberly Girling (scientist, Research and Policy Director at Evidence for Democracy), Lucas Kavanagh and Jesse Lupini (Filmmakers, Avocado Video), and Jessica Pilarczyk (SFU Assistant Professor, Department of Earth Sciences).
This film series is supported by Westcoast Women in Engineering, Science and Technology (WWEST) and Eng.Cite. The venue for the events is provided by Vancouver Civic Theatres.
Event Information
Screening events will be hosted at Annex (823 Seymour St, Vancouver) on September 13th and 14th [2019]. Events will also include a talkback with filmmakers and collab scientists on the 13th, and a panel discussion on representations of women in science and culture on the 14th. Visit http://bit.ly/HerStoryTickets2019 for tickets ($14.99-19.99) and http://bit.ly/HerStoryWomenScientists for project information.
I have a film collage,
It looks like they’re presenting films with a diversity of styles. You can find out more about Curiosity Collider and its various programmes and events here.
Vancouver Fringe Festival September 5 – 16, 2019
I found two plays in this year’s fringe festival programme that feature science in one way or another. Not having seen either play I make no guarantees as to content. First up is,
AI Love You
Exit Productions
London, UK
Playwright: Melanie Anne Ball
exitproductionsltd.com
Adam and April are a regular 20-something couple, very nearly blissfully generic, aside from one important detail: one of the pair is an “artificially intelligent companion.” Their joyful veneer has begun to crack and they need YOU to decide the future of their relationship. Is the freedom of a robot or the will of a human more important? For AI Love You:
***** “Magnificent, complex and beautifully addictive.” —Spy in the Stalls
**** “Emotionally charged, deeply moving piece … I was left with goosebumps.” —West End Wilma
**** —London City Nights
Past shows:
***** “The perfect show.” —Theatre Box
The first show is on Friday, September 6, 2019 at 5 pm. There are another five showings being presented. You can get tickets and more information here.
The second play is this,
Red Glimmer
Dusty Foot Productions
Vancouver, Canada
Written & Directed by Patricia Trinh
Abstract Sci-Fi dramedy. An interdimensional science experiment! Woman involuntarily takes an all inclusive internal trip after falling into a deep depression. A scientist is hired to navigate her neurological pathways from inside her mind – tackling the fact that humans cannot physically re-experience somatosensory sensation, like pain. What if that were the case for traumatic emotional pain? A creepy little girl is heard running by. What happens next?
This show is created by an underrepresented Artist. Written, directed, and produced by local theatre Artist Patricia Trinh, a Queer, Asian-Canadian female.
The first showing is tonight, September 5, 2019 at 8:30 pm. There are another six showings being presented. You can get tickets and more information here.
CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/ science/ techne/ philosophy, 28 September, 2019 in Toronto
An Art/Sci Salon September 2, 2019 announcement (received via email), Note: I have made some formatting changes,
CHAOSMOSIS mAchInes
28 September, 2019 7pm-11pm. Helen-Gardiner-Phelan Theatre, 2nd floor University of Toronto. 79 St. George St.
A playful co-presentation by the Topological Media Lab (Concordia U-Montreal) and The Digital Dramaturgy Labsquared (U of T-Toronto). This event is part of our collaboration with DDLsquared lab, the Topological Lab and the Leonardo LASER network
7pm-9.30pm: Installation-performances
9.30pm-11pm: Reception and cash bar, Front and Long Room, Ground floor
Description: From responsive sculptures to atmosphere-creating machines; from sensorial machines to affective autonomous robots, Chaosmosis mAchInes is an eclectic series of installations and performances reflecting on today’s complex symbiotic relations between humans, machines and the environment.
This will be the first encounter between Montreal-based Topological Media Lab (Concordia University) and the Toronto-based Digital Dramaturgy Labsquared (U of T) to co-present current process-based and experimental works. Both labs have a history of notorious playfulness, conceptual abysmal depth, human-machine interplays, Art&Science speculations (what if?), collaborative messes, and a knack for A/I as in Artistic Intelligence.
Thanks to Nina Czegledy (Laser series, Leonardo network) for inspiring the event and for initiating the collaboration
Project presentations will include:
Topological Media Lab: tangibleFlux φ plenumorphic ∴ chaosmosis, SPIEL, On Air, The Sound That Severs Now from Now, Cloud Chamber (2018) | Caustic Scenography, Responsive Cloud Formation, Liquid Light, Robots: Machine Menagerie, Phaze Phase, Passing Light
Digital Dramaturgy Labsquared: Btw Lf & Dth – interFACING disappearance
Earlier last month [August 2019?], surgeons at St Paul’s Hospital performed an ankle replacement for a Cloverdale resident using a 3D printed bone. The first procedure of its kind in Western Canada, it saved the patient all of his ten toes — something doctors had originally decided to amputate due to the severity of the motorcycle accident.
Maker Faire Vancouver Co-producer, John Biehler, may not be using his 3D printer for medical breakthroughs, but he does see a subtle connection between his home 3D printer and the Health Canada-approved bone.
“I got into 3D printing to make fun stuff and gadgets,” John says of the box-sized machine that started as a hobby and turned into a side business. “But the fact that the very same technology can have life-changing and life-saving applications is amazing.”
When John showed up to Maker Faire Vancouver seven years ago, opportunities to access this hobby were limited. Armed with a 3D printer he had just finished assembling the night before, John was hoping to meet others in the community with similar interests to build, experiment and create. Access to these portable machines has increased considerably over the years, with universities, libraries and makerspaces making them readily available alongside CNC machines, laser cutters and more, and John says the excitement around crafting and tinkering has skyrocketed as well.
“The kind of technology that inspires people to print a bone or spinal insert all starts at ground zero in places like a Maker Faire where people get exposed to STEAM,” John says …
…
… From 3D printing enthusiasts like John to knitters, metal artists and roboticists, this full one-day event [Maker Faire Vancouver on Saturday, September 14, 2019] will facilitate cross-pollination between hobbyists, small businesses, artists and tinkerers. Described as part science fair, part county fair and part something entirely new, Maker Faire Vancouver hopes to facilitate discovery and what John calls “pure joy moments.”
A scientific breakthrough by Professor Michel Meunier of Polytechnique Montréal and his collaborators offers hope for people with glaucoma, retinitis or macular degeneration.
In January 2009, the life of engineer Michel Meunier, a professor at Polytechnique Montréal, changed dramatically. Like others, he had observed that the extremely short pulse of a femtosecond laser (0.000000000000001 second) could make nanometre-sized holes appear in silicon when it was covered by gold nanoparticles. But this researcher, recognized internationally for his skills in laser and nanotechnology, decided to go a step further with what was then just a laboratory curiosity. He wondered if it was possible to go from silicon to living matter, from inorganic to organic. Could the gold nanoparticles and the femtosecond laser, this “light scalpel,” reproduce the same phenomenon with living cells?
…
A very pretty image illustrating the work,
Caption: Gold nanoparticles, which act like “nanolenses,” concentrate the energy produced by the extremely short pulse of a femtosecond laser to create a nanoscale incision on the surface of the eye’s retina cells. This technology, which preserves cell integrity, can be used to effectively inject drugs or genes into specific areas of the eye, offering new hope to people with glaucoma, retinitis or macular degeneration. Credit and Copyright: Polytechnique Montréal
The news release goes on to describe the technology in more detail,
Professor Meunier started working on cells in vitro in his Polytechnique laboratory. The challenge was to make a nanometric incision in the cells’ extracellular membrane without damaging it. Using gold nanoparticles that acted as “nanolenses,” Professor Meunier realized that it was possible to concentrate the light energy coming from the laser at a wavelength of 800 nanometres. Since there is very little energy absorption by the cells at this wavelength, their integrity is preserved. Mission accomplished!
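Out of curiosity, here is a quick back-of-the-envelope check of the numbers in that paragraph; this little Python sketch is my own illustration and is not part of the news release. A photon at 800 nanometres carries roughly 1.55 electron volts, which places the laser in the near-infrared where cells absorb relatively little (the release's point about preserving cell integrity), and the pulse length works out to one femtosecond, i.e., 10^-15 of a second.

```python
# Back-of-the-envelope numbers for the laser described above (my illustration,
# not from the news release): photon energy at 800 nm and the pulse duration.

PLANCK_H = 6.62607015e-34      # Planck constant, in joule-seconds
LIGHT_SPEED = 2.99792458e8     # speed of light, in metres per second
JOULES_PER_EV = 1.602176634e-19

wavelength_m = 800e-9          # 800 nanometres, near-infrared
pulse_duration_s = 1e-15       # one femtosecond = 0.000000000000001 second

photon_energy_ev = PLANCK_H * LIGHT_SPEED / wavelength_m / JOULES_PER_EV

print(f"Photon energy at 800 nm: {photon_energy_ev:.2f} eV")  # roughly 1.55 eV
print(f"Pulse duration: {pulse_duration_s:.0e} s")            # 1e-15 s
```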
Based on this finding, Professor Meunier decided to work on cells in vivo, cells that are part of a complex living structure such as the eye.
The eye and the light scalpel
In April 2012, Professor Meunier met Przemyslaw Sapieha, an internationally renowned eye specialist, particularly recognized for his work on the retina. “Mike”, as he goes by, is a professor in the Department of Ophthalmology at Université de Montréal and a researcher at Centre intégré universitaire de santé et de services sociaux (CIUSSS) de l’Est-de-l’Île-de-Montréal. He immediately saw the potential of this new technology and everything that could be done in the eye if you could block the ripple effect that occurs following a trigger that leads to glaucoma or macular degeneration, for example, by injecting drugs, proteins or even genes.
Using a femtosecond laser to treat the eye–a highly specialized and fragile organ–is very complex, however. The eye is part of the central nervous system, and therefore many of the cells or families of cells that compose it are neurons. And when a neuron dies, it does not regenerate like other cells do. Mike Sapieha’s first task was therefore to ensure that a femtosecond laser could be used on one or several neurons without affecting them. This is what is referred to as “proof of concept.”
Proof of concept
Mike and Michel called on biochemistry researcher Ariel Wilson, an expert in eye structures and vision mechanisms, as well as Professor Santiago Costantino and his team from the Department of Ophthalmology at Université de Montréal and the CIUSSS de l’Est-de-l’Île-de-Montréal for their expertise in biophotonics. The team first decided to work on healthy cells, because they are better understood than sick cells. They injected gold nanoparticles combined with antibodies to target specific neuronal cells in the eye, and then waited for the nanoparticles to settle around the various neurons or families of neurons, such as the retina. Following the bright flash generated by the femtosecond laser, the expected phenomenon occurred: small holes appeared in the cells of the eye’s retina, making it possible to effectively inject drugs or genes in specific areas of the eye. It was another victory for Michel Meunier and his collaborators, with these conclusive results now opening the path to new treatments.
The key feature of the technology developed by the researchers from Polytechnique and CIUSSS de l’Est-de-l’Île-de-Montréal is its extreme precision. With the use of functionalized gold nanoparticles, the light scalpel makes it possible to precisely locate the family of cells where the doctor will have to intervene.
Having successfully demonstrated proof of concept, Professor Meunier and his team filed a patent application in the United States. This tremendous work was also the subject of a paper reviewed by an impressive reading committee and published in the renowned journal Nano Letters in October 2018.
While there is still a lot of research to be done–at least 10 years’ worth, first on animals and then on humans–this technology could make all the difference in an aging population suffering from eye deterioration for which there are still no effective long-term treatments. It also has the advantage of avoiding the use of viruses commonly employed in gene therapy. These researchers are looking at applications of this technology in all eye diseases, but more particularly in glaucoma, retinitis and macular degeneration.