Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 update of the “EU AI Act: first regulation on artificial intelligence” article on the European Parliament website. Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts. You might find my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. A November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
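In its simplest form, such a registry is just an append-only transfer ledger keyed by each chip’s unique identifier, from which any entity’s current holdings can be tallied. Here’s a minimal sketch in Python (the company names and chip counts are mine, purely hypothetical, and not from the report),

```python
# Minimal sketch of a chip registry: an append-only transfer ledger
# keyed by a unique per-chip identifier, from which current holdings
# can be tallied. Purely illustrative; not from the report itself.
from collections import defaultdict

class ChipRegistry:
    def __init__(self):
        self.owner_of = {}    # chip_id -> current holder
        self.transfers = []   # append-only audit log

    def register(self, chip_id, producer):
        """A producer reports a newly manufactured chip."""
        self.owner_of[chip_id] = producer
        self.transfers.append(("mint", chip_id, None, producer))

    def transfer(self, chip_id, seller, buyer):
        """Sellers and resellers report every transfer."""
        if self.owner_of.get(chip_id) != seller:
            raise ValueError(f"{seller} does not hold chip {chip_id}")
        self.owner_of[chip_id] = buyer
        self.transfers.append(("transfer", chip_id, seller, buyer))

    def holdings(self):
        """Tally how many registered chips each entity currently holds."""
        counts = defaultdict(int)
        for owner in self.owner_of.values():
            counts[owner] += 1
        return dict(counts)

registry = ChipRegistry()
registry.register("chip-001", "FabCo")
registry.register("chip-002", "FabCo")
registry.transfer("chip-001", "FabCo", "CloudCorp")
print(registry.holdings())  # {'CloudCorp': 1, 'FabCo': 1}
```

Regular audits would then amount to reconciling the ledger against physical inventories, which is where the proposed per-chip identifier earns its keep.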

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
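That multiparty consent mechanism works like the k-of-n approval schemes familiar from multisignature systems: the compute unlocks only if enough designated parties agree. A toy Python illustration of the logic (the party names and threshold are hypothetical, and the report envisions enforcing this cryptographically in hardware, not in application code),

```python
# Toy k-of-n approval gate: a risky training run proceeds only if at
# least `threshold` of the designated parties have consented. Approvals
# from unrecognized parties are ignored. Purely illustrative.
def unlock_compute(approvals: set, parties: set, threshold: int) -> bool:
    """Return True only if enough recognized parties have approved."""
    valid = approvals & parties  # discard approvals from unknown parties
    return len(valid) >= threshold

parties = {"regulator", "lab", "auditor"}
print(unlock_compute({"regulator", "auditor"}, parties, threshold=2))  # True
print(unlock_compute({"lab"}, parties, threshold=2))                   # False
print(unlock_compute({"lab", "stranger"}, parties, threshold=2))       # False
```

The nuclear analogy the report draws is apt: no single party, acting alone, can flip the switch.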

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI). Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
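For the curious, BLEU-1 is essentially clipped unigram precision: the fraction of words in the machine output that also appear in the reference, with each reference word creditable only as many times as it occurs there. A minimal sketch (the full metric also applies a brevity penalty, and the example sentences are mine),

```python
# Clipped unigram precision, the core of BLEU-1. Each candidate word
# scores only up to the number of times it appears in the reference.
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision (full BLEU adds a brevity penalty)."""
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    clipped = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand).items())
    return clipped / len(cand)

reference = "the man read the book"
candidate = "the author read a book"
# 3 of 5 candidate words ("the", "read", "book") appear in the reference
print(bleu1(candidate, reference))  # 0.6
```

On this scale, the study’s ~40% means well under half the decoded words match the reference, which is why the researchers frame 90% as the long-term target.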

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
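The blurring trick is conceptually simple: average the high-resolution fMRI volume over blocks of neighbouring voxels until its effective resolution approximates a wearable fNIRS device, then re-run the decoder. A sketch with synthetic data (the volume size and downsampling factor are arbitrary, not the values the researchers used),

```python
# Simulate a lower-resolution recording by block-averaging a
# high-resolution volume: fine spatial detail is lost, global signal is
# kept. Synthetic data; the dimensions and factor are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
fmri = rng.normal(size=(32, 32, 16))   # fake single-timepoint fMRI volume

def block_average(vol: np.ndarray, factor: int) -> np.ndarray:
    """Reduce effective spatial resolution by averaging factor^3 blocks."""
    x, y, z = vol.shape
    return (vol.reshape(x // factor, factor,
                        y // factor, factor,
                        z // factor, factor)
               .mean(axis=(1, 3, 5)))

simulated = block_average(fmri, 4)
print(simulated.shape)  # (8, 8, 4) -- coarser stand-in for fNIRS data
```

Feeding such degraded volumes to the decoder is how the team could estimate, without new hardware, how much performance a wearable device would cost them.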

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article; Note: Links have been removed),

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant on Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

US President’s Council of Advisors on Science and Technology (PCAST) meeting on Biomanufacturing, the Federal Science and Technology Workforce, and the National Nanotechnology Initiative

It’s been years since I’ve featured a PCAST meeting here; I just don’t stumble across the notices all that often anymore.

Unfortunately, I got there late this time, which is a particular shame as the meeting was on “Biomanufacturing, the Federal Science and Technology Workforce, and the National Nanotechnology Initiative.” Held on November 29, 2021, it was livestreamed. Happily, there’s already a video of the meeting (a little over 4.5 hours long) on YouTube.

If you go to the White House PCAST Meetings webpage, you’ll find, after scrolling down about 40% of the way, ‘Past Meetings’, which in addition to the past meetings includes agendas, lists of guests and their biographies, and more. Given the title of the meeting and the invitees, this looks like it will have a focus on the business of biotechnology and nanotechnology. This hearkens back to when former President Barack Obama pushed for nanotechnology manufacturing, that is, for taking the science out of the laboratories and commercializing it.

Here’s part of the agenda for the November 29, 2021 meeting (I’m particularly interested in the third session; apologies for the formatting),

President’s Council of Advisors on Science and Technology

Public Meeting Agenda
November 29, 2021
Virtual
(All times Eastern)

12:15 pm Welcome

PCAST Co-Chairs: Frances Arnold, Eric Lander, Maria Zuber


3:45 pm Session 3: Overview of the National Nanotechnology Initiative

Moderator: Eric Lander

Speaker: Lisa Friedersdorf, National Nanotechnology Coordination Office

The biographies for the speakers can be found here. (I’m glad to see that President Joe Biden has revitalized the council.)

For anyone unfamiliar with PCAST, it has an interesting history (from the President’s Council of Advisors on Science and Technology webpage),

Beginning in 1933 with President Franklin D. Roosevelt’s Science Advisory Board, each President has established an advisory committee of scientists, engineers, and health professionals. Although the name of the advisory board has changed over the years, the purpose has remained the same—to provide scientific and technical advice to the President of the United States.

Drawing from the nation’s most talented and accomplished individuals, President Biden’s PCAST consists of 30 members, including 20 elected members of the National Academies of Sciences, Engineering and Medicine, five MacArthur “Genius” Fellows, two former Cabinet secretaries, and two Nobel laureates. Its members include experts in astrophysics and agriculture, biochemistry and computer engineering, ecology and entrepreneurship, immunology and nanotechnology, neuroscience and national security, social science and cybersecurity, and more

Enjoy!

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as a standard. Television is a classic example but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at putting, and willing to put, other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“It’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now dark factories, which can run without lights because there are no people in them, are starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
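Taken at face value, the reported coefficients can be turned into rough back-of-envelope figures. Here is a minimal sketch using only the ranges quoted above; the function name, the workforce size, and the robot density are hypothetical illustrations of mine, not figures from the study:

```python
# Illustrative arithmetic only. The 0.18-0.34 percentage-point range for the
# employment-rate effect per robot per thousand workers comes from the NBER
# working paper as reported above; everything else is a made-up example.

def employment_effect(robots_per_thousand_workers, workforce):
    """Return the implied (low, high) number of jobs lost in a labor market
    of the given size, applying the reported percentage-point ranges."""
    low_pp = robots_per_thousand_workers * 0.18   # lower bound, in pct points
    high_pp = robots_per_thousand_workers * 0.34  # upper bound, in pct points
    return (workforce * low_pp / 100, workforce * high_pp / 100)

# Hypothetical example: 2 robots per thousand workers in a labor market
# of 100,000 workers.
lost_low, lost_high = employment_effect(2, 100_000)
print(f"Implied jobs lost: {lost_low:.0f} to {lost_high:.0f}")
# Implied jobs lost: 360 to 680
```

Even at modest robot densities, the ranges imply hundreds of jobs lost per hundred thousand workers, which is why the study attracted so much press coverage.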

I can’t help wondering: if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, is that representative of other top-tier officials such as the Secretary of Defense, Secretary of Labor, etc.? What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research which contradicts or makes a top-tier Trump government official look foolish? Will it be suppressed?

You can find the report “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper Series, Working Paper 23285, released March 2017) here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way) where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Changes to the US 21st Century Nanotechnology Research and Development Act

This is one of Barack Obama’s last acts as President of the US according to a Jan. 5, 2017 posting by Lynn L. Bergeson on the Nanotechnology Now website,

The American Innovation and Competitiveness Act (S. 3084) would amend the 21st Century Nanotechnology Research and Development Act (15 U.S.C. § 7501 et seq.) to change the frequency of National Nanotechnology Initiative (NNI) reports. The strategic plan would be released every five instead of every three years, and the triennial review would be renamed the quadrennial review and be prepared every four years instead of every three. The evaluation of the NNI, which is submitted to Congress, would be due every four instead of every three years. … On December 28, 2016, the bill was presented to President Obama. President Obama is expected to sign the bill.

Congress.gov is hosting the S.3084 – American Innovation and Competitiveness Act webpage listing all of the actions, to date, taken on behalf of this bill; Obama signed the act on Jan. 6, 2017.

One final note, Obama’s last day as US President is Friday, Jan. 20, 2017 but his last ‘full’ day is Thursday, Jan. 19, 2017 (according to a Nov. 4, 2016 posting by Tom Muse for About.com).

Dear Science Minister Kirsty Duncan and Science, Innovation and Economic Development Minister Navdeep Bains: a Happy Canada Day! open letter

Dear Minister of Science Kirsty Duncan and Minister of Science, Innovation and Economic Development Navdeep Bains,

Thank you both. It’s been heartening to note some of the moves you’ve made since entering office. Taking the muzzles off Environment Canada and Natural Resources Canada scientists was a big relief and it was wonderful to hear that the mandatory longform census was reinstated along with the Experimental Lakes Area programme. (Btw, I can’t be the only one who’s looking forward to hearing the news once Canada’s Chief Science Officer is appointed. In the fall, eh?)

Changing National Science and Technology Week by giving it a new name, “Science Odyssey,” and rescheduling it from the fall to the spring seems to have revitalized the effort. Then, there was the news about a review focused on fundamental science (see my June 16, 2016 post). It seems as if the floodgates have opened or at least communication about what’s going on has become much freer. Brava and Bravo!

The recently announced (June 29, 2016) third assessment on the State of S&T (Science and Technology) and IR&D (Industrial Research and Development; my July 1, 2016 post features the announcement) by the Council of Canadian Academies adds to the impression that you both have adopted a dizzying pace for science of all kinds in Canada.

With the initiatives I’ve just mentioned in mind, it would seem that encouraging a more vital science culture and re-establishing science as a fundamental part of Canadian society is your aim.

Science education and outreach as a whole population effort

It’s cheeky to ask for more but that’s what I’m going to do.

In general, the science education and outreach efforts in Canada have focused on children. This is wonderful but not likely to be as successful as we would hope when a significant and influential chunk of the population is largely ignored: adults. (There is a specific situation where outreach to adults is undertaken but more about that later.)

There is research suggesting that children’s attitudes to science and future careers are strongly influenced by their families. From my Oct. 9, 2013 posting,

One of the research efforts in the UK is the ASPIRES research project at King’s College London (KCL), which is examining children’s attitudes to science and future careers. Their latest report, Ten Science Facts and Fictions: the case for early education about STEM careers (PDF), is profiled in a Jan. 11, 2012 news item on physorg.com (from the news item),

Professor Archer [Louise Archer, Professor of Sociology of Education at King’s] said: “Children and their parents hold quite complex views of science and scientists and at age 10 or 11 these views are largely positive. The vast majority of children at this age enjoy science at school, have parents who are supportive of them studying science and even undertake science-related activities in their spare time. They associate scientists with important work, such as finding medical cures, and with work that is well paid.

“Nevertheless, less than 17 per cent aspire to a career in science. These positive impressions seem to lead to the perception that science offers only a very limited range of careers, for example doctor, scientist or science teacher. It appears that this positive stereotype is also problematic in that it can lead people to view science as out of reach for many, only for exceptional or clever people, and ‘not for me’. [emphases mine]

Family as a bigger concept

I suggest that ‘family’ be expanded to include the social environment in which children operate. When I was a kid no one in our family or extended group of friends had been to university let alone become a scientist. My parents had aspirations for me but when it came down to brass tacks, even though I was encouraged to go to university, they were much happier when I dropped out and got a job.

It’s very hard to break out of the mold. The odd thing about it all? I had two uncles who were electricians, which, when you think about it, means they were working in STEM (science, technology, engineering, mathematics) jobs. Electricians, then and now, despite their technical skills, are considered tradespeople.

It seems to me that if more people saw themselves as having STEM or STEM-influenced occupations: hairdressers, artists, auto mechanics, plumbers, electricians, musicians, etc., we might find more children willing to engage directly in STEM opportunities. We might also find there’s more public support for science in all its guises.

That situation where adults are targeted for science outreach? It’s when the science is considered controversial or problematic and, suddenly, public (actually they mean voter) engagement or outreach is considered vital.

Suggestion

Given the initiatives you both have undertaken and Prime Minister Trudeau’s recent public outbreak of enthusiasm for and interest in quantum computing (my April 18, 2016 posting), I’m hopeful that you will consider the notion and encourage (fund?) science promotion programmes aimed at adults. Preferably attention-grabbing and imaginative programmes.

Should you want to discuss the matter further (I have some suggestions), please feel free to contact me.

Regardless, I’m very happy to see the initiatives that have been undertaken and, just as importantly, the communication about science.

Yours sincerely,

Maryse de la Giroday
(FrogHeart blog)

P.S. I very much enjoyed the June 22, 2016 interview with Léo Charbonneau for University Affairs,

UA: Looking ahead, where would you like Canada to be in terms of research in five to 10 years?

Dr. Duncan: Well, I’ll tell you, it breaks my heart that in a 10-year period we fell from third to eighth place among OECD countries in terms of HERD [government expenditures on higher education research and development as a percentage of gross domestic product]. That should never have happened. That’s why it was so important for me to get that big investment in the granting councils.

Do we have a strong vision for science? Do we have the support of the research community? Do we have the funding systems that allow our world-class researchers to do the work they want to do? And, with the chief science officer, are we building a system where we have the evidence to inform decision-making? My job is to support research and to make sure evidence makes its way to the cabinet table.

As stated earlier, I’m hoping you will expand your vision to include Canadian society, not forgetting seniors (being retired or older doesn’t mean that you’re senile and/or incapable of public participation), and supporting Canada’s emerging science media environment.

P.P.S. As a longstanding observer of the interplay between pop culture, science, and society I was much amused and inspired by news of Justin Trudeau’s emergence as a character in a Marvel comic book (from a June 28, 2016 CBC [Canadian Broadcasting Corporation] news online item),


The variant cover of the comic Civil War II: Choosing Sides #5, featuring Prime Minister Justin Trudeau surrounded by the members of Alpha Flight: Sasquatch, top, Puck, bottom left, Aurora, right, and Iron Man in the background. (The Canadian Press/Ramon Perez)

Make way, Liberal cabinet: Prime Minister Justin Trudeau will have another all-Canadian crew in his corner as he suits up for his latest feature role — comic book character.

Trudeau will grace the variant cover of issue No. 5 of Marvel’s “Civil War II: Choosing Sides,” due out Aug. 31 [2016].

Trudeau is depicted smiling, sitting relaxed in the boxing ring sporting a Maple Leaf-emblazoned tank, black shorts and red boxing gloves. Standing behind him are Puck, Sasquatch and Aurora, who are members of Canadian superhero squad Alpha Flight. In the left corner, Iron Man is seen with his arms crossed.

“I didn’t want to do a stuffy cover — just like a suit and tie — put his likeness on the cover and call it a day,” said award-winning Toronto-based cartoonist Ramon Perez.

“I wanted to kind of evoke a little bit of what’s different about him than other people in power right now. You don’t see (U.S. President Barack) Obama strutting around in boxing gear, doing push-ups in commercials or whatnot. Just throwing him in his gear and making him almost like an everyday person was kind of fun.”

The variant cover featuring Trudeau will be an alternative to the main cover in circulation showcasing Aurora, Puck, Sasquatch and Nick Fury.

It’s not the first time a Canadian Prime Minister has been featured in a Marvel comic book (from the CBC news item),


Prime Minister Pierre Trudeau in 1979’s Volume 120 of The Uncanny X-Men. (The Canadian Press/Marvel)

Trudeau follows in the prime ministerial footsteps of his late father, Pierre, who graced the pages of “Uncanny X-Men” in 1979.

The news item goes on to describe artist/writer Chip Zdarsky’s (Edmonton-born) ideas for the 2016 story.

h/t to Reva Seth’s June 29, 2016 article for Fast Company for pointing me to Justin Trudeau’s comic book cover.

Canadian science petition and a science diplomacy event in Ottawa on June 21, 2016*

The Canadian science policy and science funding scene is hopping these days. Canada’s Minister of Science, Kirsty Duncan, announced a new review of federal funding for fundamental science on Monday, June 13, 2016 (see my June 15, 2016 post for more details and a brief critique of the panel) and now, there’s a new Parliamentary campaign for a science advisor and a Canadian Science Policy Centre event on science diplomacy.

Petition for a science advisor

Kennedy Stewart, Canadian Member of Parliament (Burnaby South) and NDP (New Democratic Party) Science Critic, has launched a campaign for independent science advice for the government. Here’s more from a June 15, 2016 announcement (received via email),

After years of muzzling and misuse of science by the Conservatives, our scientists need lasting protections in order to finally turn the page on the lost Harper decade.

I am writing to ask your support for a new campaign calling for an independent science advisor.

While I applaud the new Liberal government for their recent promises to support science, we have a long way to go to rebuild Canada’s reputation as a global knowledge leader. As NDP Science Critic, I continue to push for renewed research funding and measures to restore scientific integrity.

Canada badly needs a new science advisor to act as a public champion for research and evidence in Ottawa. Although the Trudeau government has committed to creating a Chief Science Officer, the Minister of Science – Dr. Kirsty Duncan – has yet to state whether or not the new officer will be given real independence and a mandate protected by law.

Today, we’re launching a new parliamentary petition calling for just that: https://petitions.parl.gc.ca/en/Petition/Sign/e-415

Can you add your name right now?

Canada’s last national science advisor lacked independence from the government and was easily eliminated in 2008 after the anti-science Harper Conservatives took power.

That’s why the Minister needs to build the new CSO to last and entrench the position in legislation. Rhetoric and half-measures aren’t good enough.

Please add your voice for public science by signing our petition to the Minister of Science.

Thank you for your support,

Breakfast session on science diplomacy

On June 21, 2016 the Canadian Science Policy Centre is presenting a breakfast session on Parliament Hill in Ottawa (from an announcement received via email),

“Science Diplomacy in the 21st Century: The Potential for Tomorrow”
Remarks by Dr. Vaughan Turekian,
Science and Technology Adviser to Secretary of State John Kerry

Event Information
Tuesday, June 21, 2016, Room 238-S, Parliament Hill
7:30am – 8:00am – Continental Breakfast
8:00am – 8:10am – Opening Remarks, MP Terry Beech
8:10am – 8:45am – Dr. Vaughan Turekian Remarks and Q&A

Dr. Turekian’s visit comes during a pivotal time as Canada is undergoing fundamental changes in numerous policy directions surrounding international affairs. With Canada’s comeback on the world stage, there is great potential for science to play an integral role in the conduct of our foreign affairs. The United States is currently one of the leaders in science diplomacy, and as such, listening to Dr. Turekian will provide a unique perspective from the best practices of science diplomacy in the US.

Actually, Dr. Turekian’s visit comes just before a North American Summit being held in Ottawa on June 29, 2016, one which has already taken a scientific turn. From a June 16, 2016 news item on phys.org,

Some 200 intellectuals, scientists and artists from around the world urged the leaders of Mexico, the United States and Canada on Wednesday to save North America’s endangered migratory Monarch butterfly.

US novelist Paul Auster, environmental activist Robert F. Kennedy Jr., Canadian poet [Canadian media usually describe her as a writer] Margaret Atwood, British writer Ali Smith and India’s women’s and children’s minister Maneka Sanjay Gandhi were among the signatories of an open letter to the three leaders.

US President Barack Obama, Canadian Prime Minister Justin Trudeau and Mexican President Enrique Pena Nieto will hold a North American summit in Ottawa on June 29 [2016].

The letter by the so-called Group of 100 calls on the three leaders to “take swift and energetic actions to preserve the Monarch’s migratory phenomenon” when they meet this month.

In 1996-1997, the butterflies covered 18.2 hectares (45 acres) of land in Mexico’s central mountains.

It fell to 0.67 hectares in 2013-2014 but rose to 4 hectares this year. Their population is measured by the territory they cover.

They usually arrive in Mexico between late October and early November and head back north in March.

Given this turn of events, I wonder how Turekian, who has held his current position for less than a year, might (or might not) approach the question of Monarch butterflies and diplomacy.

I did a little research about Turekian and found this Sept. 10, 2015 news release announcing his appointment as the Science and Technology Adviser to the US Secretary of State,

On September 8, Dr. Vaughan Turekian, formerly the Chief International Officer at The American Association for the Advancement of Science (AAAS), was named the 5th Science and Technology Adviser to the Secretary of State. In this capacity, Dr. Turekian will advise the Secretary of State and the Under Secretary for Economic Growth, Energy, and the Environment on international environment, science, technology, and health matters affecting the foreign policy of the United States. Dr. Turekian will draw upon his background in atmospheric chemistry and extensive policy experience to promote science, technology, and engineering as integral components of U.S. diplomacy.

Dr. Turekian brings both technical expertise and 14 years of policy experience to the position. As former Chief International Officer for The American Association for the Advancement of Science (AAAS) and Director of AAAS’s Center for Science Diplomacy, Dr. Turekian worked to build bridges between nations based on shared scientific goals, placing special emphasis on regions where traditional political relationships are strained or do not exist. As Editor-in-Chief of Science & Diplomacy, an online quarterly publication, Dr. Turekian published original policy pieces that have served to inform international science policy recommendations. Prior to his work at AAAS, Turekian worked at the State Department as Special Assistant and Adviser to the Under Secretary for Global Affairs on issues related to sustainable development, climate change, environment, energy, science, technology, and health and as a Program Director for the Committee on Global Change Research at the National Academy of Sciences where he was study director for a White House report on climate change science.

Turekian’s last editorial for Science & Diplomacy dated June 30, 2015 features a title (Evolving Institutions for Twenty-First Century [Science] Diplomacy) bearing a resemblance to the title for his talk in Ottawa and perhaps it provides a preview (spoilers),

Over the recent decade, its treatment of science and technology issues has increased substantially, with a number of cover stories focused on topics that bridge science, technology, and foreign affairs. This thought leadership reflects a broader shift in thinking within institutions throughout the world about the importance of better integrating the communities of science and diplomacy in novel ways.

In May, a high-level committee convened by Japan’s minister of foreign affairs released fifteen recommendations for how Japan could better incorporate its scientific and technological expertise into its foreign policy. While many of the recommendations were to be predicted, including the establishment of the position of science adviser to the foreign minister, the breadth of the recommendations highlighted numerous new ways Japan could leverage science to meet its foreign policy objectives. The report itself marks a turning point for an institution looking to upgrade its ability to meet and shape the challenges of this still young century.

On the other side of the Pacific, the U.S. National Academy of Sciences released its own assessment of science in the U.S. Department of State. Their report, “Diplomacy for the 21st Century: Embedding a Culture of Science and Technology Throughout the Department of State,” builds on its landmark 1999 report, which, among other things, established the position of science and technology adviser to the secretary of state. The twenty-seven recommendations in the new report are wide ranging, but as a whole speak to the fact that while one of the oldest U.S. institutions of government has made much progress toward becoming more scientifically and technologically literate, there are many more steps that could be taken to leverage science and technology as a key element of U.S. foreign policy.

These two recent reports highlight the importance of foreign ministries as vital instruments of science diplomacy. These agencies of foreign affairs, like their counterparts around the world, are often viewed as conservative and somewhat inflexible institutions focused on stability rather than transformation. However, they are adjusting to a world in which developments in science and technology move rapidly and affect relationships and interactions at bilateral, regional, and global scales.

At the same time that some traditional national instruments of diplomacy are evolving to better incorporate science, international science institutions are also evolving to meet the diplomatic and foreign policy drivers of this more technical century. …

It’s an interesting read and I’m glad to see the mention of Japan in his article. I’d like to see Canadian science advice and policy initiatives take more notice of the rest of the world rather than focusing almost solely on what’s happening in the US and Great Britain (see my June 15, 2016 post for an example of what I mean). On another note, it was disconcerting to find out that Turekian believes that we are only now moving past the Cold War politics of the past.

Unfortunately for anyone wanting to attend the talk, ticket sales have ended even though they were supposed to be open until June 17, 2016. And, there doesn’t seem to be a wait list.

You may want to try arriving at the door and hoping that people have cancelled or failed to arrive, thereby freeing up a ticket. Should you be an MP (Member of Parliament), Senator, or guest of the Canadian Science Policy Conference, you get a free ticket. Should you be anyone else, expect to pay $15, assuming no one is attempting to scalp (sell for more than they cost) these tickets.

*’ … on June’ in headline changed to ‘ … on June 21, 2016’ on June 17, 2016.

Prime Minister Trudeau, the quantum physicist

Prime Minister Justin Trudeau’s apparently extemporaneous response to a joking (non)question about quantum computing by a journalist during an April 15, 2016 press conference at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada has created a buzz online, made international news, and caused Canadians to sit taller.

For anyone who missed the moment, here’s a video clip from the Canadian Broadcasting Corporation (CBC),

Aaron Hutchins in an April 15, 2016 article for Maclean’s magazine digs deeper to find out more about Trudeau and quantum physics (Note: A link has been removed),

Raymond Laflamme knows the drill when politicians visit the Perimeter Institute. A photo op here, a few handshakes there and a tour with “really basic, basic, basic facts” about the field of quantum mechanics.

But when the self-described “geek” Justin Trudeau showed up for a funding announcement on Friday [April 15, 2016], the co-founder and director of the Institute for Quantum Computing at the University of Waterloo wasn’t met with simple nods of the Prime Minister pretending to understand. Trudeau immediately started talking about things being waves and particles at the same time, like cats being dead and alive at the same time. It wasn’t just nonsense—Trudeau was referencing the famous thought experiment of the late legendary physicist Erwin Schrödinger.

“I don’t know where he learned all that stuff, but we were all surprised,” Laflamme says. Soon afterwards, as Trudeau met with one student talking about superconductivity, the Prime Minister asked her, “Why don’t we have high-temperature superconducting systems?” something Laflamme describes as the institute’s “Holy Grail” quest.

“I was flabbergasted,” Laflamme says. “I don’t know how he does in other subjects, but in quantum physics, he knows the basic pieces and the important questions.”

Strangely, Laflamme was not nearly as excited (tongue in cheek) when I demonstrated my understanding of quantum physics during our interview (see my May 11, 2015 posting; scroll down about 40% of the way to the Raymond Laflamme subhead).

As Jon Butterworth comments in his April 16, 2016 posting on the Guardian science blog, the response says something about our expectations regarding politicians,

This seems to have enhanced Trudeau’s reputation no end, and quite right too. But it is worth thinking a bit about why.

The explanation he gives is clear, brief, and understandable to a non-specialist. It is the kind of thing any sufficiently engaged politician could pick up from a decent briefing, given expert help. …

Butterworth also goes on to mention journalists’ expectations,

The reporter asked the question in a joking fashion, not unkindly as far as I can tell, but not expecting an answer either. If this had been an announcement about almost any other government investment, wouldn’t the reporter have expected a brief explanation of the basic ideas behind it? …
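For readers curious about the substance behind the sound bite, here is a minimal sketch (my own illustration, not drawn from any of the articles quoted above) of the "0 and 1 at the same time" idea Trudeau described, simulated classically with NumPy:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1 -- loosely, "0 and 1 at
# the same time" until it is measured.
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of 0 and 1

# Measurement collapses the state: outcome 0 with probability |a|^2,
# outcome 1 with probability |b|^2.
probs = np.abs(plus) ** 2
rng = np.random.default_rng(42)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(samples.mean())  # close to 0.5: roughly half zeros, half ones
```

That collapse on measurement, rather than storing "more information," is what Trudeau's explanation gestured at: a quantum computer exploits the amplitudes before measurement, not the classical outcome after it.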

As for the announcement being made by Trudeau, there is this April 15, 2016 Perimeter Institute press release (Note: Links have been removed),

Prime Minister Justin Trudeau says the work being done at Perimeter and in Canada’s “Quantum Valley” [emphasis mine] is vital to the future of the country and the world.

Prime Minister Justin Trudeau became both teacher and student when he visited Perimeter Institute today to officially announce the federal government’s commitment to support fundamental scientific research at Perimeter.

Joined by Minister of Science Kirsty Duncan and Small Business and Tourism Minister Bardish Chagger, the self-described “geek prime minister” listened intently as he received brief overviews of Perimeter research in areas spanning from quantum science to condensed matter physics and cosmology.

“You don’t have to be a geek like me to appreciate how important this work is,” he then told a packed audience of scientists, students, and community leaders in Perimeter’s atrium.

The Prime Minister was also welcomed by 200 teenagers attending the Institute’s annual Inspiring Future Women in Science conference, and via video greetings from cosmologist Stephen Hawking [he was Laflamme’s PhD supervisor], who is a Perimeter Distinguished Visiting Research Chair. The Prime Minister said he was “incredibly overwhelmed” by Hawking’s message.

“Canada is a wonderful, huge country, full of people with big hearts and forward-looking minds,” Hawking said in his message. “It’s an ideal place for an institute dedicated to the frontiers of physics. In supporting Perimeter, Canada sets an example for the world.”

The visit reiterated the Government of Canada’s pledge of $50 million over five years announced in last month’s [March 2016] budget [emphasis mine] to support Perimeter research, training, and outreach.

It was the Prime Minister’s second trip to the Region of Waterloo this year. In January [2016], he toured the region’s tech sector and universities, and praised the area’s innovation ecosystem.

This time, the focus was on the first link of the innovation chain: fundamental science that could unlock important discoveries, advance human understanding, and underpin the groundbreaking technologies of tomorrow.

As for the “Quantum Valley” in Ontario, I think there might be some competition here in British Columbia with D-Wave Systems (first commercially available quantum computing, of a sort; my Dec. 16, 2015 post is the most recent one featuring the company) and the University of British Columbia’s Stewart Blusson Quantum Matter Institute.

Getting back to Trudeau, it’s exciting to have someone who seems so interested in at least some aspects of science that he can talk about it with a degree of understanding. I knew he had an interest in literature but there is also this (from his Wikipedia entry; Note: Links have been removed),

Trudeau has a bachelor of arts degree in literature from McGill University and a bachelor of education degree from the University of British Columbia…. After graduation, he stayed in Vancouver and he found substitute work at several local schools and permanent work as a French and math teacher at the private West Point Grey Academy … . From 2002 to 2004, he studied engineering at the École Polytechnique de Montréal, a part of the Université de Montréal.[67] He also started a master’s degree in environmental geography at McGill University, before suspending his program to seek public office.[68] [emphases mine]

Trudeau is not the only political leader to have a strong interest in science. In our neighbour to the south, there’s President Barack Obama, who has done much to promote science since he was elected in 2008. David Bruggeman in an April 15, 2016 post (Obama hosts DNews segments for Science Channel week of April 11-15, 2016) and an April 17, 2016 post (Obama hosts White House Science Fair) describes two of Obama’s most recent efforts.

ETA April 19, 2016: I’ve found confirmation that this Q&A was somewhat staged as I hinted in the opening with “Prime Minister Justin Trudeau’s apparently extemporaneous response … .” Will Oremus’s April 19, 2016 article for Slate.com breaks the whole news cycle down and points out (Note: A link has been removed),

Over the weekend, even as latecomers continued to dine on the story’s rapidly decaying scraps, a somewhat different picture began to emerge. A Canadian blogger pointed out that Trudeau himself had suggested to reporters at the event that they lob him a question about quantum computing so that he could knock it out of the park with the newfound knowledge he had gleaned on his tour.

The Canadian blogger who tracked this down is J. J. McCullough (Jim McCullough) and you can read his April 16, 2016 posting on the affair here. McCullough has a rather harsh view of the media response to Trudeau’s lecture. Oremus is a bit more measured,

… Monday brought the countertake parade—smaller and less pompous, if no less righteous—led by Gawker with the headline, “Justin Trudeau’s Quantum Computing Explanation Was Likely Staged for Publicity.”

But few of us in the media today are immune to the forces that incentivize timeliness and catchiness over subtlety, and even Gawker’s valuable corrective ended up meriting a corrective of its own. Author J.K. Trotter soon updated his post with comments from Trudeau’s press secretary, who maintained (rather convincingly, I think) that nothing in the episode was “staged”—at least, not in the sinister way that the word implies. Rather, Trudeau had joked that he was looking forward to someone asking him about quantum computing; a reporter at the press conference jokingly complied, without really expecting a response (he quickly moved on to his real question before Trudeau could answer); Trudeau responded anyway, because he really did want to show off his knowledge.

Trotter deserves credit, regardless, for following up and getting a fuller picture of what transpired. He did what those who initially jumped on the story did not, which was to contact the principals for context and comment.

But my point here is not to criticize any particular writer or publication. The too-tidy Trudeau narrative was not the deliberate work of any bad actor or fabricator. Rather, it was the inevitable product of today’s inexorable social-media machine, in which shareable content fuels the traffic-referral engines that pay online media’s bills.

I suggest reading both McCullough’s and Oremus’s posts in their entirety should you find debates about the role of media compelling.

A study in contrasts: innovation and education strategies in US and British Columbia (Canada)

It’s always interesting to contrast two approaches to the same issue, in this case, innovation and education strategies designed to improve the economies of the United States and of British Columbia, a province in Canada.

One of the major differences regarding education in the US and in Canada is that the Canadian federal government, unlike the US federal government, has no jurisdiction over the matter. Education is strictly a provincial responsibility.

I recently wrote a commentary (a Jan. 19, 2016 posting) about the BC government’s Jan. 18, 2016 announcement of its innovation strategy in a special emphasis on the education aspect. Premier Christy Clark focused largely on the notion of embedding courses on computer coding in schools from K-12 (kindergarten through grade 12) as Jonathon Narvey noted in his Jan. 19, 2016 event recap for Betakit,

While many in the tech sector will be focused on the short-term benefits of a quick injection of large capital [a $100M BC Tech Fund as part of a new strategy was announced in Dec. 2015 but details about the new #BCTECH Strategy were not shared until Jan. 18, 2016], the long-term benefits for the local tech sector are being seeded in local schools. More than 600,000 BC students will be getting basic skills in the K-12 curriculum, with coding academies, more work experience electives and partnerships between high school and post-secondary institutions.

Here’s what I had to say in my commentary (from the Jan. 19, 2016 posting),

… the government wants to embed computer coding into the education system for K-12 (kindergarten to grade 12). One determined reporter (Canadian Press if memory serves) attempted to find out how much this would cost. No answer was forthcoming although there were many words expended. Whether this failure was due to ignorance (disturbing!) or a reluctance to share (also disturbing!) was impossible to tell. Another reporter (Georgia Straight) asked about equipment (coding can be taught with pen and paper but hardware is better). … Getting back to the reporter’s question, no answer was forthcoming although the speaker was loquacious.

Another reporter asked if the government had found any jurisdictions doing anything similar regarding computer coding. It seems they did consider other jurisdictions although it was claimed that BC is the first to strike out in this direction. Oddly, no one mentioned Estonia, known in some circles as E-stonia, where the entire school system was online by the late 1990s in an initiative known as the ‘Tiger Leap Foundation’ which also supported computer coding classes in secondary school (there’s more in Tim Mansel’s May 16, 2013 article about Estonia’s then latest initiative to embed computer coding into grade school.) …

Aside from the BC government’s failure to provide details, I am uncomfortable with what I see as an overemphasis on computer coding that suggests a narrow focus on what constitutes a science and technology strategy for education. I find the US approach closer to what I favour although I may be biased since they are building their strategy around nanotechnology education.

The US approach had been announced in dribs and drabs until recently when a Jan. 26, 2016 news item on Nanotechnology Now indicated a broad-based plan for nanotechnology education (and computer coding),

Over the past 15 years, the Federal Government has invested over $22 billion in R&D under the auspices of the National Nanotechnology Initiative (NNI) to understand and control matter at the nanoscale and develop applications that benefit society. As these nanotechnology-enabled applications become a part of everyday life, it is important for students to have a basic understanding of material behavior at the nanoscale, and some states have even incorporated nanotechnology concepts into their K-12 science standards. Furthermore, application of the novel properties that exist at the nanoscale, from gecko-inspired climbing gloves and invisibility cloaks, to water-repellent coatings on clothes or cellphones, can spark students’ excitement about science, technology, engineering, and mathematics (STEM).

An earlier Jan. 25, 2016 White House blog posting by Lisa Friedersdorf and Lloyd Whitman introduced the notion that nanotechnology is viewed as foundational and a springboard for encouraging interest in STEM (science, technology, engineering, and mathematics) careers while outlining several formal and informal education efforts,

The Administration’s updated Strategy for American Innovation, released in October 2015, identifies nanotechnology as one of the emerging “general-purpose technologies”—a technology that, like the steam engine, electricity, and the Internet, will have a pervasive impact on our economy and our society, with the ability to create entirely new industries, create jobs, and increase productivity. To reap these benefits, we must train our Nation’s students for these high-tech jobs of the future. Fortunately, the multidisciplinary nature of nanotechnology and the unique and fascinating phenomena that occur at the nanoscale mean that nanotechnology is a perfect topic to inspire students to pursue careers in science, technology, engineering, and mathematics (STEM).

The Nanotechnology: Super Small Science series [mentioned in my Jan. 21, 2016 posting] is just the latest example of the National Nanotechnology Initiative (NNI)’s efforts to educate and inspire our Nation’s students. Other examples include:

The announcement that computer coding courses would be integrated into US K-12 curricula was made in US President Barack Obama’s 2016 State of the Union speech and covered in a Jan. 30, 2016 article by Jessica Hullinger for Fast Company,

In his final State Of The Union address earlier this month, President Obama called for providing hands-on computer science classes for all students to make them “job ready on day one.” Today, he is unveiling how he plans to do that with his upcoming budget.

The President’s Computer Science for All Initiative seeks to provide $4 billion in funding for states and an additional $100 million directly to school districts in a push to provide access to computer science training in K-12 public schools. The money would go toward things like training teachers, providing instructional materials, and getting kids involved in computer science early in elementary and middle school.

There are more details in Hullinger’s article and in a Jan. 30, 2016 White House blog posting by Megan Smith,

Computer Science for All is the President’s bold new initiative to empower all American students from kindergarten through high school to learn computer science and be equipped with the computational thinking skills they need to be creators in the digital economy, not just consumers, and to be active citizens in our technology-driven world. Our economy is rapidly shifting, and both educators and business leaders are increasingly recognizing that computer science (CS) is a “new basic” skill necessary for economic opportunity and social mobility.

CS for All builds on efforts already being led by parents, teachers, school districts, states, and private sector leaders from across the country.

Nothing says one approach has to be better than the other as there’s usually more than one way to accomplish a set of goals. As well, it’s unfair to expect a provincial government to emulate the federal government of a larger country with more money to spend. I just wish the BC government (a) had shared details such as the budget allotment for their initiative and (b) would hint at a more imaginative, long range view of STEM education.

Going back to Estonia one last time, in addition to the country’s recent introduction of computer coding classes in grade school, it has also embarked on a nanotechnology/nanoscience educational and entrepreneurial programme as noted in my Sept. 30, 2014 posting,

The University of Tartu (Estonia) announced in a Sept. 29, 2014 press release an educational and entrepreneurial programme about nanotechnology/nanoscience for teachers and students,

To bring nanoscience closer to pupils, educational researchers of the University of Tartu decided to implement the European Union LLP Comenius project “Quantum Spin-Off – connecting schools with high-tech research and entrepreneurship”. The objective of the project is to build a kind of a bridge: at one end, pupils can familiarise themselves with modern science, and at the other, experience its application opportunities at high-tech enterprises. “We also wish to inspire these young people to choose a specialisation related to science and technology in the future,” added Lukk [Maarika Lukk, Coordinator of the project].

The pupils can choose between seven topics of nanotechnology: the creation of artificial muscles, microbiological fuel elements, manipulation of nanoparticles, nanoparticles and ionic liquids as oil additives, materials used in regenerative medicine, deposition and 3D-characterisation of atomically designed structures and a topic covered in English, “Artificial robotic fish with EAP elements”.

Learning is based on study modules in the field of nanotechnology. In addition, each team of pupils will read a scientific publication, selected for them by an expert of that particular field. In that way, pupils will develop an understanding of the field and of scientific texts. On the basis of the scientific publication, the pupils prepare their own research project and a business plan suitable for applying the results of the project.

In each field, experts of the University of Tartu will help to understand the topics. Participants will visit a nanotechnology research laboratory and enterprises using nanotechnologies.

The project lasts for two years and it is also implemented in Belgium, Switzerland and Greece.

As they say, time will tell.

Performances Tom Hanks never gave

The answer to the question, "What makes Tom Hanks look like Tom Hanks?" leads to machine learning and algorithms according to a Dec. 7, 2015 University of Washington news release (also on EurekAlert). Note: Links have been removed,

Tom Hanks has appeared in many acting roles over the years, playing young and old, smart and simple. Yet we always recognize him as Tom Hanks.

Why? Is it his appearance? His mannerisms? The way he moves?

University of Washington researchers have demonstrated that it’s possible for machine learning algorithms to capture the “persona” and create a digital model of a well-photographed person like Tom Hanks from the vast number of images of them available on the Internet.

With enough visual data to mine, the algorithms can also animate the digital model of Tom Hanks to deliver speeches that the real actor never performed.

“One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” said lead author Supasorn Suwajanakorn, a UW graduate student in computer science and engineering.

As for the performances Tom Hanks never gave, the news release offers more detail,

The technology relies on advances in 3-D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The new results will be presented in a paper at the International Conference on Computer Vision in Chile on Dec. 16.

The team’s latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else — for instance, mapping former president George W. Bush’s mannerisms onto the faces of other politicians and celebrities.

Here’s a video demonstrating how former President Bush’s speech and mannerisms have mapped onto other famous faces including Hanks’s,

The research team has future plans for this technology (from the news release),

It’s one step toward a grand goal shared by the UW computer vision researchers: creating fully interactive, three-dimensional digital personas from family photo albums and videos, historic collections or other existing visuals.

As virtual and augmented reality technologies develop, they envision using family photographs and videos to create an interactive model of a relative living overseas or a far-away grandparent, rather than simply Skyping in two dimensions.

“You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” said senior author Kemelmacher-Shlizerman. “Such technology doesn’t exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?”

One day the reconstruction technology could be taken a step further, researchers say.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person — LeBron James, Barack Obama, Charlie Chaplin — and interact with them,” said co-author Steve Seitz, UW professor of computer science and engineering. “We’re trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn’t say but it still feels like them? This paper is demonstrating that ability.”

Existing technologies to create detailed three-dimensional holograms or digital movie characters like Benjamin Button often rely on bringing a person into an elaborate studio. They painstakingly capture every angle of the person and the way they move — something that can’t be done in a living room.

Other approaches still require a person to be scanned by a camera to create basic avatars for video games or other virtual environments. But the UW computer vision experts wanted to digitally reconstruct a person based solely on a random collection of existing images.

To reconstruct celebrities like Tom Hanks, Barack Obama and Daniel Craig, the machine learning algorithms mined a minimum of 200 Internet images taken over time in various scenarios and poses — a process known as learning ‘in the wild.’

“We asked, ‘Can you take Internet photos or your personal photo collection and animate a model without having that person interact with a camera?'” said Kemelmacher-Shlizerman. “Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal.”
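The core of working with "unconstrained data" is getting wildly different photos into a common coordinate frame before combining them. As a rough illustration only (this is a minimal align-and-average sketch, not the UW team's actual pipeline, and the canonical eye positions and synthetic images below are invented for the example), one can solve for the similarity transform that moves each photo's detected eye landmarks onto canonical positions, warp each photo accordingly, and average:

```python
import numpy as np

def similarity_from_eyes(eyes, canon):
    """Two-point Procrustes: find scale s, rotation R, translation t
    mapping detected eye points onto canonical eye positions."""
    src = eyes - eyes.mean(axis=0)
    dst = canon - canon.mean(axis=0)
    # complex-number trick: treat (x, y) as x + iy, solve for s * e^{i*theta}
    zs = src[:, 0] + 1j * src[:, 1]
    zd = dst[:, 0] + 1j * dst[:, 1]
    a = (zd @ np.conj(zs)) / (zs @ np.conj(zs))
    s, th = abs(a), np.angle(a)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    t = canon.mean(axis=0) - s * (R @ eyes.mean(axis=0))
    return s, R, t

def warp_to_canonical(img, eyes, canon, out_shape):
    """Resample img so its eye landmarks land on the canonical positions
    (nearest-neighbor inverse mapping, grayscale only)."""
    s, R, t = similarity_from_eyes(eyes, canon)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = (pts - t) @ np.linalg.inv(s * R).T   # canonical pixel -> source pixel
    sx = np.clip(np.round(src[:, 0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(src[:, 1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w)

# toy stand-ins for "in the wild" photos: random images with jittered eyes
canon = np.array([[20.0, 24.0], [40.0, 24.0]])   # hypothetical canonical eyes
rng = np.random.default_rng(0)
aligned = []
for _ in range(5):
    img = rng.uniform(0, 1, (80, 80))
    eyes = np.array([[30.0, 35.0], [55.0, 38.0]]) + rng.normal(0, 2, (2, 2))
    aligned.append(warp_to_canonical(img, eyes, canon, (60, 60)))
avg = np.mean(aligned, axis=0)
```

The real system handles 3-D pose, lighting, and expression variation; this sketch only conveys why alignment to a shared frame is the prerequisite for averaging hundreds of uncontrolled photos.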

Suwajanakorn more recently developed techniques to capture expression-dependent textures — small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.

By manipulating the lighting conditions across different photographs, he developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face. That breakthrough enables the team to ‘control’ the digital model with a video of another person, and could potentially enable a host of new animation and virtual reality applications.

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”
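The intuition behind keeping "George Clooney's" identity while borrowing "George Bush's" motion can be sketched in landmark space: copy only the displacement between the source's neutral and expressive faces onto the target's own neutral geometry. This is a deliberately simplified illustration (the paper's actual method works with dense expression-dependent textures, not the toy 2-D landmarks invented here):

```python
import numpy as np

def transfer_expression(src_neutral, src_frame, dst_neutral, scale=1.0):
    """Add the source's expression displacement to the target's neutral
    landmarks: identity (dst_neutral) is preserved, only motion is copied."""
    return dst_neutral + scale * (src_frame - src_neutral)

# toy 2-D mouth-corner landmarks (hypothetical values for illustration)
src_neutral = np.array([[0.0, 0.0], [1.0, 0.0]])
src_smile   = np.array([[-0.1, 0.2], [1.1, 0.2]])   # corners pull out and up
dst_neutral = np.array([[0.2, -0.1], [1.3, -0.1]])  # a differently shaped face
dst_smile = transfer_expression(src_neutral, src_smile, dst_neutral)
```

Run per video frame, this is the skeleton of "puppeteering": the target face keeps its own proportions while inheriting the source performer's frame-by-frame motion.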

Here’s a link to and a citation for the paper presented at the conference in Chile,

What Makes Tom Hanks Look Like Tom Hanks by Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman for the 2015 ICCV conference, Dec. 13 – 15, 2015 in Chile.

You can find out more about the conference here.