Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity, and I have articles about a few of these efforts. According to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace, China was the first country to enact AI regulations of any kind,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters. I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also the January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
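As a rough sanity check on those press-release figures (a sketch; the 350-million multiplier and the thirteen-year window come from the quoted text), a quick compound-growth calculation shows the two numbers are consistent with each other:

```python
import math

# Press-release figures: the largest AI models now use ~350 million times
# more training compute than thirteen years ago, with compute doubling
# roughly every six months since 2010.
multiplier = 350e6
years = 13

doublings = math.log2(multiplier)               # number of doublings implied
months_per_doubling = years * 12 / doublings    # implied doubling time

print(f"{doublings:.1f} doublings over {years} years")
print(f"≈ one doubling every {months_per_doubling:.1f} months")
```

The implied doubling time works out to roughly five and a half months, in line with the report’s “around every six months.”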

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
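To make the registry idea concrete, here is a minimal, purely hypothetical sketch of what an auditable chip-transfer ledger could look like (the report does not specify a schema; all names and fields below are my own invention for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Transfer:
    """A hypothetical record of one chip changing hands."""
    chip_id: str
    seller: str
    buyer: str

@dataclass
class ChipRegistry:
    """Toy registry mapping each unique chip ID to its current holder."""
    holders: dict = field(default_factory=dict)

    def register(self, chip_id: str, producer: str) -> None:
        self.holders[chip_id] = producer

    def record_transfer(self, t: Transfer) -> None:
        # A mismatch here would flag an unregistered or 'ghost' chip.
        if self.holders.get(t.chip_id) != t.seller:
            raise ValueError("seller does not hold this chip")
        self.holders[t.chip_id] = t.buyer

    def holdings(self, actor: str) -> int:
        """How much registered compute an actor possesses at this moment."""
        return sum(1 for h in self.holders.values() if h == actor)

reg = ChipRegistry()
reg.register("chip-001", "FabCo")
reg.record_transfer(Transfer("chip-001", "FabCo", "CloudCorp"))
print(reg.holdings("CloudCorp"))  # 1
```

The point of the sketch is simply that requiring producers, sellers, and resellers to report every transfer would let an auditor total up any actor’s holdings at any time, exactly the “precise information” the report describes.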

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
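The multi-party “start switch” described above amounts to a quorum rule: no single actor can authorize a risky training run alone. A minimal sketch of that logic (the parties and quorum size are hypothetical, not from the report):

```python
def can_unlock(approvals: set, parties: set, quorum: int) -> bool:
    """Unlock AI compute only if enough recognized parties consent."""
    valid = approvals & parties  # ignore approvals from unknown parties
    return len(valid) >= quorum

parties = {"regulator", "cloud_provider", "auditor"}

# Two of three consents: the training run stays locked.
print(can_unlock({"regulator", "auditor"}, parties, quorum=3))  # False

# All three consent: the run can proceed.
print(can_unlock({"regulator", "auditor", "cloud_provider"}, parties, quorum=3))  # True
```

Any one party withholding consent acts as the “digital veto” the report mentions, which is the same design principle as two-person control for nuclear weapons.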

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI). Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
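For readers unfamiliar with the metric just cited: BLEU-1 is, in essence, clipped unigram precision scaled by a brevity penalty. Here is a toy sketch (real evaluations use established toolkits such as NLTK or sacreBLEU; this simplified version is illustrative only), using the press release’s own ‘the man’ versus ‘the author’ example:

```python
from collections import Counter
import math

def bleu1(candidate: list, reference: list) -> float:
    """Toy BLEU-1: clipped unigram precision times a brevity penalty."""
    cand, ref = Counter(candidate), Counter(reference)
    # Clip each candidate word's count by its count in the reference.
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    precision = clipped / max(len(candidate), 1)
    # Brevity penalty discourages trivially short candidates.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(candidate))
    return bp * precision

cand = "the man wrote a book".split()
ref = "the author wrote a book".split()
print(round(bleu1(cand, ref), 2))  # 0.8 -- four of five words match
```

A synonym substitution like ‘man’ for ‘author’ costs the model one matched word, which is why synonymous-pair errors drag down a score that a human reader might judge as nearly correct.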

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
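The article says the researchers simulated fNIRS resolution by simply blurring their fMRI data. As a toy illustration of that idea only (the actual preprocessing is not described; the signal values and filter width below are arbitrary), spatial blurring can be as simple as a moving-average filter:

```python
def box_blur(signal: list, width: int) -> list:
    """Smooth a 1-D signal with a simple moving-average (box) filter,
    mimicking the loss of spatial resolution in a coarser sensor."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

fine = [0, 0, 10, 0, 0, 0, 10, 0, 0]   # stand-in for sharp high-res responses
coarse = box_blur(fine, width=3)        # smeared, lower effective resolution
print(coarse)
```

The sharp peaks in the high-resolution stand-in get smeared across neighbouring positions, which is the sense in which blurred fMRI data can stand in for a lower-resolution fNIRS measurement.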

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”
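Huth's point about individualized training can be illustrated with a toy simulation (my own sketch, not Huth and Tang's actual method): give two simulated 'subjects' different linear mappings from shared stimulus features to voxel responses, fit a decoder on one subject, and watch it fail on the other.

```python
import numpy as np

# Toy illustration of why a decoder trained on one subject fails on another.
# All names and numbers here are invented for the sketch.
rng = np.random.default_rng(0)
n_train, n_test, d_feat, d_vox = 400, 100, 10, 50

# Shared stimulus features (stand-ins for semantic embeddings of a story)
X_train = rng.standard_normal((n_train, d_feat))
X_test = rng.standard_normal((n_test, d_feat))

# Each subject maps the same features to voxel responses differently
W_a = rng.standard_normal((d_feat, d_vox))
W_b = rng.standard_normal((d_feat, d_vox))

def record(X, W, noise=0.1):
    """Simulated brain recording: features through a subject-specific map, plus noise."""
    return X @ W + noise * rng.standard_normal((X.shape[0], W.shape[1]))

# Fit a linear decoder (voxels -> features) on subject A only
B, *_ = np.linalg.lstsq(record(X_train, W_a), X_train, rcond=None)

def score(V, X):
    """Mean per-feature correlation between decoded and true features."""
    X_hat = V @ B
    return np.mean([np.corrcoef(X_hat[:, j], X[:, j])[0, 1] for j in range(d_feat)])

same = score(record(X_test, W_a), X_test)    # decoder applied to its own subject
cross = score(record(X_test, W_b), X_test)   # decoder applied to a different subject
print(f"same-subject r = {same:.2f}, cross-subject r = {cross:.2f}")
```

Running this gives a same-subject correlation near 1 and a cross-subject correlation near 0, echoing the qualitative finding that many hours of individualized training data are what make the decoder work at all.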

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

Japan inaugurates world’s biggest experimental operating nuclear fusion reactor

Andrew Paul’s December 4, 2023 article for Popular Science attempts to give readers a sense of the scale, and this is one of those times when words are better than pictures, Note: Links have been removed,

Japan and the European Union have officially inaugurated testing at the world’s largest experimental nuclear fusion plant. Located roughly 85 miles north of Tokyo, the six-story, JT-60SA “tokamak” facility heats plasma to 200 million degrees Celsius (around 360 million Fahrenheit) within its circular, magnetically insulated reactor. Although JT-60SA first powered up during a test run back in October [2023], the partner governments’ December 1 announcement marks the official start of operations at the world’s biggest fusion center, reaffirming a “long-standing cooperation in the field of fusion energy.”

The tokamak—an acronym of the Russian-language designation of “toroidal chamber with magnetic coils”—has led researchers’ push towards achieving the “Holy Grail” of sustainable green energy production for decades. …

Speaking at the inauguration event, EU energy commissioner Kadri Simson referred to the JT-60SA as “the most advanced tokamak in the world,” representing “a milestone for fusion history.”

But even if such a revolutionary milestone is crossed, it likely won’t be at JT-60SA. Along with its still-in-construction sibling, the International Thermonuclear Experimental Reactor (ITER) in Europe, the projects are intended solely to demonstrate scalable fusion’s feasibility. Current hopes estimate ITER’s operational start for sometime in 2025, although the undertaking has been fraught with financial, logistical, and construction issues since its groundbreaking back in 2011.

See what I mean about a picture not really conveying the scale,

Until ITER turns on, Japan’s JT-60SA fusion reactor will be the largest in the world. [Image credit: National Institutes for Quantum Science and Technology]

Dennis Normile’s October 31, 2023 article for Science magazine describes the facility’s (Japan’s JT-60SA fusion reactor) test run and future implications for the EU’s ITER project,

The long trek toward practical fusion energy passed a milestone last week when the world’s newest and largest fusion reactor fired up. Japan’s JT-60SA uses magnetic fields from superconducting coils to contain a blazingly hot cloud of ionized gas, or plasma, within a doughnut-shaped vacuum vessel, in hope of coaxing hydrogen nuclei to fuse and release energy. The four-story-high machine is designed to hold a plasma heated to 200 million degrees Celsius for about 100 seconds, far longer than previous large tokamaks.

Last week’s achievement “proves to the world that the machine fulfills its basic function,” says Sam Davis, a project manager at Fusion for Energy, an EU organization working with Japan’s National Institutes for Quantum Science and Technology (QST) on JT-60SA and related programs. It will take another 2 years before JT-60SA produces the long-lasting plasmas needed for meaningful physics experiments, says Hiroshi Shirai, leader of the project for QST.

JT-60SA will also help ITER, the mammoth international fusion reactor under construction in France that’s intended to demonstrate how fusion can generate more energy than goes into producing it. ITER will rely on technologies and operating know-how that JT-60SA will test.

Japan got to host JT-60SA and two other small fusion research facilities as a consolation prize for agreeing to let ITER go to France. …

As Normile notes, the ITER project has had a long and rocky road so far.
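As an aside, the figures quoted above (200 million degrees Celsius, roughly 100-second plasmas) can be put in context with the 'Lawson triple product' fusion researchers use to gauge progress toward breakeven. In the sketch below, the density and energy-confinement-time values are assumed, typical-tokamak numbers of my own choosing, not JT-60SA specifications, and note that the 100-second pulse length is not the same thing as the confinement time.

```python
# Back-of-envelope Lawson triple-product check.
# n and tau_E below are assumed illustrative values, NOT JT-60SA specifications.
K_PER_KEV = 1.16e7            # kelvin per keV of plasma temperature
T_kelvin = 200e6              # temperature cited in the articles above
T_keV = T_kelvin / K_PER_KEV  # about 17 keV

n = 1e20      # assumed ion density, particles per m^3 (typical tokamak order)
tau_E = 1.0   # assumed energy confinement time in seconds (not pulse length)

triple = n * T_keV * tau_E    # keV·s/m^3
THRESHOLD = 3e21              # rough D-T ignition triple product

print(f"T ≈ {T_keV:.1f} keV, n·T·tau ≈ {triple:.2e} keV·s/m³")
print(f"fraction of rough ignition threshold: {triple / THRESHOLD:.0%}")
```

With these assumed numbers the plasma sits within an order of magnitude of the rough ignition threshold, which is why machines like JT-60SA and ITER chase longer confinement at these temperatures.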

The Canadians

As it turns out, there’s a company in British Columbia, Canada, that is also on the road to fusion energy. Not so imaginatively, it’s called General Fusion, but it takes a different approach to developing this ‘clean energy’. (See my October 28, 2022 posting, “Overview of fusion energy scene,” which includes information about the international scene and some of the approaches, including General Fusion’s, to developing the technology; my October 11, 2023 posting offers an update on the General Fusion situation.) Since my October 2023 posting, there have been a few developments at General Fusion.

This December 4, 2023 General Fusion news release celebrates a new infusion of cash from the Canadian government; take special note of the first item in the ‘Quick Facts,’ which describes the advantage this technology offers,

Today [December 4, 2023], General Fusion announced that Canada’s Strategic Innovation Fund (SIF) has awarded CA$5 million to support research and development to advance the company’s Magnetized Target Fusion (MTF) demonstration at its Richmond headquarters. Called LM26, this ground-breaking machine will progress major technical milestones required to commercialize zero-carbon fusion power by the early to mid-2030s. The funds are an addition to the existing contribution agreement with SIF, to support the development of General Fusion’s transformational technology.

Fusion energy is the ultimate clean energy solution. It is what powers the sun and stars. It’s the process by which two light nuclei merge to form a heavier one, emitting a massive amount of energy. By 2100, the production and export of the Canadian industry’s fusion energy technology could provide up to $1.26 trillion in economic benefits to Canada. Additionally, fusion could completely offset 600 MT CO2-e emissions, the equivalent of over 160 coal-fired power plants for a single year. When commercialized, a single General Fusion power plant will be designed to provide zero-carbon power to approximately 150,000 Canadian homes, with the ability to be placed close to energy demand at a cost competitive with other energy sources such as coal and natural gas.1

Quotes:

“For more than 20 years, General Fusion has advanced its uniquely practical Magnetized Target Fusion technology and IP at its Canadian headquarters. LM26 will significantly de-risk our commercialization program and puts us on track to bring our game-changing, zero-emissions energy solution to Canada, and the world, in the next decade,” said Greg Twinney, CEO, General Fusion.

“Fusion technology has the potential to completely revolutionize the energy sector by giving us access to an affordable unlimited renewable power source. Since General Fusion is at the forefront of this technology, our decision to keep supporting the company will give us the tools we need to reduce greenhouse gas emissions and reach our climate goals. Our government is proud to invest in this innovative project to drive the creation of hundreds of middle-class jobs and position Canada as a world leader in fusion energy technology,” said The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry.

“British Columbia has a thriving innovation economy. In August, the B.C. Government announced CA$5 million in provincial support for General Fusion’s homegrown technology, and we’re pleased to see the Federal government has now provided funds to support General Fusion. These investments will help General Fusion as they continue to develop their core technology right here in B.C.,” said Brenda Bailey, B.C. Minister of Jobs, Economic Development and Innovation.

Quick Facts:

*Magnetized Target Fusion uniquely sidesteps challenges to commercialization that other technologies face. The game-changer is a proprietary liquid metal liner in the commercial fusion machine that is mechanically compressed by high-powered pistons. This enables fusion conditions to be created in short pulses rather than creating a sustained reaction. General Fusion’s design does not require large superconducting magnets or an expensive array of lasers.

*LM26 aims to achieve two of the most significant technical milestones required to commercialize fusion energy, targeting fusion conditions of over 100 million degrees Celsius by 2025, and progressing toward scientific breakeven equivalent by 2026.

*LM26’s plasmas will be approximately 50 per cent scale of a commercial fusion machine. It aims to achieve deuterium-tritium breakeven equivalent using deuterium fuel.

*The Canadian government is investing an additional CA$5 million for a total of CA$54.3 million to support the development of General Fusion’s energy technology through the Strategic Innovation Fund program.

*As a result of the government’s ongoing support, General Fusion has advanced its technology, building more than 24 plasma prototypes, filing over 170 patents, and conducting more than 200,000 experiments at its Canadian labs.
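The carbon arithmetic in the news release is easy to sanity-check; the only inputs are the two figures quoted above.

```python
# Sanity check on the news release's carbon claim.
total_offset_mt = 600   # MT CO2-e the release says fusion "could completely offset"
coal_plants = 160       # "over 160 coal-fired power plants for a single year"

per_plant = total_offset_mt / coal_plants
print(f"implied emissions per coal plant: {per_plant:.2f} MT CO2-e per year")
```

That works out to about 3.75 MT CO2-e per plant per year, which is in the right range for a large coal-fired plant, so the two figures in the release are at least internally consistent.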

This January 11, 2024 General Fusion news release highlights some of the company’s latest research,

General Fusion has published new, peer-reviewed scientific results that validate the company has achieved the smooth, rapid, and symmetric compression of a liquid cavity that is key to the design of a commercial Magnetized Target Fusion power plant. The results, published in one of the foremost scientific journals in fusion, Fusion Engineering and Design [open access paper], validate the performance of General Fusion’s proprietary liquid compression technology for Magnetized Target Fusion and are scalable to a commercial machine.

General Fusion’s Magnetized Target Fusion technology uses mechanical compression of a plasma to achieve fusion conditions. High-speed drivers rapidly power a precisely shaped, symmetrical collapse of a liquid metal cavity that envelopes the plasma. In three years, General Fusion commissioned a prototype of its liquid compression system and completed over 1,000 shots, validating the compression technology. In addition, this scale model of General Fusion’s commercial compression system verified the company’s open-source computational fluid dynamics simulation. The paper confirms General Fusion’s concept for the compression system of a commercial machine.

“General Fusion has proven success scaling individual technologies, creating the pathway to integrate, deploy, and commercialize practical fusion energy,” said Greg Twinney, CEO, General Fusion. “The publication of these results demonstrates General Fusion has the science and engineering capabilities to progress the design of our proprietary liquid compression system to commercialization.”

General Fusion’s approach to compressing plasma to create fusion energy is unique. Its Magnetized Target Fusion technology is designed to address the barriers to commercialization that other fusion technologies still face. The game-changer is the proprietary liquid metal liner in the fusion vessel that is mechanically compressed by high-powered pistons. This allows General Fusion to create fusion conditions in short pulses, rather than creating a sustained reaction, while protecting the machine’s vessel, extracting heat, and re-breeding fuel.

Today [January 11, 2024] at its Canadian labs, General Fusion is building a ground-breaking Magnetized Target Fusion demonstration called Lawson Machine 26 (LM26). Designed to reach fusion conditions of over 100 million degrees Celsius by 2025 and progress towards scientific breakeven equivalent by 2026, LM26 fast-tracks General Fusion’s technical progress to provide commercial fusion energy to the grid by the early to mid-2030s.
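For a sense of why compressing a plasma in short pulses can reach fusion conditions at all, here is an idealized adiabatic-compression estimate. The starting temperature and compression ratio are my own illustrative assumptions, not General Fusion’s engineering figures.

```python
# Idealized adiabatic compression: T_final = T_initial * (V0/V1)^(gamma - 1).
# Starting temperature and compression ratio are illustrative assumptions.
gamma = 5.0 / 3.0         # adiabatic index for a fully ionized (monatomic) plasma
T_initial = 2e6           # assumed pre-compression plasma temperature, K
compression_ratio = 1000  # assumed volumetric compression V0/V1

T_final = T_initial * compression_ratio ** (gamma - 1.0)
print(f"post-compression temperature ≈ {T_final:.1e} K")
```

A thousand-fold volume compression multiplies temperature by 1000^(2/3) = 100, taking an assumed 2-million-kelvin plasma into the roughly 200-million-degree regime mentioned above; real machines of course face losses that this ideal-gas picture ignores.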

Exciting times for us all and I wish good luck to all of the clean energy efforts wherever they are being pursued.

Thinking outside the curriculum: ‘Open schooling’ for science

An anecdote kicks off this October 20, 2023 news item on phys.org,

In a part of Sweden northeast of Stockholm, Nina Berglund likes trying out new ways to teach her science students aged 10 to 12.

Berglund recently invited a physics professor named Staffan Yngve to her class in the municipality of Norrtälje. Yngve brought with him a nail mat on which he proceeded to lie down to demonstrate the forces at work, delighting the students. “Even four months after, my pupils still remember it and speak about the visit using scientific terminology,” said Berglund.

She is a proponent of “open schooling,” an idea that science teaching must go beyond the staples of school labs such as test tubes, Bunsen burners and the periodic table to get students interested.

Amid concerns that Europe is attracting too few people—especially women—into scientific fields, the aim is to bring science to life for pupils.

While it has no formal defining characteristics, open schooling tends to feature activities such as on-site visits, off-site trips and remote learning that are generally exceptions in standard schools.

The story about open schooling in Europe comes from an October 19, 2023 article written by Andrew Dunne for Horizon: The EU Research & Innovation Magazine (also on Horizon science blog), Note: A link has been removed,

‘The big idea is to overcome the barriers we see with science education,’ said Maya Halevy, director of the Bloomfield Science Museum in Jerusalem, Israel.

Halevy led a research project that received EU funding to advance the whole concept. Called Make it Open, or MiO, the project ended in September 2023 after three years.

It helped to establish open schooling “hubs” in 10 European countries ranging from Sweden to Greece, bringing together more than 150 schools.

… at a Spanish educational institution called IES de Ortigueira in the northwestern part of the country, 12-year-olds learnt about physics by designing and building model playgrounds. The models were then displayed in the library, where the students explained their work to visitors.

At the primary school of Makrygialos near Greece’s second-biggest city, Thessaloniki, teacher Thanos Batsilas and his students were part of a living lab that taught environmental science through an activity involving mussel farming.

They accompanied farmers on a boat out to sea to observe how the environment is inextricably linked to the wellbeing of area residents and how climate change is advancing. The underlying point was that mussel farming is a viable way to make a living and can help support the local ecosystem.

‘Children loved the living-lab activities because they love anything that is out of the box,’ Batsilas said. ‘They embrace it.’

Koulouris [Pavlos Koulouris, faculty member at a school called Ellinogermaniki Agogi] said open schooling has the potential to turn traditional notions of academic achievement on their head.

You can find the Make it Open website here.

10 years of the European Union’s roll of the dice: €1B or 1 billion euros each for the Human Brain Project (HBP) and the Graphene Flagship

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each to be paid out over ten years.

Human Brain Project (HBP)

A September 12, 2023 Human Brain Project (HBP) press release (also on EurekAlert) summarizes the ten year research effort and the achievements,

The EU-funded Human Brain Project (HBP) comes to an end in September and celebrates its successful conclusion today with a scientific symposium at Forschungszentrum Jülich (FZJ). The HBP was one of the first flagship projects and, with 155 cooperating institutions from 19 countries and a total budget of 607 million euros, one of the largest research projects in Europe. Forschungszentrum Jülich, with its world-leading brain research institute and the Jülich Supercomputing Centre, played an important role in the ten-year project.

“Understanding the complexity of the human brain and explaining its functionality are major challenges of brain research today”, says Astrid Lambrecht, Chair of the Board of Directors of Forschungszentrum Jülich. “The instruments of brain research have developed considerably in the last ten years. The Human Brain Project has been instrumental in driving this development – and not only gained new insights for brain research, but also provided important impulses for information technologies.”

HBP researchers have employed highly advanced methods from computing, neuroinformatics and artificial intelligence in a truly integrative approach to understanding the brain as a multi-level system. The project has contributed to a deeper understanding of the complex structure and function of the brain and enabled novel applications in medicine and technological advances.

Among the project’s highlight achievements are a three-dimensional, digital atlas of the human brain with unprecedented detail, personalised virtual models of patient brains with conditions like epilepsy and Parkinson’s, breakthroughs in the field of artificial intelligence, and an open digital research infrastructure – EBRAINS – that will remain an invaluable resource for the entire neuroscience community beyond the end of the HBP.

Researchers at the HBP have presented scientific results in over 3000 publications, as well as advanced medical and technical applications and over 160 freely accessible digital tools for neuroscience research.

“The Human Brain Project has a pioneering role for digital brain research with a unique interdisciplinary approach at the interface of neuroscience, computing and technology,” says Katrin Amunts, Director of the HBP and of the Institute for Neuroscience and Medicine at FZJ. “EBRAINS will continue to power this new way of investigating the brain and foster developments in brain medicine.”

“The impact of what you achieved in digital science goes beyond the neuroscientific community”, said Gustav Kalbe, CNECT, Acting Director of Digital Excellence and Science Infrastructures at the European Commission during the opening of the event. “The infrastructure that the Human Brain Project has established is already seen as a key building block to facilitate cooperation and research across geographical boundaries, but also across communities.”

Further information about the Human Brain Project as well as photos from research can be found here: https://fz-juelich.sciebo.de/s/hWJkNCC1Hi1PdQ5.

Results highlights and event photos are available in the online press release.

Results overviews:
– “Human Brain Project: Spotlights on major achievements” and “A closer Look on Scientific Advances”

– “Human Brain Project: An extensive guide to the tools developed”

Examples of results from the Human Brain Project:

As the “Google Maps of the brain” [emphasis mine], the Human Brain Project makes the most comprehensive digital brain atlas to date available to all researchers worldwide. The atlas by Jülich researchers and collaborators combines high-resolution data of neurons, fibre connections, receptors and functional specialisations in the brain, and is designed as a constantly growing system.

13 hospitals in France are currently testing the new “Virtual Epileptic Patient” – a platform developed at the University of Marseille [Aix-Marseille University?] in the Human Brain Project. It creates personalised simulation models of brain dynamics to provide surgeons with predictions for the success of different surgical treatment strategies. The approach was presented this year in the journals Science Translational Medicine and The Lancet Neurology.



SpiNNaker2 is a “neuromorphic” [brainlike] computer developed by the University of Manchester and TU Dresden within the Human Brain Project. The company SpiNNcloud Systems in Dresden is commercialising the approach for AI applications. (Image: Sprind.org)

As an openly accessible digital infrastructure, EBRAINS offers scientists easy access to the best techniques for complex research questions.

[https://www.ebrains.eu/]

There was a Canadian connection at one time; Montréal Neuro at Canada’s McGill University was involved in developing a computational platform for neuroscience (CBRAIN) for the HBP, according to an announcement in my January 29, 2013 posting. However, there’s no mention of the EU project on the CBRAIN website, nor is there mention of a Canadian partner on the EBRAINS website, which seems the most likely successor to the CBRAIN portion of the HBP project originally mentioned in 2013.

I couldn’t resist “Google maps of the brain.”

In any event, the statement from Astrid Lambrecht offers an interesting contrast to that offered by the leader of the other project.

Graphene Flagship

In fact, the Graphene Flagship has been celebrating its 10th anniversary since last year; see my September 1, 2022 posting titled “Graphene Week (September 5 – 9, 2022) is a celebration of 10 years of the Graphene Flagship.”

The flagship’s lead institution, Chalmers University of Technology in Sweden, issued an August 28, 2023 press release by Lisa Gahnertz (also on the Graphene Flagship website but published September 4, 2023) touting its achievement with an ebullience I am more accustomed to seeing in US news releases,

Chalmers steers Europe’s major graphene venture to success

For the past decade, the Graphene Flagship, the EU’s largest ever research programme, has been coordinated from Chalmers with Jari Kinaret at the helm. As the project reaches the ten-year mark, expectations have been realised, a strong European research field on graphene has been established, and the journey will continue.

‘Have we delivered what we promised?’ asks Graphene Flagship Director Jari Kinaret from his office in the physics department at Chalmers, overlooking the skyline of central Gothenburg.

‘Yes, we have delivered more than anyone had a right to expect,’ [emphasis mine] he says. ‘In our analysis for the conclusion of the project, we read the documents that were written at the start. What we promised then were over a hundred specific things. Some of them were scientific and technological promises, and they have all been fulfilled. Others were for specific applications, and here 60–70 per cent of what was promised has been delivered. We have also delivered applications we did not promise from the start, but these are more difficult to quantify.’

The autumn of 2013 saw the launch of the massive ten-year Science, Technology and Innovation research programme on graphene and other related two-dimensional materials. Joint funding from the European Commission and EU Member States totalled a staggering €1,000 million. A decade later, it is clear that the large-scale initiative has succeeded in its endeavours. According to a report by the research institute WifOR, the Graphene Flagship will have created a total contribution to GDP of €3,800 million and 38,400 new jobs in the 27 EU countries between 2014 and 2030.

Exceeded expectations

‘Per euro invested and compared to other EU projects, the flagship has performed 13 times better than expected in terms of patent applications, and seven times better for scientific publications. We have 17 spin-off companies that have received over €130 million in private funding – people investing their own money is a real example of trust in the fact that the technology works,’ says Jari Kinaret.

He emphasises that the long time span has been crucial in developing the concepts of the various flagship projects.

‘When it comes to new projects, the ability to work on a long timescale is a must and is more important than a large budget. It takes a long time to build trust, both in one another within a team and in the technology on the part of investors, industry and the wider community. The size of the project has also been significant. There has been an ecosystem around the material, with many graphene manufacturers and other organisations involved. It builds robustness, which means you have the courage to invest in the material and develop it.’

From lab to application

In 2010, Andre Geim and Konstantin Novoselov of the University of Manchester won the Nobel Prize in Physics for their pioneering experiments isolating the ultra-light and ultra-thin material graphene. It was the first known 2D material and stunned the world with its ‘exceptional properties originating in the strange world of quantum physics’ according to the Nobel Foundation’s press release. Many potential applications were identified for this electrically conductive, heat-resistant and light-transmitting material. Jari Kinaret’s research team had been exploring the material since 2006, and when Kinaret learned of the European Commission’s call for a ten-year research programme, it prompted him to submit an application. The Graphene Flagship was initiated to ensure that Europe would maintain its leading position in graphene research and innovation, and its coordination and administration fell to Chalmers.

Is it a staggering thought that your initiative became the biggest EU research project of all time?

‘The fact that the three-minute presentation I gave at a meeting in Brussels has grown into an activity in 22 countries, with 170 organisations and 1,300 people involved … You can’t think about things like that because it can easily become overwhelming. Sometimes you just have to go for it,’ says Jari Kinaret.

One of the objectives of the Graphene Flagship was to take the hopes for this material and move them from lab to application. What has happened so far?

‘We are well on track with 100 products priced and on their way to the market. Many of them are business-to-business products that are not something we ordinary consumers are going to buy, but which may affect us indirectly.’

‘It’s important to remember that getting products to the application stage is a complex process. For a researcher, it may take ten working prototypes; for industry, ten million. Everything has to click into place, on a large scale. All components must work identically and in exactly the same way, and be compatible with existing production in manufacturing as you cannot rebuild an entire factory for a new material. In short, it requires reliability, reproducibility and manufacturability.’

Applications in a wide range of areas

Graphene’s extraordinary properties are being used to deliver the next generation of technologies in a wide range of fields, such as sensors for self-driving cars, advanced batteries, new water purification methods and sophisticated instruments for use in neuroscience. When asked if there are any applications that Jari Kinaret himself would like to highlight, he mentions, among other things, the applications that are underway in the automotive industry – such as sensors to detect obstacles for self-driving cars. Thanks to graphene, they will be so cost-effective to produce that it will be possible to make them available in more than just the most expensive car models.

He also highlights the aerospace industry, where a graphene material for removing ice from aircraft and helicopter wings is under development for the Airbus company. Another favourite, which he has followed from basic research to application, is the development of an air cleaner for Lufthansa passenger aircraft, based on a kind of ‘graphene foam’. Because graphene foam is very light, it can be heated extremely quickly. A pulse of electricity lasting one thousandth of a second is enough to raise the temperature to 300 degrees, thus killing micro-organisms and effectively cleaning the air in the aircraft.

He also mentions the Swedish company ABB, which has developed a graphene composite for circuit breakers in switchgear. These circuit breakers are used to protect the electricity network and must be safe to use. The graphene composite replaces the manual lubrication of the circuit breakers, resulting in significant cost savings.

‘We also see graphene being used in medical technology, but its application requires many years of testing and approval by various bodies. For example, graphene technology can more effectively map the brain before neurosurgery, as it provides a more detailed image. Another aspect of graphene is that it is soft and pliable. This means it can be used for electrodes that are implanted in the brain to treat tremors in Parkinson’s patients, without the electrodes causing scarring,’ says Jari Kinaret.

Coordinated by Chalmers

Jari Kinaret sees the fact that the EU chose Chalmers as the coordinating university as a favourable factor for the Graphene Flagship.

‘Hundreds of millions of SEK [Swedish kronor] have gone into Chalmers research, but what has perhaps been more important is that we have become well-known and visible in certain areas. We also have the 2D-Tech competence centre and the SIO Grafen programme, both funded by Vinnova and coordinated by Chalmers and Chalmers industriteknik respectively. I think it is excellent that Chalmers was selected, as there could have been too much focus on the coordinating organisation if it had been more firmly established in graphene research at the outset.’

What challenges have been encountered during the project?

‘With so many stakeholders involved, we are not always in agreement. But that is a good thing. A management book I once read said that if two parties always agree, then one is redundant. At the start of the project, it was also interesting to see the major cultural differences we had in our communications and that different cultures read different things between the lines; it took time to realise that we should be brutally straightforward in our communications with one another.’

What has it been like to have the coordinating role that you have had?

‘Obviously, I’ve had to worry about things an ordinary physics professor doesn’t have to worry about, like a phone call at four in the morning after the Brexit vote or helping various parties with intellectual property rights. I have read more legal contracts than I thought I would ever have to read as a professor. As a researcher, your approach when you go into a role is narrow and deep, here it was rather all about breadth. I would have liked to have both, but there are only 26 hours in a day,’ jokes Jari Kinaret.

New phase for the project and EU jobs to come

A new assignment now awaits Jari Kinaret outside Chalmers as Chief Executive Officer of the EU initiative KDT JU (Key Digital Technologies Joint Undertaking, soon to become Chips JU), where industry and the public sector interact to drive the development of new electronic components and systems.

The Graphene Flagship may have reached its destination in its current form, but the work started is progressing in a form more akin to a flotilla. About a dozen projects will continue to live on under the auspices of the European Commission’s Horizon Europe programme. Chalmers is going to coordinate a smaller CSA project called GrapheneEU, where CSA stands for ‘Coordination and Support Action’. It will act as a cohesive force between the research and innovation projects that make up the next phase of the flagship, offering them a range of support and services, including communication, innovation and standardisation.

The Graphene Flagship is about to turn ten. If the project had been a ten-year-old child, what kind of child would it have been?

‘It would have been a very diverse organism. Different aspirations are beginning to emerge – perhaps it is adolescence that is approaching. In addition, within the project we have also studied other related 2D materials, and we found that there are 6,000 distinct materials of this type, of which only about 100 have been studied. So, it’s the younger siblings that are starting to arrive now.’

Facts about the Graphene Flagship:

The Graphene Flagship is the first European flagship for future and emerging technologies. It has been coordinated and administered from the Department of Physics at Chalmers, and as the project enters its next phase, GrapheneEU, coordination will continue to be carried out by staff currently working on the flagship led by Chalmers Professor Patrik Johansson.

The project has proved highly successful in developing graphene-based technology in Europe, resulting in 17 new companies, around 100 new products, nearly 500 patent applications and thousands of scientific papers. All in all, the project has exceeded the EU’s targets for utilisation from research projects by a factor of ten. According to the assessment of the EU research programme Horizon 2020, Chalmers’ coordination of the flagship has been identified as one of the key factors behind its success.

Graphene Week will be held at the Svenska Mässan in Gothenburg from 4 to 8 September 2023. Graphene Week is an international conference, which also marks the finale of the ten-year anniversary of the Graphene Flagship. The conference will be jointly led by academia and industry – Professor Patrik Johansson from Chalmers and Dr Anna Andersson from ABB – and is expected to attract over 400 researchers from Sweden, Europe and the rest of the world. The programme includes an exhibition, press conference and media activities, special sessions on innovation, diversity and ethics, and several technical sessions. The full programme is available here.

Read the press release on Graphene Week from 4 to 8 September and the overall results of the Graphene Flagship. …

Ten years and €1B each. Congratulations to the organizers on such massive undertakings. As for whether (and how) they’ve been successful, I imagine time will tell.

Neuromorphic engineering: an overview

In a February 13, 2023 essay, Michael Berger who runs the Nanowerk website provides an overview of brainlike (neuromorphic) engineering.

This essay is the most extensive piece I’ve seen on Berger’s website and it covers everything from the reasons why scientists are so interested in mimicking the human brain to specifics about memristors. Here are a few excerpts (Note: Links have been removed),

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.

Neuromorphic engineering has numerous applications in diverse areas such as robotics, computer vision, speech recognition, and artificial intelligence. The aspiration is that brain-like computing systems will give rise to machines better equipped to tackle complex and uncertain tasks, which currently remain beyond the reach of conventional computers.

It is essential to distinguish between neuromorphic engineering and neuromorphic computing, two related but distinct concepts. Neuromorphic computing represents a specific application of neuromorphic engineering, involving the utilization of hardware and software systems designed to process information in a manner akin to human brain function.

One of the major obstacles in creating brain-inspired computing systems is the vast complexity of the human brain. Unlike traditional computers, the brain operates as a nonlinear dynamic system that can handle massive amounts of data through various input channels, filter information, store key information in short- and long-term memory, learn by analyzing incoming and stored data, make decisions in a constantly changing environment, and do all of this while consuming very little power.

The Human Brain Project [emphasis mine], a large-scale research project launched in 2013, aims to create a comprehensive, detailed, and biologically realistic simulation of the human brain, known as the Virtual Brain. One of the goals of the project is to develop new brain-inspired computing technologies, such as neuromorphic computing.

The Human Brain Project has been funded by the European Union (€1B over 10 years, starting in 2013 and sunsetting in 2023). From the Human Brain Project Media Invite,

The final Human Brain Project Summit 2023 will take place in Marseille, France, from March 28-31, 2023.

As the ten-year European Flagship Human Brain Project (HBP) approaches its conclusion in September 2023, the final HBP Summit will highlight the scientific achievements of the project at the interface of neuroscience and technology and the legacy that it will leave for the brain research community. …

One last excerpt from the essay,

Neuromorphic computing is a radical reimagining of computer architecture at the transistor level, modeled after the structure and function of biological neural networks in the brain. This computing paradigm aims to build electronic systems that attempt to emulate the distributed and parallel computation of the brain by combining processing and memory in the same physical location.

This is unlike traditional computing, which is based on von Neumann systems consisting of three different units: processing unit, I/O unit, and storage unit. This stored program architecture is a model for designing computers that uses a single memory to store both data and instructions, and a central processing unit to execute those instructions. This design, first proposed by mathematician and computer scientist John von Neumann, is widely used in modern computers and is considered to be the standard architecture for computer systems and relies on a clear distinction between memory and processing.
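The contrast can be made concrete with a toy model. Below is a minimal leaky integrate-and-fire (LIF) neuron sketch in Python; it is a hypothetical illustration of the “memory lives at the synapse, computation happens locally” idea, not code from Berger’s essay or from any particular neuromorphic chip, and all the parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_spikes, weights, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over T time steps.

    input_spikes: (T, N) binary array of presynaptic spikes
    weights:      (N,) synaptic weights -- the 'memory' sits at each synapse
    """
    v = 0.0          # membrane potential
    out = []
    for spikes in input_spikes:
        # Each synapse both stores its weight and contributes to the
        # computation in place -- no separate fetch from a memory unit.
        i_syn = float(np.dot(spikes, weights))
        # Leaky integration: the potential decays toward rest and
        # accumulates the synaptic current.
        v += dt * (-v / tau + i_syn)
        if v >= v_thresh:   # threshold crossing -> emit an output spike
            out.append(1)
            v = v_reset
        else:
            out.append(0)
    return out

rng = np.random.default_rng(0)
spikes_in = (rng.random((100, 32)) < 0.2).astype(float)  # sparse input spikes
weights = rng.normal(0.1, 0.05, size=32)
out = simulate_lif(spikes_in, weights)
print(sum(out), "output spikes in 100 steps")
```

In a von Neumann machine the weights would live in a separate memory and be shuttled across a bus each step; in a neuromorphic design each synapse holds its weight where the computation happens, which is the bottleneck the diagram below depicts.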

I found the diagram Berger included with von Neumann’s design contrasted with a neuromorphic design illuminating,

A graphical comparison of the von Neumann and neuromorphic architectures. Left: The von Neumann architecture used in traditional computers. The red lines depict the data communication bottleneck in the von Neumann architecture. Right: A graphical representation of a general neuromorphic architecture. In this architecture, processing and memory are decentralized across different neuronal units (the yellow nodes) and synapses (the black lines connecting the nodes), creating a naturally parallel computing environment via the mesh-like structure. (Source: DOI: 10.1109/IS.2016.7737434) [downloaded from https://www.nanowerk.com/spotlight/spotid=62353.php]

Berger offers a very good overview and I recommend reading his February 13, 2023 essay on neuromorphic engineering with one proviso, Note: A link has been removed,

Many researchers in this field see memristors as a key device component for neuromorphic engineering. Memristor – or memory resistor – devices are non-volatile nanoelectronic memory devices that were first theorized [emphasis mine] by Leon Chua in the 1970s. However, it was some thirty years later that the first practical device was fabricated in 2008 by a group led by Stanley Williams [sometimes cited as R. Stanley Williams] at HP Research Labs.

Chua wasn’t the first, as he himself has noted. Chua arrived at his theory independently in the 1970s, but Bernard Widrow theorized what he called a ‘memistor’ in the 1960s. In fact, my May 22, 2012 posting, “Memristors: they are older than you think,” featured the article “Two centuries of memristors” by Themistoklis Prodromakis, Christofer Toumazou and Leon Chua, published in Nature Materials.
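To give a flavour of what such a device does, here is a rough Python sketch of the linear-drift memristor model that the HP Labs group used to describe their 2008 device: resistance depends on an internal state variable that integrates the current passed through it, so the device “remembers” past charge. The parameter values are illustrative assumptions, not figures from Berger’s essay or the HP paper.

```python
import numpy as np

R_ON, R_OFF = 100.0, 16e3    # fully-doped / undoped resistances (ohms)
D = 10e-9                     # device thickness (m)
MU = 1e-14                    # dopant mobility (m^2 s^-1 V^-1)

def simulate_memristor(t, v_drive, w0=0.1 * D):
    """Return (current, resistance) arrays for a drive voltage sampled at t."""
    w = w0                    # width of the doped (low-resistance) region
    i_hist, m_hist = [], []
    dt = t[1] - t[0]
    for v in v_drive:
        # Resistance is a mix of the doped and undoped regions.
        m = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v / m             # Ohm's law at this instant
        # Linear dopant drift: the state integrates the current, which is
        # what gives the device its memory.
        w += dt * MU * (R_ON / D) * i
        w = min(max(w, 0.0), D)   # clamp to physical bounds
        i_hist.append(i)
        m_hist.append(m)
    return np.array(i_hist), np.array(m_hist)

t = np.linspace(0, 2.0, 20000)
v = 1.0 * np.sin(2 * np.pi * 1.0 * t)   # 1 V, 1 Hz sinusoidal drive
i, m = simulate_memristor(t, v)
print(f"resistance swings between {m.min():.0f} and {m.max():.0f} ohms")
```

Plotting `i` against `v` would trace the pinched hysteresis loop that is the memristor’s signature: the same voltage produces different currents depending on the device’s history.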

Most of us try to get it right but we don’t always succeed. It’s always good practice to read everyone (including me) with a little skepticism.

Graphene goes to the moon

The people behind the European Union’s Graphene Flagship programme (if you need a brief explanation, keep scrolling down to the “What is the Graphene Flagship?” subhead) and the United Arab Emirates have got to be very excited about the announcement made in a November 29, 2022 news item on Nanowerk. Note: Canadians, too, have reason to be excited as of April 3, 2023, when it was announced that Canadian astronaut Jeremy Hansen was selected to be part of the team on NASA’s [US National Aeronautics and Space Administration] Artemis II mission to orbit the moon (April 3, 2023 CBC news online article by Nicole Mortillaro).

Graphene Flagship Partners University of Cambridge (UK) and Université Libre de Bruxelles (ULB, Belgium) paired up with the Mohammed bin Rashid Space Centre (MBRSC, United Arab Emirates), and the European Space Agency (ESA) to test graphene on the Moon. This joint effort sees the involvement of many international partners, such as Airbus Defense and Space, Khalifa University, Massachusetts Institute of Technology, Technische Universität Dortmund, University of Oslo, and Tohoku University.

The Rashid rover is planned to be launched on 30 November 2022 [Note: the launch appears to have occurred on December 11, 2022; keep scrolling for more about that] from Cape Canaveral in Florida and will land on a geologically rich and, as yet, only remotely explored area on the Moon’s nearside – the side that always faces the Earth. During one lunar day, equivalent to approximately 14 days on Earth, Rashid will move on the lunar surface investigating interesting geological features.

A November 29, 2022 Graphene Flagship press release (also on EurekAlert), which originated the news item, provides more details,

The Rashid rover wheels will be used for repeated exposure of different materials to the lunar surface. As part of this Material Adhesion and abrasion Detection experiment, graphene-based composites on the rover wheels will be used to understand if they can protect spacecraft against the harsh conditions on the Moon, and especially against regolith (also known as ‘lunar dust’).

Regolith is made of extremely sharp, tiny and sticky grains and, since the Apollo missions, it has been one of the biggest challenges lunar missions have had to overcome. Regolith is responsible for mechanical and electrostatic damage to equipment, and is therefore also hazardous for astronauts. It clogs spacesuits’ joints, obscures visors, erodes spacesuits and protective layers, and is a potential health hazard.  

University of Cambridge researchers from the Cambridge Graphene Centre produced graphene/polyether ether ketone (PEEK) composites. The interaction of these composites with the Moon regolith (soil) will be investigated. The samples will be monitored via an optical camera, which will record footage throughout the mission. ULB researchers will gather information during the mission and suggest adjustments to the path and orientation of the rover. Images obtained will be used to study the effects of the Moon environment and the regolith abrasive stresses on the samples.

This moon mission comes soon after the ESA announcement of the 2022 class of astronauts, including the Graphene Flagship’s own Meganne Christian, a researcher at Graphene Flagship Partner the Institute of Microelectronics and Microsystems (IMM) at the National Research Council of Italy.

“Being able to follow the Moon rover’s progress in real time will enable us to track how the lunar environment impacts various types of graphene-polymer composites, thereby allowing us to infer which of them is most resilient under such conditions. This will enhance our understanding of how graphene-based composites could be used in the construction of future lunar surface vessels,” says Sara Almaeeni, MBRSC science team lead, who designed Rashid’s communication system.

“New materials such as graphene have the potential to be game changers in space exploration. In combination with the resources available on the Moon, advanced materials will enable radiation protection, electronics shielding and mechanical resistance to the harshness of the Moon’s environment. The Rashid rover will be the first opportunity to gather data on the behavior of graphene composites within a lunar environment,” says Carlo Iorio, Graphene Flagship Space Champion, from ULB.

Leading up to the Moon mission, a variety of inks containing graphene and related materials, such as conducting graphene, insulating hexagonal boron nitride and graphene oxide, semiconducting molybdenum disulfide, prepared by the University of Cambridge and ULB were also tested on the MAterials Science Experiment Rocket 15 (MASER 15) mission, successfully launched on the 23rd of November 2022 from the Esrange Space Center in Sweden. This experiment, named ARLES-2 (Advanced Research on Liquid Evaporation in Space) and supported by European and UK space agencies (ESA, UKSA) included contributions from Graphene Flagship Partners University of Cambridge (UK), University of Pisa (Italy) and Trinity College Dublin (Ireland), with many international collaborators, including Aix-Marseille University (France), Technische Universität Darmstadt (Germany), York University (Canada), Université de Liège (Belgium), University of Edinburgh and Loughborough.

This experiment will provide new information about the printing of GRM inks in weightless conditions, contributing to the development of new additive manufacturing procedures in space, such as 3D printing. Such procedures are key for space exploration, during which replacement components are often needed and could be manufactured from functional inks.

“Our experiments on graphene and related materials deposition in microgravity pave the way for additive manufacturing in space. The study of the interaction of Moon regolith with graphene composites will address some key challenges brought about by the harsh lunar environment,” says Yarjan Abdul Samad, from the Universities of Cambridge and Khalifa, who prepared the samples and coordinated the interactions with the United Arab Emirates.

“The Graphene Flagship is spearheading the investigation of graphene and related materials (GRMs) for space applications. In November 2022, we had the first member of the Graphene Flagship appointed to the ESA astronaut class. We saw the launch of a sounding rocket to test printing of a variety of GRMs in zero gravity conditions, and the launch of a lunar rover that will test the interaction of graphene-based composites with the Moon’s surface. Composites, coatings and foams based on GRMs have been at the core of the Graphene Flagship investigations since its beginning. It is thus quite telling that, leading up to the Flagship’s 10th anniversary, these innovative materials are now to be tested on the lunar surface. This is timely, given the ongoing effort to bring astronauts back to the Moon, with the aim of building lunar settlements. When combined with polymers, GRMs can tailor the mechanical, thermal and electrical properties of their host matrices. These pioneering experiments could pave the way for widespread adoption of GRM-enhanced materials for space exploration,” says Andrea Ferrari, Science and Technology Officer and Chair of the Management Panel of the Graphene Flagship.

Caption: The MASER 15 launch. Credit: John-Charles Dupin

A pioneering graphene work and a first for the Arab World

A December 11, 2022 news item on Alarabiya news (and on CNN) describes the ‘graphene’ launch, which also marked the Arab world’s first mission to the moon,

The United Arab Emirates’ Rashid Rover – the Arab world’s first mission to the Moon – was launched on Sunday [December 11, 2022], the Mohammed bin Rashid Space Center (MBRSC) announced on its official Twitter account.

The launch came after it was previously postponed for “pre-flight checkouts.”

A SpaceX Falcon 9 rocket carrying the UAE’s Rashid rover successfully took off from Cape Canaveral, Florida.

The Rashid rover – built by Emirati engineers from the UAE’s Mohammed bin Rashid Space Center (MBRSC) – is to be sent to regions of the Moon unexplored by humans.

What is the Graphene Flagship?

In 2013, the Graphene Flagship was chosen as one of two FET (Future and Emerging Technologies) funding projects (the other being the Human Brain Project) each receiving €1 billion to be paid out over 10 years. In effect, it’s a science funding programme specifically focused on research, development, and commercialization of graphene (a two-dimensional [it has length and width but no depth] material made of carbon atoms).

You can find out more about the flagship and about graphene here.

Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

In a September 15, 2022 announcement (received via email), the Canadian Science Policy Centre (CSPC) described an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding its consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy. 

Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament, regulating the use of AI in our society. In this panel, we will discuss how AI and big data are affecting us, their impact on society, and how the new regulations affect us.

Date: Sep 28 Time: 12:00 pm – 1:30 pm EDT Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country, from a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA)- all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

…

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

…

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)

Graphene Week (September 5 – 9, 2022) is a celebration of 10 years of the Graphene Flagship

Back in 2013, the European Union announced two huge targeted research investments of €1B each, for the Graphene Flagship and the Human Brain Project, to be distributed over 10 years. (I have an overview of the Graphene Flagship’s high points from 2013-15 in my April 22, 2016 posting.)

Now at the ten year mark and its final days, the Graphene Flagship is celebrating 10 years with a Graphene Week (from an August 30, 2022 Graphene Flagship press release on EurekAlert),

Graphene Week is a celebration of 10 years of the Graphene Flagship, a European Commission funded research project worth over €1 billion in funding. Held at BMW Welt — the exhibition space of one of the Graphene Flagship’s industrial partners based in Germany — the conference includes a comprehensive program of speakers, exhibitions, posters and a free pavilion.

The program includes a session on the European Chip Act, a notable point of debate for the continent. The act promises to mobilise more than €43 billion of both public and private investments to alleviate the global chip shortage. Graphene Week will demonstrate the potential of graphene-enabled alternatives to traditional semiconductors with the findings of the 2D-Experimental Pilot Line (2D-EPL).

The 2D-EPL is a €20 million project to integrate 2D materials into silicon wafers. The project has recently completed its first multi-project wafer (MPW) run, producing graphene integrated silicon wafers to academic and industrial customers.

During the conference, Max Lemme of AMO GmbH in Germany and Sanna Arpiainen of VTT Finland will discuss this subject along with the European Commission’s Thomas Skordas, Deputy Director General of DG CNECT and Bert De Colvenaer, Executive Director, KDT Joint Undertaking. Attendees can find the full program here.

The conference covers a large range of topics: from composites and medicine, to electronics and sensors. Beyond fundamental research, the talks by industry experts and European scientists will explore how graphene and related materials are disrupting critical European industries.

Graphene Week is co-chaired by Georg Duesberg from Bundeswehr University Munich and Elmar Bonaccurso from Airbus Germany. In addition to Airbus, representatives from Lufthansa and other partners from the AEROGrAFT project will be in attendance, showcasing their graphene air filtration application for aircraft.

Graphene Week will also host its Graphene Innovation Forum, a dedicated space for scientists to meet those in industry. Interactive panel discussions with industrial representatives will dive into future trends of graphene applications. The Innovation forum will feature speakers from both the Graphene Flagship’s large industrial partners including Medica, Lufthansa, Nokia and Airbus and smaller companies including Graphene Flagship spin-offs Emberion, BeDimensional and Qurv.

The Open Forum will collate some of the leading experts of the Graphene Flagship for a panel discussion on the success of graphene research and innovation where the audience is encouraged to ask questions. And the Diversity in Graphene initiative will offer a panel discussion focused on career development and professional use of social media.

The Graphene Flagship welcomes the public to explore the Graphene Pavilion in BMW Welt. The exhibition will showcase applications for graphene for cars, planes, phones and cities, together with product demos and videos. This pavilion will be free and open to the public from 9am on Friday 9 September to 6pm on Sunday 11 September.

“The Graphene Flagship is one of the largest ever EU projects, forming a network of 171 academic and industrial partners from 22 countries,” explained Jari Kinaret, Director of the Graphene Flagship. “In the 17th edition, Graphene Week provides an opportunity to demonstrate the successes of the project and the ongoing legacy it will have on Europe’s industry. We look forward to welcoming our academic and industrial partners to join us in Munich for this celebration.”

More information on Graphene Week, access to the speaker line up and full scientific program can be found on the Graphene Flagship website. Registration provides access to all scientific sessions, sponsored sessions, access to the exhibition, conference material and more. To register click here.

This is the BMW Welt,

Looks like something out of a science fiction movie, eh?

You can find (Graphene Flagship spinoff companies), Emberion website here, BeDimensional website here, and Qurv Technologies website here.

Save energy with neuromorphic (brainlike) hardware

It seems the appetite for computing power is bottomless, which presents a problem in a world where energy resources are increasingly constrained. A May 24, 2022 news item on ScienceDaily announces research into neuromorphic computing which hints the energy efficiency long promised by the technology may be realized in the foreseeable future,

For the first time, TU Graz’s [Graz University of Technology; Austria] Institute of Theoretical Computer Science and Intel Labs demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy running on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to those in the biological brain.

Rich Uhlig, managing director of Intel Labs, holds one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

A May 24, 2022 Graz University of Technology (TU Graz) press release (also on EurekAlert), which originated the news item, delves further into the research, Note: Links have been removed,

The research was funded by The Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results of the research are published in the research paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” [sic] (DOI 10.1038/s42256-022-00480-w), which is published in Nature Machine Intelligence.

Human brain as a role model

Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.

In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.

Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

“Intel’s Loihi research chips promise to bring gains in AI, especially by lowering their high energy cost,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by re-thinking their implementation from the perspective of biology.”

Mimicking human short-term memory

In their neuromorphic network, the group reproduced a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral supervisor at the Institute of Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”

Direct proof is lacking because these internal variables cannot yet be measured, but the upshot is that the network only needs to test which neurons are currently fatigued in order to reconstruct the information it previously processed. In other words, previous information is stored in the non-activity of neurons, and non-activity consumes the least energy.
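The fatigue mechanism described above is commonly modelled as an adaptive firing threshold: a neuron that has just spiked needs stronger input to spike again, and that slowly decaying adaptation variable silently "remembers" recent activity. As an illustration only (my sketch of a generic adaptive leaky integrate-and-fire neuron, not the paper's code; all parameter values are made up), one time step might look like:

```python
import numpy as np

def alif_step(v, a, inp, tau_v=20.0, tau_a=200.0, beta=1.6, v_th=1.0, dt=1.0):
    """One step of an adaptive leaky integrate-and-fire (fatiguing) neuron.

    v   : membrane potential
    a   : adaptation ("fatigue") variable; raises the firing threshold
    inp : input current this step
    """
    v = v + dt / tau_v * (-v) + inp          # leaky integration of input
    threshold = v_th + beta * a              # fatigued neurons need more input to fire
    spike = 1.0 if v >= threshold else 0.0
    if spike:
        v = 0.0                              # reset membrane after a spike
    a = a * np.exp(-dt / tau_a) + spike      # adaptation decays slowly, jumps on spikes
    return v, a, spike

# A neuron that fired recently keeps an elevated 'a' for roughly tau_a steps,
# storing the fact that it was active without emitting further spikes.
v, a = 0.0, 0.0
for t in range(5):
    v, a, s = alif_step(v, a, inp=0.6)
print(a > 0)  # True: the neuron fired once, and its fatigue trace persists
```

Reading out which neurons currently carry a large adaptation variable is, in this picture, how earlier information can be recovered without any ongoing spiking.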

Symbiosis of recurrent and feed-forward network

The researchers link two types of deep learning networks for this purpose. Feedback neural networks are responsible for “short-term memory.” Many such so-called recurrent modules filter possible relevant information out of the input signal and store it. A feed-forward network then determines which of the relationships found are important for solving the task at hand. Meaningless relationships are screened out, and the neurons fire only in those modules where relevant information has been found. This process ultimately leads to energy savings.
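The division of labour described above, recurrent modules that hold candidate information in a running state and a feed-forward readout that selects what matters for the task, can be sketched in a few lines. This is purely my illustrative code, not the authors' spiking implementation; the layer sizes and random weights are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Recurrent module: maintains a hidden state ("short-term memory")
W_in  = rng.normal(size=(16, 8)) * 0.3   # input -> hidden
W_rec = rng.normal(size=(16, 16)) * 0.1  # hidden -> hidden (the recurrence)
W_out = rng.normal(size=(4, 16)) * 0.3   # feed-forward readout weights

def step(h, x):
    """Recurrent update: fold the new input x into the stored state h."""
    return np.tanh(W_in @ x + W_rec @ h)

def readout(h):
    """Feed-forward stage: pick out the task-relevant content of the state."""
    scores = W_out @ h
    return scores.argmax()               # e.g. the answer to a question about a story

h = np.zeros(16)
for x in rng.normal(size=(10, 8)):       # a 10-step input "sentence"
    h = step(h, x)                       # memory accumulates step by step
answer = readout(h)                      # queried only once, at the end
```

The energy argument in the press release maps onto this split: the recurrent state is carried cheaply (in the spiking version, largely by inactive, fatigued neurons), and the feed-forward readout fires only where relevant information was found.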

“Recurrent neural structures are expected to provide the greatest gains for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to facilitate the fast, sparse and unpredictable patterns of network activity that we observe in the brain and need for the most energy efficient AI applications.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. For this purpose, the project is creating a permanent digital research infrastructure, EBRAINS. This research work is anchored in the Fields of Expertise Human and Biotechnology and Information, Communication & Computing, two of the five Fields of Expertise of TU Graz.

Here’s a link to and a citation for the paper,

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware by Arjun Rao, Philipp Plank, Andreas Wild & Wolfgang Maass. Nature Machine Intelligence (2022) DOI: https://doi.org/10.1038/s42256-022-00480-w Published: 19 May 2022

This paper is behind a paywall.

For anyone interested in the EBRAINS project, here’s a description from their About page,

EBRAINS provides digital tools and services which can be used to address challenges in brain research and brain-inspired technology development. Its components are designed with, by, and for researchers. The tools assist scientists to collect, analyse, share, and integrate brain data, and to perform modelling and simulation of brain function.

EBRAINS’ goal is to accelerate the effort to understand human brain function and disease.

This EBRAINS research infrastructure is the entry point for researchers to discover EBRAINS services. The services are being developed and powered by the EU-funded Human Brain Project.

You can register to use the EBRAINS research infrastructure HERE

One last note: the Human Brain Project is a major European Union (EU)-funded science initiative (€1B) announced in 2013, with funding paid out over 10 years.