First round of seed funding announced for NSF (US National Science Foundation) Institute for Trustworthy AI in Law & Society (TRAILS)

Yesterday (February 21, 2024), I published a January 2024 US National Science Foundation (NSF) funding announcement for the TRAILS (Trustworthy AI in Law & Society) Institute; I’m following up with an announcement about the initiative’s first round of seed funding.

From an undated ‘story’ by Tom Ventsias on the initiative’s website (published January 24, 2024 as a University of Maryland news release on EurekAlert),

The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the artificial intelligence (AI) development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants announced on January 24, 2024—ranging from $100K to $150K apiece and totaling just over $1.5 million—were awarded to interdisciplinary teams of faculty associated with the institute. Funded projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots that can improve autism-specific support at home, and exploring how people use and rely upon AI-generated language translation systems.

All eight projects fall under the broader mission of TRAILS, which is to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact.

They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions. This includes the University of Maryland’s expertise in computing and human-computer interaction; George Washington University’s strengths in systems engineering and AI as it relates to law and governance; Morgan State University’s work in addressing bias and inequity in AI; and Cornell University’s research in human behavior and decision-making.

“NIST and NSF’s support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at George Washington University who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

Further discussions were held at a TRAILS faculty retreat to identify seed grant guidelines and collaborative themes that mirror TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.

“Some of the funded projects are taking a fresh look at ideas we may have already been working on individually, and others are taking an entirely new approach to timely, pressing issues involving AI and machine learning,” said Virginia Byrne, an assistant professor of higher education & student affairs at Morgan State who is leading TRAILS activities on that campus and who served on the seed grant review committee.

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

The eight projects selected for the first round of TRAILS seed funding are:

Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, while provider shortages and financial burdens have raised challenges—particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the assistive technology, and assess the impact of teaming to promote more trust in human-robot collaborative settings.

Soheil Feizi from UMD and Robert Brauneis from GW will investigate various issues surrounding text-to-image [emphasis mine] generative AI models like Stable Diffusion, DALL-E 2, and Midjourney, focusing on myriad legal, aesthetic and computational aspects that are currently unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine if specific artworks, especially rare and unique ones, have already been used to train AI models.

Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.

Marine Carpuat and Ge Gao from UMD will explore “mental models”—how humans perceive things—for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.

Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which involves algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate fairness of datasets, and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.

Lorien Abroms and David Broniatowski from GW will recruit smokers to study the reliability of using generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine the acceptability by smokers and their perceptions of trust in using this rapidly evolving technology for help to quit smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation and/or modifying other health behaviors.

Adam Aviv from GW and Michelle Mazurek from UMD will examine bias, unfairness and untruths such as sexism, racism and other forms of misrepresentation that come out of certain AI and machine learning systems. Though some systems have public warnings of potential biases, the researchers want to explore how users understand these warnings, if they recognize how biases may manifest themselves in the AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.

Susan Ariel Aaronson and David Broniatowski from GW plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords, and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.

Yes, things are moving quickly where AI is concerned. Soheil Feizi and Robert Brauneis are investigating text-to-image models and, since the funding announcement in early January 2024, text-to-video has arrived (OpenAI’s Sora was previewed February 15, 2024). I wonder if that will be added to the project.

One more comment: the project by Huaishu Peng, Ge Gao, and Malte Jung on “… trust-building in embodied AI systems …” brings to mind Elon Musk’s stated goal of using brain implants for “human/AI symbiosis.” (I have more about that in an upcoming post.) Hopefully, Susan Ariel Aaronson and David Broniatowski’s proposed website for policymakers will be able to keep up with what’s happening in the field of AI, including research on the impact of private investments primarily designed to generate profits.

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but also in ethics.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million program in collaboration with five philanthropic partners that seeks to ensure ethical, legal, community and societal considerations are embedded in the lifecycle of technology’s creation and use. The Responsible Design, Development and Deployment of Technologies (ReDDDoT) program aims to help create technologies that promote the public’s wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad impacts on society,” said NSF Director Sethuraman Panchanathan. “As discoveries and innovations are translated to practice, it is essential that we engage and enable diverse communities to participate in this work. NSF and its philanthropic partners share a strong commitment to creating a comprehensive approach for co-design through soliciting community input, incorporating community values and engaging a broad array of academic and professional voices across the lifecycle of technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary, multi-sector teams that examine and demonstrate the principles, methodologies and impacts associated with responsible design, development and deployment of technologies, especially those specified in the “CHIPS and Science Act of 2022.” In addition to NSF, the program is funded and supported by the Ford Foundation, the Patrick J. McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to advance human progress, and the danger unaccountable technology poses to social justice, the ReDDDoT program serves as both a collaboration and a covenant between philanthropy and government to center public interest technology into the future of progress,” said Darren Walker, president of the Ford Foundation. “This $16 million initiative will cultivate expertise from public interest technologists across sectors who are rooted in community and grounded by the belief that innovation, equity and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring together key stakeholders to better inform practices for the design, development, and deployment of technologies.
* Educating and training the science, technology, engineering, and mathematics workforce on approaches to responsible design, development, and deployment of technologies.
* Accelerating pathways to societal and economic benefits while developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and marginalized populations, to participate in all stages of technology development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning Grants, or the creation of Translational Research Coordination Networks, while Phase 2 solicits full project proposals. The initial areas of focus for 2024 include artificial intelligence, biotechnology or natural and anthropogenic disaster prevention or mitigation. Future iterations of the program may consider other key technology focus areas enumerated in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be shaped by ethical, legal, and societal considerations as well as community values – also drives the work of the Patrick J. McGovern Foundation to build a human-centered digital future for all. We’re pleased to support this partnership, committed to advancing the development of AI, biotechnology, and climate technologies that advance equity, sustainability, and justice.” – Vilas Dhar, President, Patrick J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology development is only accelerating. Too often, technological advances are not accompanied by discussion and design that considers negative impacts or unrealized potential. We’re excited to support ReDDDoT as an opportunity to uplift new and often forgotten perspectives that critically examine technology’s impact on civic life, and advance Siegel Family Endowment’s vision of technological change that includes and improves the lives of all people.” – Katy Knight, President and Executive Director of Siegel Family Endowment

Only eight months earlier, another big NSF funding project was announced, this time focused on AI and promoting trust. From a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’; here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI. This investment means NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institutes of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized in mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the security resilience of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology, as seen in the British television hit series Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

March 6, 2024 Simon Fraser University (SFU) event “The Planetary Politics of AI: Past, Present, and Future” in Vancouver, Canada

*Unsurprisingly, this event has been cancelled. More details at the end of this posting.* This is not a free event; they’ve changed the information about fees/no fees and how the fees are being assessed enough times for me to lose track; check the eventbrite registration page for the latest. Also, there will not be a publicly available recording of the event. (For folks who can’t afford the fees, there’s a contact listed later in this posting.)

First, here’s the “The Planetary Politics of AI: Past, Present, and Future” event information (from a January 10, 2024 Simon Fraser University (SFU) Public Square notice received via email),

The Planetary Politics of AI: Past, Present, and Future

Wednesday, March 6 [2024] | 7:00pm | In-person | Free [Note: This was an error.]

Generative AI has dominated headlines in 2023, but these new technologies rely on a dramatic increase in the extraction of data, human labor, and natural resources. With increasing media manipulation, polarizing discourse, and deep fakes, regulators are struggling to manage new AI.

On March 6th [2024], join renowned author and digital scholar Kate Crawford, as she sits in conversation with SFU’s Wendy Hui Kyong Chun. Together, they will discuss the planetary politics of AI, how we got here, and where it might be going.

A January 11, 2024 SFU Public Square notice (received via email) updates the information about how this isn’t a free event and offers an option for folks who can’t afford the price of a ticket, Note: Links have been removed,

The Planetary Politics of AI: Past, Present, and Future

Wednesday, March 6 | 7:00pm | In-person | Paid

Good morning,

We’ve been made aware that yesterday’s newsletter had a mistake, and we thank those who brought it to our attention. The March 6th [2024] event, The Planetary Politics of AI: Past, Present, and Future, is not a free event and has an admission fee for attendance. We apologize for the confusion.

Whenever possible, SFU Public Square’s events are free and open to all, to ensure that the event is as accessible as possible. For this event, there is a paid admission, with a General and Student/Senior Admission option. That being said, if the admission fees are a barrier to access, please email us at psqevent@sfu.ca. Exceptions can be made. [emphasis mine]

Thank you for your understanding!

“The Planetary Politics of AI: Past, Present, and Future” registration webpage on eventbrite offers more information about the speakers and logistics,

Date and time

Starts on Wed, Mar 6, 2024 7:00 PM PST

Location

Djavad Mowafaghian Cinema (SFU Vancouver — Woodward’s Building) 149 W Hastings Street Vancouver, BC V6B 1H7

[See registration page for link to map]

Refund Policy

Refunds up to 7 days before event

About the speakers

Kate Crawford is a leading international scholar of the social implications of artificial intelligence. She is a Research Professor at USC Annenberg in Los Angeles, a Senior Principal Researcher at MSR in New York, an Honorary Professor at the University of Sydney, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. Her latest book, Atlas of AI (Yale, 2021), won the Sally Hacker Prize from the Society for the History of Technology, the ASIS&T Best Information Science Book Award, and was named one of the best books in 2021 by New Scientist and the Financial Times.

Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System with Vladan Joler is in the permanent collection of the Museum of Modern Art in New York and the V&A in London, and was awarded the Design of the Year Award in 2019 and included in the Design of the Decades by the Design Museum of London. Her collaboration with the artist Trevor Paglen, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. She has advised policy makers in the United Nations, the White House, and the European Parliament, and she currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning. And in 2023, Kate Crawford was named to the TIME100 list as one of the most influential people in AI.

Wendy Hui Kyong Chun is Simon Fraser University’s Canada 150 Research Chair in New Media, Professor in the School of Communication, and Director of the Digital Democracies Institute. At the Institute, she leads the Mellon-funded Data Fluencies Project, which combines the interpretative traditions of the arts and humanities with critical work in the data sciences to express, imagine, and create innovative engagements with (and resistances to) our data-filled world.

She has studied both Systems Design Engineering and English Literature, which she combines and mutates in her research on digital media. She is the author of many books, including: Control and Freedom: Power and Paranoia in the Age of Fiber Optics (MIT, 2006), Programmed Visions: Software and Memory (MIT 2011), Updating to Remain the Same: Habitual New Media (MIT 2016), and Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (2021, MIT Press). She has been Professor and Chair of the Department of Modern Culture and Media at Brown University, where she worked for almost two decades and is currently a Visiting Professor. She is a Fellow of the Royal Society of Canada, and has also held fellowships from the Guggenheim Foundation, the ACLS, the American Academy in Berlin, and the Radcliffe Institute for Advanced Study at Harvard.

I’m wondering if the speakers will be discussing how visual and other arts impact their views on AI and vice versa. Both academics have an interest in the arts, as you can see in Crawford’s event bio. As for Wendy Hui Kyong Chun, in my April 23, 2021 posting, if you scroll down to her name (about 30% of the way down), you’ll see she was involved with “Multimedia & Electronic Music Experiments (MEME),” “History of Art and Architecture,” and “Theatre Arts and Performance Studies” at Brown University.

A February 12, 2024 SFU Public Square announcement (received via email), which includes a link to this Speaker’s Spotlight webpage (scroll down), suggests my speculation is incorrect,

For over two decades, Kate Crawford’s work has focused on understanding large scale data systems, machine learning and AI in the wider contexts of history, politics, labor, and the environment.

Her latest book, Atlas of AI (2021), explores artificial intelligence as the extractive industry of the 21st century, relying on vast amounts of data, human labour, and natural resources. …

One more biographical note about Crawford, she was mentioned here in an April 17, 2015 posting, scroll down to the National Film Board of Canada subhead, then down to Episode 5 ‘Big Data and its Algorithms’ of the Do Not Track documentary; she is one of the interviewees. I’m not sure if that documentary is still accessible online.

Back to the event, to get more details and/or buy a ticket, go to: “The Planetary Politics of AI: Past, Present, and Future” registration webpage.

Or, SFU is hosting its free 2023 Nobel Prize-themed lecture at Science World on March 6, 2024 (see my January 16, 2024 posting and scroll down about 30% of the way for more details).

*March 4, 2024: I found a cancellation notice on SFU’s The Planetary Politics of AI: Past, Present, and Future event page,

Unfortunately, this event has been cancelled due to extenuating circumstances. If you have questions or concerns, please email us at psqevent@sfu.ca. We apologize for any inconvenience this may cause and we thank you for your understanding.

My guess? They didn’t sell enough tickets. My assessment? Poor organization (e.g., the confusion over pricing) and poor marketing (e.g., no compelling reason to buy a ticket; neither participant is currently a celebrity or a hot property; the presentation was nothing unique or special, it was just a talk; the title was mildly interesting but not exciting or provocative; etc.).

Hype, hype, hype: Vancouver’s Frontier Collective represents local tech community at SxSW (South by Southwest®) 2024 + an aside

I wonder if Vancouver’s Mayor Ken Sim will be joining the folks at the giant culture/tech event known as South by Southwest® (SxSW) later in 2024. Our peripatetic mayor seems to enjoy traveling: to sports events (the FIFA World Cup in Qatar), to Los Angeles to convince producers of the hit television series “The Last of Us” to film its second season in Vancouver, and to Austin, Texas for SxSW 2023. Note: FIFA is Fédération internationale de football association or ‘International Association Football Federation’.

It’s not entirely clear why Mayor Sim’s presence was necessary at any of these events. In October 2023, he finished his first year in office; a business owner and accountant, Sim is best known for his home care business, “Nurse Next Door” and his bagel business, “Rosemary Rocksalt,” meaning he wouldn’t seem to have much relevant experience with sports and film events.

I gather Mayor Sim’s presence was part of the 2023 hype (for those who don’t know, it’s from ‘hyperbole’) where SxSW was concerned, from the Vancouver Day at SxSW 2023 event page,

Vancouver Day

Past(03/12/2023) 12:00PM – 6:00PM

FREE W/ RSVP | ALL AGES

Swan Dive

The momentum and vibrancy of Vancouver’s innovation industry can’t be stopped!

The full day event will see the Canadian city’s premier technology innovators, creative tech industries, and musical artists show why Vancouver is consistently voted one of the most desirable places to live in the world.

We will have talks/panels with the biggest names in VR/AR/Metaverse, AI, Web3, premier technology innovators, top startups, investors and global thought-leaders. We will keep Canada House buzzing throughout the day with activations/demos from top companies from Vancouver and based on our unique culture of wellness and adventure will keep guests entertained, and giveaways will take place across the afternoon.

The Canadian city is showing why Vancouver has become the second largest AR/VR/Metaverse ecosystem globally (with the highest concentration of 3D talent than anywhere in the world), a leader in Web3 with companies like Dapper Labs leading the way and becoming a hotbed in technology like artificial intelligence.

The Frontier Collective’s Vancouver’s Takeover of SXSW is a signature event that will enhance Vancouver as the Innovation and Creative Tech leader on the world stage. It is an opportunity for the global community to encounter cutting-edge ideas, network with other professionals who share a similar appetite for a forward focused experience and define their next steps.

Some of our special guests include City of Vancouver Mayor Ken Sim [emphasis mine], Innovation Commissioner of the Government of BC- Gerri Sinclair, Amy Peck of Endeavor XR, Tony Parisi of Lamina1 and many more.

In the evening, guests can expect a special VIP event with first-class musical acts, installations, wellness activations and drinks, and the chance to mingle with investors, top brands, and top business leaders from around the world.

To round out the event, a hand-picked roster of Vancouver musicians will keep guests dancing late into the night.

This is from Mayor Sim’s Twitter (now X) feed, Note: The photographs have not been included,

Mayor Ken Sim @KenSimCity: Another successful day at #SXSW2023 showcasing Vancouver and British Columbia while connecting with creators, innovators, and entrepreneurs from around the world! #vanpoli #SXSW


Did he really need to be there?

2024 hype at SxSW and Vancouver’s Frontier Collective

New year and same hype but no Mayor Sim? From a January 22, 2024 article by Daniel Chai for the Daily Hive, Note: A link has been removed,

Frontier Collective, a coalition of Vancouver business leaders, culture entrepreneurs, and community builders, is returning to the South by Southwest (SXSW) Conference next month to showcase the city’s tech innovation on the global stage.

The first organization to formally represent and promote the region’s fastest-growing tech industries, Frontier Collective is hosting the Vancouver Takeover: Frontiers of Innovation from March 8 to 12 [2024].

According to Dan Burgar, CEO and co-founder of Frontier Collective, the showcase is not just about presenting new advancements but is also an invitation to the world to be part of a boundary-transcending journey.

“This year’s Vancouver Takeover is more than an event; it’s a beacon for the brightest minds and a celebration of the limitless possibilities that emerge when we dare to innovate together.”

Speakers lined up for the SXSW Vancouver Takeover in Austin, Texas, include executives from Google, Warner Bros, Amazon, JP Morgan, Amazon, LG, NTT, Newlab, and the Wall Street Journal.

“The Frontier Collective is excited to showcase a new era of technological innovation at SXSW 2024, building on the success of last year’s Takeover,” added Natasha Jaswal, VP of operations and events of Frontier Collective, in a statement. “Beyond creating a captivating event; its intentional and curated programming provides a great opportunity for local companies to gain exposure on an international stage, positioning Vancouver as a global powerhouse in frontier tech innovation.”

Here’s the registration page if you want to attend the Frontiers of Innovation Vancouver Takeover at SxSW 2024,

Join us for a curated experience of music, art, frontier technologies and provocative panel discussions. We are organizing three major events, designed to ignite conversation and turn ideas into action.

We’re excited to bring together leaders from Vancouver and around the world to generate creative thinking at the biggest tech festival.

Let’s create the future together!

You have a choice of two parties and a day long event. Enjoy!

Who is the Frontier Collective?

The group announced itself in 2022, from a February 17, 2022 article in techcouver, Note: Links have been removed,

The Frontier Collective is the first organization to formally represent and advance the interests of the region’s fastest-growing industries, including Web3, the metaverse, VR/AR [virtual reality/augmented reality], AI [artificial intelligence], climate tech, and creative industries such as eSports [electronic sports], NFTs [non-fungible tokens], VFX [visual effects], and animation.

Did you know the Vancouver area currently boasts the world’s second largest virtual and augmented reality sector and hosts the globe’s biggest cluster of top VFX, video games and animation studios, as well as the highest concentration of 3D talent?

Did you know NFT technology was created in Vancouver and the city remains a top destination for blockchain and Web3 development?

Frontier Collective’s coalition of young entrepreneurs and business leaders wants to raise awareness of Vancouver’s greatness by promoting the region’s innovative tech industry on the world stage, growing investment and infrastructure for early-stage companies, and attracting diverse talent to Vancouver.

“These technologies move at an exponential pace. With the right investment and support, Vancouver has an immense opportunity to lead the world in frontier tech, ushering in a new wave of transformation, economic prosperity and high-paying jobs. Without backing from governments and leaders, these companies may look elsewhere for more welcoming environments.” said Dan Burgar, Co-founder and Head of the Frontier Collective. Burgar heads the local chapter of the VR/AR Association.

Their plan includes the creation of a 100,000-square-foot innovation hub in Vancouver to help incubate startups in Web3, VR/AR, and AI, and to establish the region as a centre for metaverse technology.

Frontier Collective’s team includes industry leaders at the Vancouver Economic Commission [emphasis mine; Under Mayor Sim and his majority City Council, the commission has been dissolved; see September 21, 2023 Vancouver Sun article “Vancouver scraps economic commission” by Tiffany Crawford], Collision Conference, Canadian incubator Launch, Invest Vancouver, and the BDC Deep Tech Fund.  These leaders continue to develop and support frontier technology in their own organizations and as part of the Collective.

Interestingly, a February 7, 2023 article by the editors of BC Business magazine seems to presage the Vancouver Economic Commission’s demise. Note: Links have been removed,

Last year, tech coalition Frontier Collective announced plans to position Vancouver as Canada’s tech capital by 2030. Specializing in subjects like Web3, the metaverse, VR/AR, AI and animation, it seems to be following through on its ambition, as the group is about to place Vancouver in front of a global audience at SXSW 2023, a major conference and festival celebrating tech, innovation and entertainment.  

Taking place in Austin, Texas from March 10-14 [2023], Vancouver Takeover is going to feature speakers, stories and activations, as well as opportunities for companies to connect with industry leaders and investors. Supported by local businesses like YVR Airport, Destination Vancouver, Low Tide Properties and others, Frontier is also working with partners from Trade and Invest BC, Telefilm and the Canadian Consulate. Attendees will spot familiar faces onstage, including the likes of Minister of Jobs, Economic Development and Innovation Brenda Bailey, Vancouver mayor Ken Sim [emphasis mine] and B.C. Innovation Commissioner Gerri Sinclair. 

That’s right, no mention of the Vancouver Economic Commission.

As for the Frontier Collective Team (accessed January 29, 2024), the list of ‘industry leaders’ (18 people with a gender breakdown that appears to be 10 male and 8 female) and staff members (a Senior VP who appears to be male and the other seven staff members who appear to be female) can be found here. (Should there be a more correct way to do the gender breakdown, please let me know in the Comments.)

I find the group’s name a bit odd; ‘frontier’ is something I associate with the US. Americans talk about frontiers, Canadians not so much.

If you are interested in attending the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event on March 10, 2024, just click here.

Aside: swagger at Vancouver City Hall, economic prosperity, & more?

What follows is not germane to the VR/AR community, SxSW of any year, or the Frontier Collective but it may help to understand why the City of Vancouver’s current mayor is going to events where he would seem to have no useful role to play.

Matt O’Grady’s October 4, 2023 article for Vancouver Magazine offers an eye-opening review of Mayor Ken Sim’s first year in office.

Ken Sim swept to power a year ago promising to reduce waste, make our streets safer and bring Vancouver’s “swagger” back. But can his open-book style win over the critics?

I’m sitting on a couch in the mayor’s third-floor offices, and Ken Sim is walking over to his turntable to put on another record. “How about the Police? I love this album.”

With the opening strains of  “Every Breath You Take” crackling to life, Sim is explaining his approach to conflict resolution, and how he takes inspiration from the classic management tome Getting to Yes: Negotiating Agreement Without Giving In.

Odd choice for a song to set the tone for an interview. Here’s more about the song and its origins according to the song’s Wikipedia entry,

To escape the public eye, Sting retreated to the Caribbean. He started writing the song at Ian Fleming’s writing desk on the Goldeneye estate in Oracabessa, Jamaica.[14] The lyrics are the words of a possessive lover who is watching “every breath you take; every move you make”. Sting recalled:

“I woke up in the middle of the night with that line in my head, sat down at the piano and had written it in half an hour. The tune itself is generic, an aggregate of hundreds of others, but the words are interesting. It sounds like a comforting love song. I didn’t realise at the time how sinister it is. I think I was thinking of Big Brother, surveillance and control.”[15][emphasis mine]

The interview gets odder, from O’Grady’s October 4, 2023 article,

Suddenly, the office door swings open and Sim’s chief of staff, Trevor Ford, pokes his head in (for the third time in the past 10 minutes). “We have to go. Now.”

“Okay, okay,” says Sim, turning back to address me. “Do you mind if I change while we’re talking?” And so the door closes again—and, without further ado, the Mayor of Vancouver drops trou [emphasis mine] and goes in search of a pair of shorts, continuing with a story about how some of his west-side friends are vocally against the massive Jericho Lands development promising to reshape their 4th and Alma neighbourhood.

“And I’m like, ‘Let me be very clear: I 100-percent support it, this is why—and we’ll have to agree to disagree,’” he says, trading his baby-blue polo for a fitted charcoal grey T-shirt. Meanwhile, as Sim does his wardrobe change, I’m doing everything I can to keep my eyes on my keyboard—and hoping the mayor finds his missing shorts.

It’s fair to assume that previous mayors weren’t in the habit of getting naked in front of journalists. At least, I can’t quite picture Kennedy Stewart doing so, or Larry or Gordon Campbell either. 

But it also fits a pattern that’s developing with Ken Sim as a leader entirely comfortable in his own skin. He’s in a hurry to accomplish big things—no matter who’s watching and what they might say (or write). And he eagerly embraces the idea of bringing Vancouver’s “swagger” back—outlined in his inaugural State of the City address, and underlined when he shotgunned a beer at July’s [2023] Khatsahlano Street Party.

O’Grady’s October 4, 2023 article goes on to mention some of the more practical initiatives undertaken by Mayor Sim and his supermajority of ABC (Sim’s party, A Better City) city councillors in their efforts to deal with some of the city’s longstanding and intractable problems,

For a reminder of Sim’s key priorities, you need only look at the whiteboard in the mayor’s office. At the top, there’s a row labelled “Daily Focus (Top 4)”—which are, in order, 3-3-3-1 (ABC’s housing program); Chinatown; Business Advocacy; and Mental Health/Safety.

On some files, like Chinatown, there have been clear advances: council unanimously approved the Uplifting Chinatown Action Plan in January, which devotes more resources to cleaning and sanitation services, graffiti removal, beautification and other community supports. The plan also includes a new flat rate of $2 per hour for parking meters throughout Chinatown (to encourage more people to visit and shop in the area) and a new satellite City Hall office, to improve representation. And on mental health and public safety, the ABC council moved quickly in November to take action on its promise to fund 100 new police officers and 100 new mental health professionals [emphasis mine]—though the actual hiring will take time.

O’Grady likely wrote his article a few months before its October 2023 publication date (a standard practice for magazine articles), which may explain why he didn’t mention this, from an October 10, 2023 article by Michelle Gamage and Jen St. Denis for The Tyee,

100 Cops, Not Even 10 Nurses

One year after Mayor Ken Sim and the ABC party swept into power on a promise to hire 100 cops and 100 mental health nurses to address fears about crime and safety in Vancouver, only part of that campaign pledge has been fulfilled.

At a police board meeting in September, Chief Adam Palmer announced that 100 new police officers have now joined the Vancouver Police Department.

But just 9.5 full-time equivalent positions have been filled to support the mental health [emphasis mine] side of the promise.

In fact, Vancouver Coastal Health says it’s no longer aiming [emphasis mine] to hire 100 nurses. Instead, it’s aiming for 58 staff and specialists [emphasis mine], including social workers, community liaison workers and peers, as well as other disciplines alongside nurses to deliver care.

At the police board meeting on Sept. 21 [2023], Palmer said the VPD has had no trouble recruiting new police officers and has now hired 70 new recruits who are first-time officers, as well as at least 24 experienced officers from other police services.

In contrast, it’s been a struggle for VCH to recruit nurses specializing in mental health.

BC Nurses’ Union president Adriane Gear said she remembers wondering where Sim was planning on finding 100 nurses [emphasis mine] when he first made the campaign pledge. In B.C. there are around 5,000 full-time nursing vacancies, she said. Specialized nurses are an even more “finite resource,” she added.

I haven’t seen any information as to why the number was reduced from 100 mental health positions to 58. I’m also curious as to how Mayor Ken Sim whose business is called ‘Nurse Next Door’ doesn’t seem to know there’s a shortage of nurses in the province and elsewhere.

Last year, the World Economic Forum, in collaboration with Quartz, published a January 28, 2022 article by Aurora Almendral about the worldwide nursing shortage and the effects of the COVID-19 pandemic,

The report’s [from the International Council of Nurses (ICN)] survey of nurse associations around the world painted a grim picture of a strained workforce. In Spain, nurses reported a chronic lack of PPE, and 30% caught covid. In Canada, 52% of nurses reported inadequate staffing, and 47% met the diagnostic cut-off for potential PTSD [emphasis mine].

Burnout plagued nurses around the world: 40% in Uganda, 60% in Belgium, and 63% in the US. In Oman, 38% of nurses said they were depressed, and 73% had trouble sleeping. Fifty-seven percent of UK nurses planned to leave their jobs in 2021, up from 36% in 2020. Thirty-eight percent of nurses in Lebanon did not want to be nurses anymore, but stayed in their jobs because their families needed the money.

In Australia, 17% of nurses had sought mental health support. In China, 6.5% of nurses reported suicidal thoughts.

Moving on from Mayor Sim’s odd display of ignorance (or was it cynical calculation from a candidate determined to win over a more centrist voting population?), O’Grady’s October 4, 2023 article ends on this note,

When Sim runs for reelection in 2026, as he promises to do, he’ll have a great backdrop for his campaign—the city having just hosted several games for the FIFA World Cup, which is expected to bring in $1 billion and 900,000 visitors over five years.

The renewed swagger of Sim’s city will be on full display for the world to see. So too—if left unresolved—will some of Vancouver’s most glaring and intractable social problems.

I was born in Vancouver and don’t recall the city as having swagger at any time. As for the economic prosperity that’s always promised with big events like the FIFA World Cup, I’d like to see how much the 2010 Olympic Games held in Vancouver cost taxpayers and whether or not there were long-lasting economic benefits. From a July 9, 2022 posting on Bob Mackin’s thebreaker.news,

The all-in cost to build and operate the Vancouver 2010 Games was as much as $8 billion, but the B.C. Auditor General never conducted a final report. The organizing committee, VANOC, was not covered by the freedom of information law and its records were transferred to the Vancouver Archives after the Games with restrictions not to open the board minutes and financial ledgers before fall 2025.

Mayor Sim will have two more big opportunities to show off his swagger in 2025: (1) the Invictus Games come to Vancouver and Whistler in February 2025 and will likely bring Prince Harry and Meghan Markle, the Duchess of Sussex, to the area (see the April 22, 2022 Associated Press article by Gemma Karstens-Smith on the Canadian Broadcasting Corporation website) and (2) the 2025 Junos (the Canadian equivalent to the Grammys) run from March 26 – 30, 2025, with the awards show being held on March 30, 2025 (see the January 25, 2024 article by Daniel Chai for the Daily Hive website).

While he waits, Sim may have a ‘swagger’ opportunity later this month (February 2024) when Prince Harry and the Duchess of Sussex (Meghan Markle) visit Vancouver and Whistler for “a three-day Invictus Games’ One Year to Go event in Vancouver and Whistler”; see Daniel Chai’s February 2, 2024 article for more details.

Don’t forget, should you be in Austin, Texas for the 2024 SxSW, the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event is on March 10, 2024; just click here to register. Who knows? You might get to meet Vancouver’s Mayor Ken Sim. Or, if you can’t make it to Austin, Texas, O’Grady’s October 4, 2023 article offers an unusual political profile.

Be a citizen scientist: join the ‘Wild river battle’

I got this invitation from a professor at the University of Montpellier (Université de Montpellier, France) in a February 1, 2024 email (the project ‘Wild river battle’ is being run by scientists at ETH Zurich [Swiss Federal Institute of Technology in Zürich]),

Dear all,

I hope this message finds you well. I am reaching out to share an exciting opportunity for all of us to contribute to the safeguarding of wild rivers worldwide.

We are launching a Citizen Science project in collaboration with Citizen Science Zurich, utilizing AI and satellite imagery to assess and protect the natural state of rivers on a global scale. Whether you have a passion for river conservation or simply wish to contribute to a meaningful cause, we invite you to join us in this impactful game.

To access the game, please follow this link https://lab.citizenscience.ch/en/project/769

It only takes 3-5 minutes, and the rules are simple: click on the riverscape that you find the wildest (you can also use the buttons under the images).

Thank you very much for your time in advance, and I look forward to witnessing our collective efforts make a positive impact for the conservation of our precious rivers. And we are open to receive any feedback by mail (shzong@ethz.ch) and willing to provide more information for those who are interested (https://ele.ethz.ch/research/technology-modelling/citizen-river.html).

Best regards and have fun!

Nicolas Mouquet

Scientific director of the Centre for the Synthesis
and Analysis of Biodiversity (CESAB)
5 Rue de l’École de Médecine
34000, Montpellier

I went looking for more information as per Mouquet’s email (https://ele.ethz.ch/research/technology-modelling/citizen-river.html) and found this,

Finding wild rivers with AI

A citizen science project combining AI and satellite images to evaluate rivers’ wildness.

Wild rivers are an invaluable resource that plays a vital role in maintaining healthy ecosystems and supporting biodiversity. Rivers of high ecological integrity provide habitat for a wide variety of plant and animal species, and their free-flowing waters provide a large number of services such as freshwater, supporting the needs of local communities. Protecting wild rivers is essential to ensure long-term global health, and it is our responsibility to develop management schemes to preserve these precious habitats for future generations.

Wild stretches, supporting the highest levels of biodiversity, are disappearing globally at an extremely fast rate. Deforestation, mining, pollution, booming hydropower dams and other human infrastructures are built or planned on large rivers. The increasing pressure of human activities has been causing a rapid decline of biodiversity and ecological function. We should act now to protect the rivers and be guided by the current state of rivers to identify unprotected areas that are worth being included in conservation plans. However, there is still no map of global wild river segments which could support such global conservation planning, nor a tool to monitor the wilderness of rivers over time under global changes.

How we find wild rivers, evaluate their wildness, and why we need your help

We will evaluate the level of wildness of river sections from satellite images. Remote sensing is the most efficient method for monitoring the landscape on a global and dynamic scale. Satellite images contain valuable information about the river’s course, width, depth, shape and surrounding landscape, which allow us to assess how wild they are visually.

You and other citizen scientists can help us score the wildest river sections from satellite images. Using the ranking from citizen scientists, we will run a ranking algorithm to give each image a wildness score depending on the many pairwise comparisons. These images with a wilderness score will act as a training dataset for a machine learning algorithm which will be trained to automatically score any large river segment, globally. With an accurate river wildness model, we will be able to quickly assess the wildness of the global river sections. Using such a tool, we can for instance find the river sections that are still worth protecting. This pristine river map will provide invaluable insights for conservation initiatives and enable targeted actions to safeguard and restore the remaining pristine rivers and monitor the trajectories of rivers around the world.

How to do it?

Rivers will first be segmented into river sections, with the surrounding environment as a whole landscape bounding box. The river sections will be ranked by citizen scientists, and your interpretations will form a reference dataset. The game (you can click the corresponding language to access different language versions: English, French, German, Spanish, Chinese) is easy (thanks to Citizen Science Zurich); you just have to click on the riverscape you find wilder, or click the button under the rivers. For mobile users, please use the buttons.
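The scoring pipeline the researchers describe, i.e., distilling many pairwise “which is wilder?” votes into a per-image wildness score, can be sketched with a simple Elo-style rating update. To be clear, this is purely illustrative: the project hasn’t said which ranking algorithm it uses, and every name and parameter below is invented for the example.

```python
from collections import defaultdict

def elo_rank(comparisons, k=32, start=1000.0):
    """Turn pairwise 'wilder vs. less wild' votes into per-image scores.

    comparisons: iterable of (winner_id, loser_id) pairs, one per citizen vote.
    Returns a dict mapping image id -> score (higher = judged wilder).
    """
    scores = defaultdict(lambda: start)
    for winner, loser in comparisons:
        # Expected probability that `winner` beats `loser` given current scores
        expected = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400.0))
        delta = k * (1.0 - expected)  # surprising wins move scores more
        scores[winner] += delta
        scores[loser] -= delta
    return dict(scores)

# Example: three images, river "a" repeatedly judged wilder than "b" and "c"
votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
ranking = sorted(elo_rank(votes).items(), key=lambda kv: -kv[1])
```

Scores like these, attached to each satellite image, are exactly the kind of continuous training labels a machine learning model can then regress against, which matches the “training dataset” role the project assigns to the citizen rankings.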

Before you get started, there will be this,

Your participation in the study is voluntary.

Statement of consent

By participating in the study, I confirm that I:

* have heard/read and understood the study information.
* had enough time to decide on my participation in the study.
* voluntarily participate in the study and agree to my personal data being used as described below.

Participants’ information will be handled with the utmost confidentiality. All data collected, including but not limited to demographic details, responses to survey questions, and any other pertinent information, will be securely stored and accessible only to authorized personnel involved in the research. Your personal identity will be kept strictly confidential, and any published results will be presented in aggregate form, ensuring that individual participants cannot be identified. Furthermore, your data will not be shared with any third parties and will only be used for the specific research purposes outlined in the introduction page prior to participating in the study.

I found this description of the researchers and contributors (from https://lab.citizenscience.ch/en/project/769 or ‘Wild river battle’),

Who is behind

We are ecologists at ETH Zurich focusing on biodiversity monitoring in large river corridors. Learn more about us from our homepage: Chair of Ecosystems and Landscape Evolution.

Who contributes

Anyone with an interest in protecting wild rivers can participate in this project, and of course non-governmental organizations (NGOs) and river management bureaus like CNR (Compagnie Nationale du Rhône) have also shown great interest in this project.

Should you be inspired to do more, Citizen Science Zurich lists a number of projects (ranging from the Hair SALON project to FELIDAE: Finding Elusive Links by Tracking Diet of Cats in Environment, and more) on this page. It’s a mixed listing of projects that are completed, looking for participants, and/or looking for financial resources.

There is also a Citizen Science Portal (a Canadian federal government project) that was last updated January 15, 2024. Some of the projects are national in scope while others are provincial in scope.

February 1, 2024 talk about ‘CULTUS’: a scifi, queer art installation at the University of British Columbia’s Belkin Gallery in Vancouver, Canada

Artist Zach Blas and writer/editor Jayne Wilkinson will be discussing CULTUS, an art installation spanning religiosity, science fiction, contemporary perspectives on artificial intelligence, and the techno-industrial complex, currently being shown as part of the Belkin Gallery’s January 12 – April 14, 2024 exhibition, Aporia (Notes to a Medium),

Zach Blas, CULTUS, 2023, from the 2024 exhibition at Arebyte Gallery, London, UK. Courtesy of the artist. Photo: Max Colson

Here’s what the folks at the Belkin Art Gallery (Morris and Helen Belkin Art Gallery) had to say in their January 30, 2024 announcement (received via email),

Artist Talk with Zach Blas and Jayne Wilkinson

Thursday, February 1 at 5 pm 

Please join us for a lecture by interdisciplinary artist Zach Blas, with a dialogue to follow with writer/editor Jayne Wilkinson. Blas will discuss his most recent work, CULTUS, the second in a trilogy of queer science-fiction installations addressing the beliefs, fantasies and histories that are influential to the contemporary tech industry. CULTUS (the Latin word for “worship”) considers the God-like status often afforded to artificial intelligence (AI) and examines how this religiosity is marshalled to serve beliefs about judgement and transcendence, extraction and immortality, pleasure and punishment, individual freedom and cult devotion. The conversation to follow will address some of the pressing intersecting political and ethical questions raised by both using and critiquing contemporary image technologies like AI.

This conversation will be audio-recorded; email us at belkin.gallery@ubc.ca if you are interested in listening to the recording following the event.

This talk is presented in conjunction with the Belkin’s exhibition Aporia (Notes to a Medium) and Critical Image Forum, a collaboration between the Belkin and the Department of Art History, Visual Art and Theory at UBC.

For anyone (like me) who’s never heard of either Blas or Wilkinson, there’s more on the Belkin’s event page,

Zach Blas is an artist, filmmaker and writer whose practice draws out the philosophies and imaginaries residing in computational technologies and their industries. Working across moving image, computation, installation, theory and performance, Blas has exhibited, lectured and held screenings at venues including the 12th Berlin Biennale for Contemporary Art, Whitney Museum of American Art, Tate Modern, 12th Gwangju Biennale and e-flux. His 2021 artist monograph Unknown Ideals is published by Sternberg Press. Blas is currently Assistant Professor of Visual Studies at the University of Toronto.

Jayne Wilkinson is a Toronto-based art writer and editor.

Should you be interested in attending the talk and/or the exhibition, here are some directions, from the Belkin Gallery’s Visit webpage,

Directions

The Morris and Helen Belkin Art Gallery is located at the University of British Columbia Vancouver campus, 1825 Main Mall, Vancouver BC, V6T 1Z2

Open in Maps

By Public Transit

TransLink offers many routes to UBC, including several express services (44, 84, R4, 99). The UBC Bus Loop is the last stop for each of these buses, and is located in the central area of campus near the AMS Nest. To get to the gallery, walk west on University Boulevard (about 1 block) until you reach Main Mall. Turn right onto Main Mall and continue for about 3 blocks until you reach Crescent Road. We are located on your left at the corner of Main Mall and Crescent Road, near the Flagpole Plaza.

By Car

From downtown Vancouver, proceed west on West 4th Avenue, which becomes Chancellor Blvd and then merges with NW Marine Drive. Continue west on NW Marine Drive, to the Rose Garden Parkade (on your left).

From the airport, proceed to SW Marine Drive. Stay on SW Marine Drive, which eventually merges with NW Marine Drive. Continue just past the Museum of Anthropology (on your left) to the Rose Garden Parkade (on your right).

Accessibility

Entrance

The Belkin is wheelchair accessible. The main entrance is located on the east side of the building next to Main Mall. For people requiring wheelchair or easier accessibility, use the ramp from Crescent Road to access the main gallery doors.  This entrance is level and accessible and has both a revolving door and a powered wheelchair-accessible door.

Washrooms

Washrooms are all-gender and include two multi-stall washrooms with wheelchair-accessible stalls and one stand-alone washroom that is wheelchair accessible.

Seating

Portable gallery stools are available for use.

Large Print Materials

Large print materials are available.

ASL Interpretation

ASL interpreters are available upon request for Belkin programs and events. To request interpretation for an event or tour, please contact us in advance.

Service Animals

Service dogs are welcome to accompany visitors.

Scent

The Belkin’s office is scent free. Occasionally, there are works or projects that are scent-focused.

Please ask our staff if you require any assistance or have any questions.

Admission to the gallery is free.

Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems

These days there’s a lot of international interest in policy and regulation where AI is concerned. So even though this is a little late, here’s what happened back in September 2023: the Canadian government came to an agreement with various technology companies about adopting a new voluntary code. Quinn Henderson’s September 28, 2023 article for the Daily Hive starts in a typically Canadian fashion, Note: Links have been removed,

While not quite as star-studded [emphasis mine] as the [US] White House’s AI summit, the who’s who of Canadian tech companies have agreed to new rules concerning AI.

What happened: A handful of Canada’s biggest tech companies, including Blackberry, OpenText, and Cohere, agreed to sign on to new voluntary government guidelines for the development of AI technologies and a “robust, responsible AI ecosystem in Canada.”

What’s next: The code of conduct is something of a stopgap until the government’s *real* AI regulation, the Artificial Intelligence and Data Act (AIDA), comes into effect in two years.

The regulation race is on around the globe. The EU is widely viewed as leading the way with the world’s first comprehensive regulatory AI framework set to take effect in 2026. The US is also hard at work but only has a voluntary code in place.

Henderson’s September 28, 2023 article offers a good, brief summary of the situation regarding regulation and self-regulation of AI here in Canada and elsewhere around the world, albeit, from a few months ago. Oddly, there’s no mention of what was then an upcoming international AI summit in the UK (see my November 2, 2023 posting, “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes“).

Getting back to Canada’s voluntary code of conduct, here’s the September 27, 2023 Innovation, Science and Economic Development Canada (ISED) news release about it, Note: Links have been removed,

Today [September 27, 2023], the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, announced Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which is effective immediately. The code identifies measures that organizations are encouraged to apply to their operations when they are developing and managing general-purpose generative artificial intelligence (AI) systems. The Government of Canada has already taken significant steps toward ensuring that AI technology evolves responsibly and safely through the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022. This code is a critical bridge between now and when that legislation would be coming into force. The code outlines measures that are aligned with six core principles:

Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.

Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.

Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.

Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.

Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.

Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.

This code is based on the input received from a cross-section of stakeholders, including the Government of Canada’s Advisory Council on Artificial Intelligence, through the consultation on the development of a Canadian code of practice for generative AI systems. The government will publish a summary of feedback received during the consultation in the coming days. The code will also help reinforce Canada’s contributions to ongoing international deliberations on proposals to address common risks encountered with large-scale deployment of generative AI, including at the G7 and among like-minded partners.

Quotes

“Advances in AI have captured the world’s attention with the immense opportunities they present. Canada is a global AI leader, among the top countries in the world, and Canadians have created many of the world’s top AI innovations. At the same time, Canada takes the potential risks of AI seriously. The government is committed to ensuring Canadians can trust AI systems used across the economy, which in turn will accelerate AI adoption. Through our Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, leading Canadian companies will adopt responsible guardrails for advanced generative AI systems in order to build safety and trust as the technology spreads. We will continue to ensure Canada’s AI policies are fit for purpose in a fast-changing world.”
– The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry

“We are very pleased to see the Canadian government taking a strong leadership role in building a regulatory framework that will help society maximize the benefits of AI, while addressing the many legitimate concerns that exist. It is essential that we, as an industry, address key issues like bias and ensure that humans maintain a clear role in oversight and monitoring of this incredibly exciting technology.”
– Aidan Gomez, CEO and Co-founder, Cohere

“AI technologies represent immense opportunities for every citizen and business in Canada. The societal impacts of AI are profound across education, biotech, climate and the very nature of work. Canada’s AI Code of Conduct will help accelerate innovation and citizen adoption by setting the standard on how to do it best. As Canada’s largest software company, we are honoured to partner with Minister Champagne and the Government of Canada in supporting this important step forward.”
– Mark J. Barrenechea, CEO and CTO, OpenText

“CCI has been calling for Canada to take a leadership role on AI regulation, and this should be done in the spirit of collaboration between government and industry leaders. The AI Code of Conduct is a meaningful step in the right direction and marks the beginning of an ongoing conversation about how to build a policy ecosystem for AI that fosters public trust and creates the conditions for success among Canadian companies. The global landscape for artificial intelligence regulation and adoption will evolve, and we are optimistic to see future collaboration to adapt to the emerging technological reality.”
– Benjamin Bergen, President, Council of Canadian Innovators

Quick facts

*The proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, is designed to promote the responsible design, development and use of AI systems in Canada’s private sector, with a focus on systems with the greatest impact on health, safety and human rights (high-impact systems).

*Since the introduction of the bill, the government has engaged extensively with stakeholders on AIDA and will continue to seek the advice of Canadians, experts—including the government’s Advisory Council on AI—and international partners on the novel challenges posed by generative AI, as outlined in the Artificial Intelligence and Data Act (AIDA) – Companion document.

*Bill C-27 was adopted at second reading in the House of Commons in April 2023 and was referred to the House of Commons Standing Committee on Industry and Technology for study.

You can read more about Canada’s regulation efforts (Bill C-27) and some of the critiques in my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

For now, the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems” can be found on this ISED September 2023 webpage.

Other Canadian AI policy bits and bobs

Back in 2016, shiny new Prime Minister Justin Trudeau announced the Pan-Canadian Artificial Intelligence Strategy (you can find out more about the strategy (Pillar 1: Commercialization) from this ISED Pan-Canadian Artificial Intelligence Strategy webpage, which was last updated July 20, 2022).

More recently, the Canadian Institute for Advanced Research (CIFAR), a prominent player in the Pan-Canadian AI strategy, published a report about regulating AI, from a November 21, 2023 CIFAR news release by Kathleen Sandusky, Note: Links have been removed,

New report from the CIFAR AI Insights Policy Briefs series cautions that current efforts to regulate AI are doomed to fail if they ignore a crucial aspect: the transformative impact of AI on regulatory processes themselves.

As rapid advances in artificial intelligence (AI) continue to reshape our world, global legislators and policy experts are working full-tilt to regulate this transformative technology. A new report, part of the CIFAR AI Insights Policy Briefs series, provides novel tools and strategies for a new way of thinking about regulation.

“Regulatory Transformation in the Age of AI” was authored by members of the Schwartz Reisman Institute for Technology and Society at the University of Toronto: Director and Chair Gillian Hadfield, who is also a Canada CIFAR AI Chair at the Vector Institute; Policy Researcher Jamie Amarat Sandhu; and Graduate Affiliate Noam Kolt.

The report challenges the current regulatory focus, arguing that the standard “harms paradigm” of regulating AI is necessary but incomplete. For example, current car safety regulations were not developed to address the advent of autonomous vehicles. In this way, the introduction of AI into vehicles has made some existing car safety regulations inefficient or irrelevant.

Through three Canadian case studies—in healthcare, financial services, and nuclear energy—the report illustrates some of the ways in which the targets and tools of regulation could be reconsidered for a world increasingly shaped by AI.

The brief proposes a novel concept—Regulatory Impacts Analysis (RIA)—as a means to evaluate the impact of AI on regulatory regimes. RIA aims to assess the likely impact of AI on regulatory targets and tools, helping policymakers adapt governance institutions to the changing conditions brought about by AI. The authors provide a real-world adaptable tool—a sample questionnaire—for policymakers to identify potential gaps in their domain as AI becomes more prevalent.

This report also highlights the need for a comprehensive regulatory approach that goes beyond mitigating immediate harms, recognizing AI as a “general-purpose technology” with far-reaching implications, including on the very act of regulation itself.

As AI is expected to play a pivotal role in the global economy, the authors emphasize the need for regulators to go beyond traditional approaches. The evolving landscape requires a more flexible and adaptive playbook, with tools like RIA helping to shape strategies to harness the benefits of AI, address associated risks, and prepare for the technology’s transformative impact.

You can find CIFAR’s November 2023 report, “Regulatory Transformation in the Age of AI” (PDF) here.

I have two more AI bits and these concern provincial AI policies, one from Ontario and the other from British Columbia (BC).

Stay tuned, there will be more about AI policy throughout 2024.

A formal theory for neuromorphic (brainlike) computing hardware needed

This is one of my older pieces, as the information dates back to October 2023, but neuromorphic computing is one of my key interests and I’m particularly interested to see the upsurge in the discussion of hardware. Here goes. From an October 17, 2023 news item on Nanowerk,

There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers.

Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen [Netherlands].

Key Takeaways

* Scientists worldwide are searching for new materials to build energy-saving, brain-like computer microchips as classic transistor miniaturization reaches its physical limit.
* Theoretical guidelines for brain-like computers are lacking, and their development is crucial for advancements in the field.
* The brain’s versatility and robustness serve as an inspiration, despite limited knowledge about its exact workings.
* A recent paper suggests that a theory for non-digital computers should focus on continuous, analogue signals and consider the characteristics of new materials.
* Bridging gaps between diverse scientific fields is vital for developing a foundational theory for neuromorphic computing.

An October 17, 2023 University of Groningen press release (also on EurekAlert), which originated the news item, provides more context for this proposal,

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague on passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, or hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.
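What "continuous, analogue signals" instead of 0/1 switches means can be sketched in code. The toy below is my own illustration (not from the paper), in the spirit of reservoir computing, a field Jaeger helped found: a small network of coupled analog units evolves through smooth values under an analog input, rather than flipping discrete switches. All parameter values are arbitrary.

```python
import numpy as np

# Toy continuous-state "physical" computer (illustrative, not from the
# paper): instead of flipping 0/1 switches, the state x evolves smoothly
# under leaky, nonlinear dynamics driven by an analog input signal u(t).

rng = np.random.default_rng(0)
N = 50                                   # number of coupled analog units
W = rng.normal(scale=0.1, size=(N, N))   # random internal coupling
w_in = rng.normal(scale=0.5, size=N)     # input coupling
leak = 0.3                               # leak rate (continuous relaxation)

x = np.zeros(N)
for t in range(200):
    u = np.sin(0.1 * t)                  # analog input signal
    x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u)

# The states are continuous values in (-1, 1), not 0/1 switch settings.
print(x.min(), x.max())
```

A theory of the kind the authors propose would have to say what such a continuous system computes and how reliably, which digital computing theory cannot do.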

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well: it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration between experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials [emphasis mine] and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Here’s a link to and a citation for the paper,

Toward a formal theory for computing machines made out of whatever physics offers by Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel. Nature Communications volume 14, Article number: 4911 (2023) DOI: https://doi.org/10.1038/s41467-023-40533-1 Published: 16 August 2023

This paper is open access, and there’s a 76 pp. version, “Toward a formal theory for computing machines made out of whatever physics offers: extended version” (emphasis mine), available on arXiv.

Caption: A general theory of physical computing systems would comprise existing theories as special cases. Figure taken from an extended version of the Nature Communications paper on arXiv. Credit: Jaeger et al. / University of Groningen

With regard to new materials for neuromorphic computing, my January 4, 2024 posting highlights a proposed quantum material for this purpose.

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk. Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

Physicists from Purdue University, the University of California San Diego (UCSD), and the École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believe they may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials; the work is featured on the back cover of the October 2023 issue.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. The small gaps at their ends that allow signals to pass from one neuron to the next are called synapses. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.
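The neuron/synapse division of labour described above is routinely modelled in software before being mapped to hardware. A minimal leaky integrate-and-fire sketch (a standard textbook model, not the Purdue team's code; all parameter values are hypothetical) shows the two roles: the neuron integrates and fires, while the synaptic weight scales its input and so encodes memory.

```python
# Minimal leaky integrate-and-fire neuron with one synapse (standard
# textbook model for illustration; not the Purdue team's code).

def simulate(n_steps: int, input_current: float, weight: float,
             leak: float = 0.9, threshold: float = 1.0) -> int:
    """Count output spikes. The synaptic weight scales the input,
    playing the memory-encoding role synapses play in the brain."""
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        v = leak * v + weight * input_current  # integrate with leak
        if v >= threshold:                     # fire and reset
            spikes += 1
            v = 0.0
    return spikes

# A stronger synapse makes the neuron fire more often on the same input.
print(simulate(100, 0.5, weight=0.4), simulate(100, 0.5, weight=0.8))
```

A material suited to neuromorphic hardware has to supply physical analogues of both pieces: a thresholded, resettable element (the neuron) and a tunable, persistent one (the synapse), which is exactly the dual demand the quote below describes.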

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
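Carlson's mechanism can be caricatured with a toy model (my own illustration, not the team's model; all numbers are invented): each region of the sample has a local transition temperature, and every partial heat/cool cycle nudges the transition temperature of whichever regions went metallic, so the memory ends up distributed across the whole sample rather than confined to domain boundaries.

```python
import numpy as np

# Toy caricature of ramp-reversal memory (illustrative only, not the
# team's model): each site has a local insulator-to-metal transition
# temperature Tc; partial thermal cycling shifts the Tc of sites that
# turned metallic, so the Tc map itself stores the cycling history.

rng = np.random.default_rng(1)
tc = rng.normal(loc=340.0, scale=2.0, size=100)  # local Tc in kelvin
tc_initial = tc.copy()

T_turn = 341.0   # partial cycle: heat to T_turn, then cool back down
shift = 0.05     # hypothetical per-cycle Tc shift from defect diffusion

for _ in range(20):                # repeated partial cycles
    metallic = tc < T_turn         # sites that crossed into the metal
    tc[metallic] += shift          # their local Tc drifts upward

changed = np.sum(~np.isclose(tc, tc_initial))
print(changed, "sites shifted")    # memory is distributed, not boundary-only
```

In this caricature, every site that ever turned metallic carries a shifted transition temperature afterwards, mirroring the observation that memory accumulates throughout the sample.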

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 mins. 52 secs. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th-century scientists, through the 20th century, when the nanoscale phenomenon became of greater interest and was exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where this new understanding is being integrated into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials, Volume 9, Issue 10, October 2023, 2300085. DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more, just use ‘neuromorphic hardware’ for your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.