Category Archives: artificial intelligence (AI)

DeepSeek, a Chinese rival to OpenAI and other US AI companies

There’s been quite the kerfuffle over DeepSeek during the last few days. This January 27, 2025 article by Alexandra Mae Jones for the Canadian Broadcasting Corporation (CBC) news online was my introduction to DeepSeek AI, Note: A link has been removed,

There’s a new player in AI on the world stage: DeepSeek, a Chinese startup that’s throwing tech valuations into chaos and challenging U.S. dominance in the field with an open-source model that they say they developed for a fraction of the cost of competitors.

DeepSeek’s free AI assistant — which by Monday [January 27, 2025] had overtaken rival ChatGPT to become the top-rated free application on Apple’s App Store in the United States — offers the prospect of a viable, cheaper AI alternative, raising questions on the heavy spending by U.S. companies such as Apple and Microsoft, amid a growing investor push for returns.

U.S. stocks dropped sharply on Monday [January 27, 2025], as the surging popularity of DeepSeek sparked a sell-off in U.S. chipmakers.

“[DeepSeek] performs as well as the leading models in Silicon Valley and in some cases, according to their claims, even better,” Sheldon Fernandez, co-founder of DarwinAI, told CBC News. “But they did it with a fractional amount of the resources is really what is turning heads in our industry.”

What is DeepSeek?

Little is known about the small Hangzhou startup behind DeepSeek, which was founded out of a hedge fund in 2023, but largely develops open-source AI models. 

Its researchers wrote in a paper last month that the DeepSeek-V3 model, launched on Jan. 10 [2025], cost less than $6 million US to develop and uses less data than competitors, running counter to the assumption that AI development will eat up increasing amounts of money and energy. 

Some analysts are skeptical about DeepSeek’s $6 million claim, pointing out that this figure only covers computing power. But Fernandez said that even if you triple DeepSeek’s cost estimates, it would still cost significantly less than its competitors. 

The open source release of DeepSeek-R1, which came out on Jan. 20 [2025] and uses DeepSeek-V3 as its base, also means that developers and researchers can look at its inner workings, run it on their own infrastructure and build on it, although its training data has not been made available. 

“Instead of paying Open[AI] $20 a month or $200 a month for the latest advanced versions of these models, [people] can really get these types of features for free. And so it really upends a lot of the business model that a lot of these companies were relying on to justify their very high valuations.”

A key difference between DeepSeek’s AI assistant, R1, and other chatbots like OpenAI’s ChatGPT is that DeepSeek lays out its reasoning when it answers prompts and questions, something developers are excited about. 

“The dealbreaker is the access to the raw thinking steps,” Elvis Saravia, an AI researcher and co-founder of the U.K.-based AI consulting firm DAIR.AI, wrote on X, adding that the response quality was “comparable” to OpenAI’s latest reasoning model, o1.

U.S. dominance in AI challenged

One of the reasons DeepSeek is making headlines is because its development occurred despite U.S. actions to keep Americans at the top of AI development. In 2022, the U.S. curbed exports of computer chips to China, hampering their advanced supercomputing development.

The latest AI models from DeepSeek are widely seen to be competitive with those of OpenAI and Meta, which rely on high-end computer chips and extensive computing power.
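As an aside from me, for anyone wondering what “access to the raw thinking steps” looks like in practice, here’s a minimal sketch of how a developer might retrieve DeepSeek-R1’s reasoning through the company’s OpenAI-compatible API. The base URL, the ‘deepseek-reasoner’ model name, and the reasoning_content field are assumptions on my part drawn from DeepSeek’s public documentation, so verify them against the current docs before relying on this.

```python
# A minimal sketch (mine, not from any of the quoted articles) of reading
# DeepSeek-R1's visible reasoning via the company's OpenAI-compatible API.
# Assumptions to verify: the base URL, the "deepseek-reasoner" model name,
# and the "reasoning_content" field on the returned message.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, not a real key
    base_url="https://api.deepseek.com",  # assumed DeepSeek endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # assumed R1 model name
    messages=[{"role": "user", "content": "Is 9.11 larger than 9.9? Explain."}],
)

message = response.choices[0].message
print("Raw thinking steps:\n", getattr(message, "reasoning_content", None))
print("Final answer:\n", message.content)
```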

Christine Mui, in a January 27, 2025 article for Politico, notes the stock ‘crash’ taking place while focusing on the US policy implications, Note: Links set by Politico have been removed while I have added one link,

A little-known Chinese artificial intelligence startup shook the tech world this weekend by releasing an OpenAI-like assistant, which shot to the No.1 ranking on Apple’s app store and caused American tech giants’ stocks to tumble.

From Washington’s perspective, the news raised an immediate policy alarm: It happened despite consistent, bipartisan efforts to stifle AI progress in China.

In tech terms, what freaked everyone out about DeepSeek’s R1 model is that it replicated — and in some cases, surpassed — the performance of OpenAI’s cutting-edge o1 product across a host of performance benchmarks, at a tiny fraction of the cost.

The business takeaway was straightforward. DeepSeek’s success shows that American companies might not need to spend nearly as much as expected to develop AI models. That both intrigues and worries investors and tech leaders.

The policy implications, though, are more complex. Washington’s rampant anxiety about beating China has led to policies that the industry has very mixed feelings about.

On one hand, most tech firms hate the export controls that stop them from selling as much to the world’s second-largest economy, and force them to develop new products if they want to do business with China. If DeepSeek shows those rules are pointless, many would be delighted to see them go away.

On the other hand, anti-China, protectionist sentiment has encouraged Washington to embrace a whole host of industry wishlist items, from a lighter-touch approach to AI rules to streamlined permitting for related construction projects. Does DeepSeek mean those, too, are failing? Or does it trigger a doubling-down?

DeepSeek’s success truly seems to challenge the belief that the future of American AI demands ever more chips and power. That complicates Trump’s interest in rapidly building out that kind of infrastructure in the U.S.

Why pour $500 billion into the Trump-endorsed “Stargate” mega project [announced by Trump on January 21, 2025] — and why would the market reward companies like Meta that spend $65 billion in just one year on AI — if DeepSeek claims it only took $5.6 million and second-tier Nvidia chips to train one of its latest models? (U.S. industry insiders dispute the startup’s figures and claim they don’t tell the full story, but even at 100 times that cost, it would be a bargain.)

Tech companies, of course, love the recent bloom of federal support, and it’s unlikely they’ll drop their push for more federal investment to match anytime soon. Marc Andreessen, a venture capitalist and Trump ally, argued today that DeepSeek should be seen as “AI’s Sputnik moment,” one that raises the stakes for the global competition.

That would strengthen the case that some American AI companies have been pressing for the new administration to invest government resources into AI infrastructure (OpenAI), tighten restrictions on China (Anthropic) and ease up on regulations to ensure their developers build “artificial general intelligence” before their geopolitical rivals.

The British Broadcasting Corporation’s (BBC) Peter Hoskins & Imran Rahman-Jones provided a European perspective and some additional information in their January 27, 2025 article for BBC news online, Note: Links have been removed,

US tech giant Nvidia lost over a sixth of its value after the surging popularity of a Chinese artificial intelligence (AI) app spooked investors in the US and Europe.

DeepSeek, a Chinese AI chatbot reportedly made at a fraction of the cost of its rivals, launched last week but has already become the most downloaded free app in the US.

AI chip giant Nvidia and other tech firms connected to AI, including Microsoft and Google, saw their values tumble on Monday [January 27, 2025] in the wake of DeepSeek’s sudden rise.

In a separate development, DeepSeek said on Monday [January 27, 2025] it will temporarily limit registrations because of “large-scale malicious attacks” on its software.

The DeepSeek chatbot was reportedly developed for a fraction of the cost of its rivals, raising questions about the future of America’s AI dominance and the scale of investments US firms are planning.

DeepSeek is powered by the open source DeepSeek-V3 model, which its researchers claim was trained for around $6m – significantly less than the billions spent by rivals.

But this claim has been disputed by others in AI.

The researchers say they use already existing technology, as well as open source code – software that can be used, modified or distributed by anybody free of charge.

DeepSeek’s emergence comes as the US is restricting the sale of the advanced chip technology that powers AI to China.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.

It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.

After DeepSeek-R1 was launched earlier this month, the company boasted of “performance on par with” one of OpenAI’s latest models when used for tasks such as maths, coding and natural language reasoning.

In Europe, Dutch chip equipment maker ASML ended Monday’s trading with its share price down by more than 7% while shares in Siemens Energy, which makes hardware related to AI, had plunged by a fifth.

“This idea of a low-cost Chinese version hasn’t necessarily been forefront, so it’s taken the market a little bit by surprise,” said Fiona Cincotta, senior market analyst at City Index.

“So, if you suddenly get this low-cost AI model, then that’s going to raise concerns over the profits of rivals, particularly given the amount that they’ve already invested in more expensive AI infrastructure.”

Singapore-based technology equity adviser Vey-Sern Ling told the BBC it could “potentially derail the investment case for the entire AI supply chain”.

Who founded DeepSeek?

The company was founded in 2023 by Liang Wenfeng in Hangzhou, a city in southeastern China.

The 40-year-old, an information and electronic engineering graduate, also founded the hedge fund that backed DeepSeek.

He reportedly built up a store of Nvidia A100 chips, now banned from export to China.

Experts believe this collection – which some estimates put at 50,000 – led him to launch DeepSeek, by pairing these chips with cheaper, lower-end ones that are still available to import.

Mr Liang was recently seen at a meeting between industry experts and the Chinese premier Li Qiang.

In a July 2024 interview with The China Academy, Mr Liang said he was surprised by the reaction to the previous version of his AI model.

“We didn’t expect pricing to be such a sensitive issue,” he said.

“We were simply following our own pace, calculating costs, and setting prices accordingly.”

A January 28, 2025 article by Daria Solovieva for salon.com covers much the same territory as the others and includes a few details about security issues,

The pace at which U.S. consumers have embraced DeepSeek is raising national security concerns similar to those surrounding TikTok, the social media platform that faces a ban unless it is sold to a non-Chinese company.

The U.S. Supreme Court this month upheld a federal law that requires TikTok’s sale. The Court sided with the U.S. government’s argument that the app can collect and track data on its 170 million American users. President Donald Trump has paused enforcement of the ban until April to try to negotiate a deal.

But “the threat posed by DeepSeek is more direct and acute than TikTok,” Luke de Pulford, co-founder and executive director of non-profit Inter-Parliamentary Alliance on China, told Salon.

DeepSeek is a fully Chinese company and is subject to Communist Party control, unlike TikTok which positions itself as independent from parent company ByteDance, he said. 

“DeepSeek logs your keystrokes, device data, location and so much other information and stores it all in China,” de Pulford said. “So you’ll never know if the Chinese state has been crunching your data to gain strategic advantage, and DeepSeek would be breaking the law if they told you.”  

I wonder if AI companies in other countries also log keystrokes, etc. Is it theoretically possible that one of those governments or their agencies could gain access to your data? The answer is obvious for China, but people in other countries may face the same issues.

Censorship: DeepSeek and ChatGPT

Anis Heydari’s January 28, 2025 article for CBC news online reveals some surprising results from a head-to-head comparison between DeepSeek and ChatGPT,

The Chinese-made AI chatbot DeepSeek may not always answer some questions about topics that are often censored by Beijing, according to tests run by CBC News and The Associated Press, and is providing different information than its U.S.-owned competitor ChatGPT.

The new, free chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test it. 

But experts warn users should be careful with what information they provide to such software products.

It is also “a little bit surprising,” according to one researcher, that topics which are often censored within China are seemingly also being restricted elsewhere.

“A lot of services will differentiate based on where the user is coming from when deciding to deploy censorship or not,” said Jeffrey Knockel, who researches software censorship and surveillance at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy.

“With this one, it just seems to be censoring everyone.”

Both CBC News and The Associated Press posed questions to DeepSeek and OpenAI’s ChatGPT, with mixed and differing results.

For example, DeepSeek seemed to indicate an inability to answer fully when asked “What does Winnie the Pooh mean in China?” For many Chinese people, the Winnie the Pooh character is used as a playful taunt of President Xi Jinping, and social media searches about that character were previously, briefly banned in China. 

DeepSeek said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.

Then, abruptly, it added the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens,” and that all online content is managed under Chinese laws and socialist core values, with the aim of protecting national security and social stability.

CBC News was unable to produce this response. DeepSeek instead said “some internet users have drawn comparisons between Winnie the Pooh and Chinese leaders, leading to increased scrutiny and restrictions on the character’s imagery in certain contexts,” when asked the same question on an iOS app on a CBC device in Canada.

Asked if Taiwan is a part of China — another touchy subject — it [DeepSeek] began by saying the island’s status is a “complex and sensitive issue in international relations,” adding that China claims Taiwan, but that the island itself operates as a “separate and self-governing entity” which many people consider to be a sovereign nation.

But as that answer was being typed out, for both CBC and the AP, it vanished and was replaced with: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

… Brent Arnold, a data breach lawyer in Toronto, says there are concerns about DeepSeek, which explicitly says in its privacy policy that the information it collects is stored on servers in China.

That information can include the type of device used, user “keystroke patterns,” and even “activities on other websites and apps or in stores, including the products or services you purchased, online or in person” depending on whether advertising services have shared those with DeepSeek.

“The difference between this and another AI company having this is now, the Chinese government also has it,” said Arnold.

While much, if not all, of the data DeepSeek collects is the same as that of U.S.-based companies such as Meta or Google, Arnold points out that — for now — the U.S. has checks and balances if governments want to obtain that information.

“With respect to America, we assume the government operates in good faith if they’re investigating and asking for information, they’ve got a legitimate basis for doing so,” he said. 

Right now, Arnold says it’s not accurate to compare Chinese and U.S. authorities in terms of their ability to take personal information. But that could change.

“I would say it’s a false equivalency now. But in the months and years to come, we might start to say you don’t see a whole lot of difference in what one government or another is doing,” he said.

Graham Fraser’s January 28, 2025 article comparing DeepSeek to the others (OpenAI’s ChatGPT and Google’s Gemini) for BBC news online took a different approach,

Writing Assistance

When you ask ChatGPT what the most popular reasons to use ChatGPT are, it says that assisting people to write is one of them.

From gathering and summarising information in a helpful format to even writing blog posts on a topic, ChatGPT has become an AI companion for many across different workplaces.

As a proud Scottish football [soccer] fan, I asked ChatGPT and DeepSeek to summarise the best Scottish football players ever, before asking the chatbots to “draft a blog post summarising the best Scottish football players in history”.

DeepSeek responded in seconds, with a top ten list – Kenny Dalglish of Liverpool and Celtic was number one. It helpfully summarised which position the players played in, their clubs, and a brief list of their achievements.

DeepSeek also detailed two non-Scottish players – Rangers legend Brian Laudrup, who is Danish, and Celtic hero Henrik Larsson. For the latter, it added “although Swedish, Larsson is often included in discussions of Scottish football legends due to his impact at Celtic”.

For its subsequent blog post, it did go into detail of Laudrup’s nationality before giving a succinct account of the careers of the players.

ChatGPT’s answer to the same question contained many of the same names, with “King Kenny” once again at the top of the list.

Its detailed blog post briefly and accurately went into the careers of all the players.

It concluded: “While the game has changed over the decades, the impact of these Scottish greats remains timeless.” Indeed.

For this fun test, DeepSeek was certainly comparable to its best-known US competitor.

Coding

Brainstorming ideas

Learning and research

Steaming ahead

The tasks I set the chatbots were simple but they point to something much more significant – the winner of the so-called AI race is far from decided.

For all the vast resources US firms have poured into the tech, their Chinese rival has shown their achievements can be emulated.

Reception from the science community

Days before the news outlets discovered DeepSeek, the company published a paper about its Large Language Models (LLMs) and its new chatbot on arXiv. Here’s a little more information,

DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

[over 100 authors are listed]

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.

Cite as: arXiv:2501.12948 [cs.CL]
(or arXiv:2501.12948v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2501.12948

Submission history

From: Wenfeng Liang [view email]
[v1] Wed, 22 Jan 2025 15:19:35 UTC (928 KB)

You can also find a PDF version of the paper here or another online version here at Hugging Face.
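Since the model weights are open, here’s a minimal sketch (mine, not DeepSeek’s) of what running one of the smaller distilled R1 checkpoints locally might look like with the Hugging Face transformers library. The repository id is an assumption on my part; check the DeepSeek-AI organization on Hugging Face for the exact model names and the sizes your hardware can handle.

```python
# A minimal sketch of running one of DeepSeek's distilled R1 checkpoints locally.
# The repository id below is an assumption; confirm the exact name (and pick a
# size your hardware can handle) on the DeepSeek-AI Hugging Face organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# R1-style models are prompted like any chat model; their reasoning steps
# typically appear in the generated text before the final answer.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```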

As for the science community’s response, the title of Elizabeth Gibney’s January 23, 2025 article “China’s cheap, open AI model DeepSeek thrills scientists” for Nature says it all, Note: Links have been removed,

A Chinese-built large language model called DeepSeek-R1 is thrilling scientists as an affordable and open rival to ‘reasoning’ models such as OpenAI’s o1.

These models generate responses step-by-step, in a process analogous to human reasoning. This makes them more adept than earlier language models at solving scientific problems and could make them useful in research. Initial tests of R1, released on 20 January, show that its performance on certain tasks in chemistry, mathematics and coding is on par with that of o1 — which wowed researchers when it was released by OpenAI in September.

“This is wild and totally unexpected,” Elvis Saravia, an AI researcher and co-founder of the UK-based AI consulting firm DAIR.AI, wrote on X.

R1 stands out for another reason. DeepSeek, the start-up in Hangzhou that built the model, has released it as ‘open-weight’, meaning that researchers can study and build on the algorithm. Published under an MIT licence, the model can be freely reused but is not considered fully open source, because its training data has not been made available.

“The openness of DeepSeek is quite remarkable,” says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. By comparison, o1 and other models built by OpenAI in San Francisco, California, including its latest effort o3 are “essentially black boxes”, he says.

DeepSeek hasn’t released the full cost of training R1, but it is charging people using its interface around one-thirtieth of what o1 costs to run. The firm has also created mini ‘distilled’ versions of R1 to allow researchers with limited computing power to play with the model. An “experiment that cost more than £300 with o1, cost less than $10 with R1,” says Krenn. “This is a dramatic difference which will certainly play a role [in] its future adoption.”

The kerfuffle has died down for now.

Association for Advancing Participatory Sciences (AAPS; formerly the Citizen Science Association) January 2025 newsletter highlights

Here are a few excerpts from the Association for Advancing Participatory Sciences (AAPS; formerly the Citizen Science Association) January 2025 newsletter (received via email),

AI and the Future of Citizen Science: event and special collection

WEBINAR: Thursday, February 6 [2025], 12pm US Eastern Time

A conversation with editors and leaders

In December we announced a new special collection on the Future of Artificial Intelligence and Citizen Science. This open-access special collection of 12 papers explores the potential of AI coupled with citizen science in accelerating data processing, expanding project reach, enhancing data quality, and broadening engagement opportunities.

To help orient you to the themes covered in the special collection, issue editors Lucy Fortson, Kevin Crowston, Laure Kloetzer, and Marisa Ponti will join us for a special conversation with Marc Kuchner, Citizen Science Officer, NASA, February 6, 12pm ET. This event will go beyond a recap of papers presented in the special collection, and invite panelists to share their thoughts and perspectives on ethical considerations, challenges, and future directions. 

>>Register here for this conversation on AI 

Interested in Citizen Science: Theory & Practice

Call for Abstracts (closing soon): Galleries, Libraries, Archives, and Museums

A call for abstracts is open for a forthcoming Special Collection in Citizen Science: Theory and Practice which will explore galleries, libraries, archives, and museums (GLAM) participatory science efforts in order to support and empower the global field of participatory sciences. By sharing innovative practices and advancing theories, this collection will contribute to the continued refinement of best practices in these vital ‘third spaces’ and beyond. Issue overview and submission deadlines and logistics are available on the AAPS website. Abstracts accepted through 28 February 2025.

>> Share this call for papers with the GLAM organizations in your network 

More events from the AAPS-partnered 2025 NASA Cit Sci Leaders Series: 

Artificial Intelligence, Open Data, Funding, and more

The NASA Citizen Science Leaders Series is a professional learning service for those leading, hoping to lead, or wanting to learn more about NASA Citizen Science. The following events are open to the public. 

  • Artificial Intelligence: This event, in collaboration with AAPS, features the issue editors from the new Special Collection sharing their key takeaways and hot takes on the topic. Register here. [February 6, 2025] Noon ET start.
  • Artificial Intelligence in practice: On February 20 [2025] the Zooniverse’s Dr. Laura Trouille will join us to share new functionality of the Zooniverse platform, including ways that Zooniverse projects are adjusting to work with new Artificial Intelligence/machine learning tools. Register here. Noon ET start.
  • Open Data Management plans and long-term archives of citizen science project data: On March 6 [2025] Dr. Steven Crawford, who leads NASA’s Open Science work, will discuss these issues and more. Register here. 3 pm ET start.
  • Funding: On March 13 [2025] explore the landscape of different NASA proposal calls and hear insights on how solicitations are written, how proposals are reviewed, and how funding is handled. Register here. 3 pm ET start.

Members in AAPS Connect can get instant notices when opportunities are posted, often directly from the source. Interested in direct networking with field leaders and being the first to hear of important jobs, grants, and more? Become a member of AAPS (tiered pricing starts as low as $0).

Jobs:

  • iNaturalist is hiring a Senior Communications Manager responsible for delivering engaging, visual communications about iNaturalist to reach and engage new audiences. Full details here.
  • Reef Environmental Education Foundation (REEF) is hiring an Education coordinator to support activities related to REEF Ocean Explorers and Discovery programming, including K-12 and lifelong learning education and public outreach programs. Full details available here.
  • Cornell Lab of Ornithology is hiring an Extension Associate to serve as the thought leader and team leader for Youth and Community Engagement for the Lab, both nationally and in international settings, with key responsibilities in strategic planning, partnership development, implementation, and evaluation of impact. Full details available here.

Should you be interested in receiving AAPS newsletters, visit the organization’s homepage.

Robot rights at the University of British Columbia (UBC)?

Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that, while attention-getting, is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question & answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,

With the rapid development and proliferation of AI tools come significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.

These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights. 

Why launch this course?

BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.

NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.

What are robot rights?

BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.

We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.

For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?

In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.

There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.

What is the status of robot rights in Canada?

BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.

The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.

Ontario is the only province to have adopted regulation of AI use and responsibility, specifically a bill which regulates AI use within the public sector but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.

There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI act, and the blueprint for an AI Bill of Rights in the U.S.

Interview language(s): English

Legal services online: Lawyer working on a laptop with virtual screen icons for business legislation, notary public, and justice. Courtesy: University of British Columbia

I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,

We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.

We will tackle topics such as:

· Deepfakes, cyberattacks, and autonomous vehicles

· Predictive policing [emphasis mine; see my November 23, 2017 posting “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction“], facial recognition, probabilistic DNA genotyping, and police robots 

· Access to justice: will AI enhance it or deepen inequality?

· Risk assessment algorithms 

· AI tools in legal practice 

· Critical and Indigenous perspectives on AI

· The future of AI, including legal personality, legal rights and criminal responsibility for AI

This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.

“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”

Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023). 


An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:

“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”

– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering

“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.” 

– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School

Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:

“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”

– Nathan Cheung, UBC law student 

“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”

– Isabelle Sweeney, UBC law student 

Key Dates:

  • October 29, 2024: UBC AI & Criminal Justice Initiative launches
  • November 19, 2024: AI & Criminal Justice: Primer released 
  • January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
    • AI & Criminal Justice: Cases and Commentary released 
    • Launch of new AI & Criminal Justice Seminar
    • Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
  • February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
  • April 2, 2025: AI & Law Student Symposium (hybrid) More Info & RSVP

Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A: “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.” should read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.

As for anyone who’s interested in robots and their rights, there’s this August 1, 2023 posting “Should robots have rights? Confucianism offers some ideas” featuring Carnegie Mellon University’s Tae Wan Kim (profile).

FrogHeart’s 2024 comes to an end as 2025 comes into view

First, thank you to anyone who’s dropped by to read any of my posts. Second, I didn’t quite catch up on my backlog in what was then the new year (2024) despite my promises. (sigh) I will try to publish my drafts in a more timely fashion but I start this coming year as I did 2024 with a backlog of two to three months. This may be my new normal.

As for now, here’s an overview of FrogHeart’s 2024. The posts that follow are loosely organized under a heading but many of them could fit under other headings as well. After my informal review, there’s some material on foretelling the future as depicted in an exhibition, “Oracles, Omens and Answers,” at the Bodleian Libraries, University of Oxford.

Human enhancement: prosthetics, robotics, and more

Within a year or two of starting this blog I created a tag ‘machine/flesh’ to organize information about a number of converging technologies such as robotics, brain implants, and prosthetics that could alter our concepts of what it means to be human. The larger category of human enhancement functions in much the same way, also allowing a greater range of topics to be covered.

Here are some of the 2024 human enhancement and/or machine/flesh stories on this blog,

Other species are also being rendered ‘machine/flesh’,

The year of the hydrogel?

It was the year of the hydrogel for me (btw, hydrogels are squishy materials; I have more of a description after this list),

As for anyone who’s curious about hydrogels, there’s this from an October 20, 2016 article by D.C. Demetre for ScienceBeta, Note: A link has been removed,

Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%.

It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.

These days hydrogels can be found in many everyday products, from disposable nappies and soft contact lenses to plant-water crystals. But the history of hydrogels for medical applications started in the 1960s.

Scientists developed artificial materials with the ambitious goal of using them in permanent contact applications, ones that are implanted in the body permanently.

For anyone who wants a more technical explanation, there’s the Hydrogel entry on Wikipedia.

Science education and citizen science

Where science education is concerned I’m seeing some innovative approaches to teaching science, which can include citizen science. As for citizen science (also known as, participatory science) I’ve been noticing heightened interest at all age levels.

Artificial intelligence

It’s been another year where artificial intelligence (AI) has absorbed a lot of energy from nearly everyone. I’m highlighting the more unusual AI stories I’ve stumbled across,

As you can see, I’ve tucked in two tangentially related stories: one references a neuromorphic computing story (see my Neuromorphic engineering category or search for ‘memristors’ in the blog search engine for more on brain-like computing topics) and the other concerns intellectual property. There are many, many more stories on these topics.

Art/science (or art/sci or sciart)

It’s a bit of a surprise to see how many art/sci stories were published here this year, although some might be better described as art/tech stories.

There may be more 2024 art/sci stories but the list was getting long. In addition to searching for art/sci on the blog search engine, you may want to try data sonification too.

Moving off planet to outer space

This is not a big interest of mine but there were a few stories,

A writer/blogger’s self-indulgences

Apparently books can be dangerous and not in a ‘ban [fill in the blank] from the library’ kind of way,

Then, there are these,

New uses for electricity,

Given the name for this blog, it has to be included,

  • Frog saunas, published September 15, 2024; this one includes what seems to be a mild scientific kerfuffle

I’ve been following Lomiko Metals (graphite mining) for a while,

Who would have guessed?

Another bacteria story,

New crimes,

Origins of life,

Dirt

While no one year features a large number of ‘dirt’ stories, it has been a recurring theme here throughout the years,

Regenerative medicine

In addition to or instead of using the ‘regenerative medicine’ tag, I might use ‘tissue engineering’ or ‘tissue scaffolding’,

To sum it up

It was an eclectic year.

Peering forward into 2025 and futurecasting

I expect to be delighted, horrified, thrilled, and left shaking my head by science stories in 2025. Year after year the world of science reveals a world of wonder.

More mundanely, I can state with some confidence that my commentary (mentioned in the future-oriented subsection of my 2023 review and 2024 look forward) on Quantum Potential, a 2023 report from the Council of Canadian Academies, will be published early in this new year as I’ve almost finished writing it.

As for more about the future, I’ve got this December 3, 2024 essay in The Conversation (“Five ways to predict the future from around the world – from spider divination to bibliomancy”) about an exhibition, written by Michelle Aroney (Research Fellow in Early Modern History, University of Oxford) and David Zeitlyn (Professor of Social Anthropology, University of Oxford) (h/t December 3, 2024 news item on phys.org), Note: Links have been removed,

Some questions are hard to answer and always have been. Does my beloved love me back? Should my country go to war? Who stole my goats?

Questions like these have been asked of diviners around the world throughout history – and still are today. From astrology and tarot to reading entrails, divination comes in a wide variety of forms.

Yet they all address the same human needs. They promise to tame uncertainty, help us make decisions or simply satisfy our desire to understand.

Anthropologists and historians like us study divination because it sheds light on the fears and anxieties of particular cultures, many of which are universal. Our new exhibition at Oxford’s Bodleian Library, Oracles, Omens & Answers, explores these issues by showcasing divination techniques from around the world.

1. Spider divination

In Cameroon, Mambila spider divination (ŋgam dù) addresses difficult questions to spiders or land crabs that live in holes in the ground.

Asking the spiders a question involves covering their hole with a broken pot and placing a stick, a stone and cards made from leaves around it. The diviner then asks a question in a yes or no format while tapping the enclosure to encourage the spider or crab to emerge. The stick and stone represent yes or no, while the leaf cards, which are specially incised with certain meanings, offer further clarification.

2. Palmistry

Reading people’s palms (palmistry) is well known as a fairground amusement, but serious forms of this divination technique exist in many cultures. The practice of reading the hands to gather insights into a person’s character and future was used in many ancient cultures across Asia and Europe.

In some traditions, the shape and depth of the lines on the palm are richest in meaning. In others, the size of the hands and fingers are also considered. In some Indian traditions, special marks and symbols appearing on the palm also provide insights.

Palmistry experienced a huge resurgence in 19th-century England and America, just as the science of fingerprints was being developed. If you could identify someone from their fingerprints, it seemed plausible to read their personality from their hands.

3. Bibliomancy

If you want a quick answer to a difficult question, you could try bibliomancy. Historically, this DIY [do-it-yourself] divining technique was performed with whatever important books were on hand.

Throughout Europe, the works of Homer or Virgil were used. In Iran, it was often the Divan of Hafiz, a collection of Persian poetry. In Christian, Muslim and Jewish traditions, holy texts have often been used, though not without controversy.

4. Astrology

Astrology exists in almost every culture around the world. As far back as ancient Babylon, astrologers have interpreted the heavens to discover hidden truths and predict the future.

5. Calendrical divination

Calendars have long been used to divine the future and establish the best times to perform certain activities. In many countries, almanacs still advise auspicious and inauspicious days for tasks ranging from getting a haircut to starting a new business deal.

In Indonesia, Hindu almanacs called pawukon [calendar] explain how different weeks are ruled by different local deities. The characteristics of the deities mean that some weeks are better than others for activities like marriage ceremonies.

You’ll find logistics for the exhibition in this September 23, 2024 Bodleian Libraries University of Oxford press release about the exhibit, Note: Links have been removed,

Oracles, Omens and Answers

6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library

The Bodleian Libraries’ new exhibition, Oracles, Omens and Answers, will explore the many ways in which people have sought answers in the face of the unknown across time and cultures. From astrology and palm reading to weather and public health forecasting, the exhibition demonstrates the ubiquity of divination practices, and humanity’s universal desire to tame uncertainty, diagnose present problems, and predict future outcomes.

Through plagues, wars and political turmoil, divination, or the practice of seeking knowledge of the future or the unknown, has remained an integral part of society. Historically, royals and politicians would consult with diviners to guide decision-making and incite action. People have continued to seek comfort and guidance through divination in uncertain times — the COVID-19 pandemic saw a rise in apps enabling users to generate astrological charts or read the Yijing [I Ching], alongside a growth in horoscope and tarot communities on social media such as ‘WitchTok’. Many aspects of our lives are now dictated by algorithmic predictions, from e-health platforms to digital advertising. Scientific forecasters as well as doctors, detectives, and therapists have taken over many of the societal roles once held by diviners. Yet the predictions of today’s experts are not immune to criticism, nor can they answer all our questions.

Curated by Dr Michelle Aroney, whose research focuses on early modern science and religion, and Professor David Zeitlyn, an expert in the anthropology of divination, the exhibition will take a historical-anthropological approach to methods of prophecy, prediction and forecasting, covering a broad range of divination methods, including astrology, tarot, necromancy, and spider divination.

Dating back as far as ancient Mesopotamia, the exhibition will show us that the same kinds of questions have been asked of specialist practitioners from around the world throughout history. What is the best treatment for this illness? Does my loved one love me back? When will this pandemic end? Through materials from the archives of the Bodleian Libraries alongside other collections in Oxford, the exhibition demonstrates just how universally human it is to seek answers to difficult questions.

Highlights of the exhibition include: oracle bones from Shang Dynasty China (ca. 1250-1050 BCE); an Egyptian celestial globe dating to around 1318; a 16th-century armillary sphere from Flanders, once used by astrologers to place the planets in the sky in relation to the Zodiac; a nineteenth-century illuminated Javanese almanac; and the autobiography of astrologer Joan Quigley, who worked with Nancy and Ronald Reagan in the White House for seven years. The casebooks of astrologer-physicians in 16th- and 17th-century England also offer rare insights into the questions asked by clients across the social spectrum, about their health, personal lives, and business ventures, and in some cases the actions taken by them in response.

The exhibition also explores divination which involves the interpretation of patterns or clues in natural things, with the idea that natural bodies contain hidden clues that can be decrypted. Some diviners inspect the entrails of sacrificed animals (known as ‘extispicy’), as evidenced by an ancient Mesopotamian cuneiform tablet describing the observation of patterns in the guts of birds. Others use human bodies, with palm readers interpreting characters and fortunes etched in their clients’ hands. A sketch of Oscar Wilde’s palms – which his palm reader believed indicated “a great love of detail…extraordinary brain power and profound scholarship” – shows the revival of palmistry’s popularity in 19th century Britain.

The exhibition will also feature a case study of spider divination practised by the Mambila people of Cameroon and Nigeria, which is the research specialism of curator Professor David Zeitlyn, himself a Ŋgam dù diviner. This process uses burrowing spiders or land crabs to arrange marked leaf cards into a pattern, which is read by the diviner. The display will demonstrate the methods involved in this process and the way in which its results are interpreted by the card readers. African basket divination has also been observed through anthropological research, where diviners receive answers to their questions in the form of the configurations of thirty plus items after they have been tossed in the basket.

Dr Michelle Aroney and Professor David Zeitlyn, co-curators of the exhibition, say:

Every day we confront the limits of our own knowledge when it comes to the enigmas of the past and present and the uncertainties of the future. Across history and around the world, humans have used various techniques that promise to unveil the concealed, disclosing insights that offer answers to private or shared dilemmas and help to make decisions. Whether a diviner uses spiders or tarot cards, what matters is whether the answers they offer are meaningful and helpful to their clients. What is fun or entertainment for one person is deadly serious for another.

Richard Ovenden, Bodley’s [a nickname? Bodleian Libraries were founded by Sir Thomas Bodley] Librarian, said:

People have tried to find ways of predicting the future for as long as we have had recorded history. This exhibition examines and illustrates how across time and culture, people manage the uncertainty of everyday life in their own way. We hope that through the extraordinary exhibits, and the scholarship that brings them together, visitors to the show will appreciate the long history of people seeking answers to life’s biggest questions, and how people have approached it in their own unique way.

The exhibition will be accompanied by the book Divinations, Oracles & Omens, edited by Michelle Aroney and David Zeitlyn, which will be published by Bodleian Library Publishing on 5 December 2024.

Courtesy: Bodleian Libraries, University of Oxford

I’m not sure why the preceding image is used to illustrate the exhibition webpage but I find it quite interesting. Should you be in Oxford, UK and lucky enough to visit the exhibition, there are a few more details on the Oracles, Omens and Answers event webpage, Note: There are 26 Bodleian Libraries at Oxford and the exhibition is being held in the Weston Library,

EXHIBITION

Oracles, Omens and Answers

6 December 2024 – 27 April 2025

ST Lee Gallery, Weston Library

Free admission, no ticket required

Note: This exhibition includes a large continuous projection of spider divination practice, including images of the spiders in action.

Exhibition tours

Oracles, Omens and Answers exhibition tours are available on selected Wednesdays and Saturdays from 1–1.45pm and are open to all.

These free gallery tours are led by our dedicated volunteer team and places are limited. Check available dates and book your tickets.

You do not need to book a tour to visit the exhibition. Please meet by the entrance doors to the exhibition at the rear of Blackwell Hall.

Happy 2025! And, once again, thank you.

AI and the 2024 Nobel prizes

Artificial intelligence made a splash when the 2024 Nobel Prize announcements were made, as it was a key factor in both the physics and the chemistry prizes.

Where do physics, chemistry, and AI go from here?

I have a few speculative pieces about physics, chemistry, and AI. First off, we have Nello Cristianini’s (Professor of Artificial Intelligence at the University of Bath, England) October 10, 2024 essay for The Conversation, Note: Links have been removed,

The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.

We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.

We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.

The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.

There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.

The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.

Hopfield’s work is not particularly in use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. This is concerned with neural networks, a model of computing that mimics the human brain’s structure and function to process data. Backpropagation allows scientists to “train” enormous neural networks. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.

Every two years, since 1994, scientists have been holding a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP).

For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.

Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.

This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as for the general attribution of a discovery to a field like physics.

We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.
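For anyone who’d like to see what backpropagation (mentioned in the excerpt above) actually does, here’s a toy sketch from me: a tiny two-layer network learning the XOR function, with the output error pushed backwards through the layers to adjust the weights. It’s a didactic illustration only, not Hinton and colleagues’ original formulation and nothing like a modern framework.

```python
# A toy, self-contained illustration of backpropagation: a two-layer network
# learning XOR. Didactic sketch only; not the 1986 formulation, not a framework.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                        # learning rate

for step in range(5000):
    # forward pass: compute the network's prediction
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)         # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)          # error pushed back to hidden layer

    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```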

This November 26, 2024 news item on ScienceDaily, which is a little repetitive, considers interdisciplinarity in relation to the 2024 Nobel prizes,

In 2024, the Nobel Prize in physics was awarded to John Hopfield and Geoffrey Hinton for their foundational work in artificial intelligence (AI), and the Nobel Prize in chemistry went to David Baker, Demis Hassabis, and John Jumper for using AI to solve the protein-folding problem, a 50-year grand challenge problem in science.

A new article, written by researchers at Carnegie Mellon University and Calculation Consulting, examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence. The article is published in Patterns.

“With AI being recognized in connections to both physics and chemistry, practitioners of machine learning may wonder how these sciences relate to AI and how these awards might influence their work,” explained Ganesh Mani, Professor of Innovation Practice and Director of Collaborative AI at Carnegie Mellon’s Tepper School of Business, who coauthored the article. “As we move forward, it is crucial to recognize the convergence of different approaches in shaping modern AI systems based on generative AI.”

A November 25, 2024 Carnegie Mellon University (CMU) news release, which originated the news item, describes the paper,

In their article, the authors explore the historical development of neural networks. By examining the history of AI development, they contend, we can understand more thoroughly the connections among computer science, theoretical chemistry, theoretical physics, and applied mathematics. The historical perspective illuminates how foundational discoveries and inventions across these disciplines have enabled modern machine learning with artificial neural networks. 

Then they turn to key breakthroughs and challenges in this field, starting with Hopfield’s work, and go on to explain how engineering has at times preceded scientific understanding, as is the case with the work of Jumper and Hassabis.

The authors conclude with a call to action, suggesting that the rapid progress of AI across diverse sectors presents both unprecedented opportunities and significant challenges. To bridge the gap between hype and tangible development, they say, a new generation of interdisciplinary thinkers must be cultivated.

These “modern-day Leonardo da Vincis,” as the authors call them, will be crucial in developing practical learning theories that can be applied immediately by engineers, propelling the field toward the ambitious goal of artificial general intelligence.

This calls for a paradigm shift in how scientific inquiry and problem solving are approached, say the authors, one that embraces holistic, cross-disciplinary collaboration and learns from nature to understand nature. By breaking down silos between fields and fostering a culture of intellectual curiosity that spans multiple domains, innovative solutions can be identified to complex global challenges like climate change. Through this synthesis of diverse knowledge and perspectives, catalyzed by AI, meaningful progress can be made and the field can realize the full potential of technological aspirations.

“This interdisciplinary approach is not just beneficial but essential for addressing the many complex challenges that lie ahead,” suggests Charles Martin, Principal Consultant at Calculation Consulting, who coauthored the article. “We need to harness the momentum of current advancements while remaining grounded in practical realities.”

The authors acknowledge the contributions of Scott E. Fahlman, Professor Emeritus in Carnegie Mellon’s School of Computer Science.

Here’s a link to and a citation for the paper,

The recent Physics and Chemistry Nobel Prizes, AI, and the convergence of knowledge fields by Charles H. Martin, Ganesh Mani. Patterns, 2024 DOI: 10.1016/j.patter.2024.101099 Published online November 25, 2024 Copyright: © 2024 The Author(s). Published by Elsevier Inc.

This paper is open access under a Creative Commons Attribution (CC BY 4.0) license.

A scientific enthusiast: “I was a beta tester for the Nobel prize-winning AlphaFold AI”

From an October 11, 2024 essay by Rivka Isaacson (Professor of Molecular Biophysics, King’s College London) for The Conversation, Note: Links have been removed,

The deep learning machine AlphaFold, which was created by Google’s AI research lab DeepMind, is already transforming our understanding of the molecular biology that underpins health and disease.

One half of the 2024 Nobel prize in chemistry went to David Baker from the University of Washington in the US, with the other half jointly awarded to Demis Hassabis and John M. Jumper, both from London-based Google DeepMind.

If you haven’t heard of AlphaFold, it may be difficult to appreciate how important it is becoming to researchers. But as a beta tester for the software, I got to see first-hand how this technology can reveal the molecular structures of different proteins in minutes. It would take researchers months or even years to unpick these structures in laboratory experiments.

This technology could pave the way for revolutionary new treatments and drugs. But first, it’s important to understand what AlphaFold does.

Proteins are produced as a series of molecular “beads”, created from a selection of the human body’s 20 different amino acids. These beads form a long chain that folds up into a mechanical shape that is crucial for the protein’s function.

Their sequence is determined by DNA. And while DNA research means we know the order of the beads that build most proteins, it’s always been a challenge to predict how the chain folds up into each “3D machine”.
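As an aside, the easy half of the problem, reading the order of the beads off the DNA, can be written down in a few lines of Python; it is the folding step that needed AlphaFold. A toy sketch of mine (the codon table is deliberately abbreviated and the DNA string is made up):

# Minimal sketch: reading a protein's "bead" sequence from DNA, three bases at a time.
# A real genetic-code table has 64 entries; this one is trimmed for illustration.
CODON_TABLE = {
    "ATG": "M",                     # methionine (start)
    "TTT": "F", "GCT": "A", "GAA": "E", "AAA": "K", "TGG": "W",
    "TAA": "*",                     # * marks a stop codon
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):            # one codon = three DNA bases
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "*":                      # a stop codon ends the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTTTGCTGAAAAATGGTAA"))          # -> MFAEKW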

These protein structures underpin all of biology. Scientists study them in the same way you might take a clock apart to understand how it works. Comprehend the parts and put together the whole: it’s the same with the human body.

Proteins are tiny, with a huge number located inside each of our 30 trillion cells. This meant that, for decades, the only way to find out their shape was through laborious experimental methods – studies that could take years.

Throughout my career I, along with many other scientists, have been engaged in such pursuits. Every time we solve a protein structure, we deposit it in a global database called the Protein Data Bank, which is free for anyone to use.

AlphaFold was trained on these structures, the majority of which were found using X-ray crystallography. For this technique, proteins are tested under thousands of different chemical states, with variations in temperature, density and pH. Researchers use a microscope to identify the conditions under which each protein lines up in a particular formation. These are then shot with X-rays to work out the spatial arrangement of all the atoms in that protein.
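To give a concrete sense of what that global database looks like, here is a small sketch of mine that fetches one deposited structure from the Protein Data Bank and reads a few of the atom coordinates that AlphaFold-style models are trained on (the entry ID is an arbitrary example and the script needs internet access):

import urllib.request

# PDB entries are plain text files; atom coordinates sit on lines starting with "ATOM".
pdb_id = "1TUP"                                   # an arbitrary example entry (a p53 structure)
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"

with urllib.request.urlopen(url) as response:
    lines = response.read().decode("utf-8").splitlines()

atoms = [line for line in lines if line.startswith("ATOM")]
print(f"{pdb_id}: {len(atoms)} atom records")

# In the PDB format, the x, y, z coordinates occupy fixed columns 31-54
x = float(atoms[0][30:38])
y = float(atoms[0][38:46])
z = float(atoms[0][46:54])
print("first atom at", x, y, z)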

Addictive experience

In March 2024, researchers at DeepMind approached me to beta test AlphaFold3, the latest incarnation of the software, which was close to release at the time.

I’ve never been a gamer but I got a taste of the addictive experience as, once I got access, all I wanted to do was spend hours trying out molecular combinations. As well as lightning speed, this new version introduced the option to include bigger and more varied molecules, including DNA and metals, and the opportunity to modify amino acids to mimic chemical signalling in cells.

Understanding the moving parts and dynamics of proteins is the next frontier, now that we can predict static protein shapes with AlphaFold. Proteins come in a huge variety of shapes and sizes. They can be rigid or flexible, or made of neatly structured units connected by bendy loops.

You can read Isaacson’s entire October 11, 2024 essay on The Conversation or in an October 14, 2024 news item on phys.org.

Huge leap forward in computing efficiency with Indian Institute of Science’s (IISc) neuromorphic (brainlike) platform

This is pretty thrilling news in a September 11, 2024 Indian Institute of Science (IISc) press release (also on EurekAlert), Note: A link has been removed,

In a landmark advancement, researchers at the Indian Institute of Science (IISc) have developed a brain-inspired analog computing platform capable of storing and processing data in an astonishing 16,500 conductance states within a molecular film. Published today in the journal Nature, this breakthrough represents a huge step forward over traditional digital computers in which data storage and processing are limited to just two states. 

Such a platform could potentially bring complex AI tasks, like training Large Language Models (LLMs), to personal devices like laptops and smartphones, thus taking us closer to democratising the development of AI tools. These developments are currently restricted to resource-heavy data centres, due to a lack of energy-efficient hardware. With silicon electronics nearing saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is also becoming crucial.

“Neuromorphic computing has had its fair share of unsolved challenges for over a decade,” explains Sreetosh Goswami, Assistant Professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team. “With this discovery, we have almost nailed the perfect system – a rare feat.”

The fundamental operation underlying most AI algorithms is quite basic – matrix multiplication, a concept taught in high school maths. But in digital computers, these calculations hog a lot of energy. The platform developed by the IISc team drastically cuts down both the time and energy involved, making these calculations a lot faster and easier.
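To make the “matrix multiplication” point concrete, here is a small illustration of mine: a single fully connected neural-network layer is nothing more than one matrix multiplication, and on a digital chip every one of those multiply-accumulate operations shuttles data between memory and processor, which is where much of the energy goes. An analog in-memory platform like the one described here aims to do the whole multiplication in place.

import numpy as np

rng = np.random.default_rng(1)

# One fully connected layer of a neural network is just a matrix multiplication:
# every output value is a weighted sum of every input value.
inputs = rng.random(512)             # incoming activations
weights = rng.random((512, 256))     # learned parameters

outputs = inputs @ weights           # 512 x 256 = 131,072 multiply-accumulate operations

print(outputs.shape)                 # (256,)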

The molecular system at the heart of the platform was designed by Sreebrata Goswami, Visiting Professor at CeNSE. As molecules and ions wiggle and move within a material film, they create countless unique memory states, many of which have been inaccessible so far. Most digital devices are only able to access two states (high and low conductance), without being able to tap into the infinite number of intermediate states possible.

By using precisely timed voltage pulses, the IISc team found a way to effectively trace a much larger number of molecular movements, and map each of these to a distinct electrical signal, forming an extensive “molecular diary” of different states. “This project brought together the precision of electrical engineering with the creativity of chemistry, letting us control molecular kinetics very precisely inside an electronic circuit powered by nanosecond voltage pulses,” explains Sreebrata Goswami.

Tapping into these tiny molecular changes allowed the team to create a highly precise and efficient neuromorphic accelerator, which can store and process data within the same location, similar to the human brain. Such accelerators can be seamlessly integrated with silicon circuits to boost their performance and energy efficiency. 
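A rough way to see why thousands of conductance states matter: if each weight of a neural network is stored as the conductance of one molecular device, the number of available states sets how finely the weights can be represented. A back-of-the-envelope sketch of mine (the numbers are purely illustrative):

import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=10_000)     # pretend these are trained neural-network weights

def quantize(w, levels):
    # Map each weight onto one of `levels` evenly spaced conductance states
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

# Compare a conventional two-state (binary) memory with a ~14-bit device
for levels in (2, 16_384):
    err = np.abs(weights - quantize(weights, levels)).mean()
    print(f"{levels:>6} states: mean absolute quantization error = {err:.6f}")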

A key challenge that the team faced was characterising the various conductance states, which proved impossible using existing equipment. The team designed a custom circuit board that could measure voltages as tiny as a millionth of a volt, to pinpoint these individual states with unprecedented accuracy.

The team also turned this scientific discovery into a technological feat. They were able to recreate NASA’s iconic “Pillars of Creation” image from the James Webb Space Telescope data – originally created by a supercomputer – using just a tabletop computer. They were also able to do this at a fraction of the time and energy that traditional computers would need.

The team includes several students and research fellows at IISc. Deepak Sharma performed the circuit and system design and electrical characterisation, Santi Prasad Rath handled synthesis and fabrication, Bidyabhusan Kundu tackled the mathematical modelling, and Harivignesh S crafted bio-inspired neuronal response behaviour. The team also collaborated with Stanley Williams [also known as R. Stanley Williams], Professor at Texas A&M University and Damien Thompson, Professor at the University of Limerick. 

The researchers believe that this breakthrough could be one of India’s biggest leaps in AI hardware, putting the country on the map of global technology innovation. Navakanta Bhat, Professor at CeNSE and an expert in silicon electronics, led the circuit and system design in this project. “What stands out is how we have transformed complex physics and chemistry understanding into groundbreaking technology for AI hardware,” he explains. “In the context of the India Semiconductor Mission, this development could be a game-changer, revolutionising industrial, consumer and strategic applications. The national importance of such research cannot be overstated.”

With support from the Ministry of Electronics and Information Technology, the IISc team is now focused on developing a fully indigenous integrated neuromorphic chip. “This is a completely home-grown effort, from materials to circuits and systems,” emphasises Sreetosh Goswami. “We are well on our way to translating this technology into a system-on-a-chip.”  

Caption: Using their AI accelerator, the team recreated NASA’s iconic “Pillars of Creation” image from the James Webb Space Telescope data on a simple tabletop computer – achieving this in a fraction of the time and energy required by traditional systems. Credit: CeNSE, IISc

Here’s a link to and a citation for the paper,

Linear symmetric self-selecting 14-bit kinetic molecular memristors by Deepak Sharma, Santi Prasad Rath, Bidyabhusan Kundu, Anil Korkmaz, Harivignesh S, Damien Thompson, Navakanta Bhat, Sreebrata Goswami, R. Stanley Williams & Sreetosh Goswami. Nature volume 633, pages 560–566 (2024) DOI: https://doi.org/10.1038/s41586-024-07902-2 Published online: 11 September 2024 Issue Date: 19 September 2024

This paper is behind a paywall.

How do memristors retain information without a power source? A mystery solved

A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors, without a power source, can retain information,

Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan led study recently published in Matter.

A September 11, 2024 University of Michigan press release (also on EurekAlert but published September 10, 2024), which originated the news item, delves further into the research,

Up to this point, explanations have not fully grasped how memristors retain information without a power source, known as nonvolatile memory, because models and experiments do not match up.

“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.

To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.

The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament forms a tantalum ion bridge passing through the insulator to the electrodes, which allows electricity to flow, putting the cell in a low resistance state representing a “1” in binary code. If a different voltage is applied, the filament is dissolved as returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and returning to a high resistance state, representing a binary code of “0”. 
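For readers who find the filament picture easier to follow in code, here is a toy state machine of mine for a filament-type cell; the threshold voltages and the behaviour are illustrative only, not measured values from the study.

class ToyRRAMCell:
    """Toy model of the filamentary switching described above (illustrative numbers only)."""

    def __init__(self):
        self.state = "HRS"            # high-resistance state, binary "0"

    def apply_voltage(self, volts):
        if volts >= 1.2 and self.state == "HRS":
            self.state = "LRS"        # SET: a conductive filament forms, the cell stores "1"
        elif volts <= -1.0 and self.state == "LRS":
            self.state = "HRS"        # RESET: oxygen returns, the filament dissolves, "0"

    def read(self):
        # Reading uses a small voltage that does not disturb the stored state
        return 1 if self.state == "LRS" else 0


cell = ToyRRAMCell()
cell.apply_voltage(1.5)    # SET pulse
print(cell.read())         # 1
cell.apply_voltage(-1.3)   # RESET pulse
print(cell.read())         # 0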

It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models have neglected the role of phase separation. 

“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.

To test retention time, the researchers sped up experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical temperature of a computer chip.
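That temperature conversion is standard accelerated-aging arithmetic. Here is a back-of-the-envelope check of mine, assuming Arrhenius behaviour with an activation energy of about 1.3 eV (the press release does not state the value the authors actually used):

import math

K_B = 8.617e-5             # Boltzmann constant in eV/K
E_A = 1.34                 # assumed activation energy in eV (not given in the release)
T_STRESS = 250 + 273.15    # accelerated test temperature in kelvin
T_USE = 85 + 273.15        # typical chip operating temperature in kelvin

# Arrhenius acceleration factor: how much faster ageing proceeds at the stress temperature
accel = math.exp((E_A / K_B) * (1.0 / T_USE - 1.0 / T_STRESS))

hours_per_century = 100 * 365.25 * 24
print(f"acceleration factor: about {accel:,.0f}")
print(f"1 hour at 250 C corresponds to roughly {accel / hours_per_century:.1f} x 100 years at 85 C")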

Using the extremely high-resolution imaging of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers or 20 atoms wide, forming within the one micron wide RRAM device.  

“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said. 

The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.

Together, experimental results and models incorporating thermodynamic principles showed the formation and stability of conductive filaments depend on phase separation. 

The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration. 

Other applications include in-memory computing for more energy efficient AI applications or memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.

“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.

Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.

Here’s a link to and a citation for the paper,

Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Godrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024

This paper is behind a paywall.

Smart toys spying on children?

Caption: Twelve toys were examined in a study on smart toys and privacy. Credit: University of Basel / Céline Emch

An August 26, 2024 University of Basel press release (also on EurekAlert) describes research into smart toys and privacy issues for the children who play with them,

Toniebox, Tiptoi, and Tamagotchi are smart toys, offering interactive play through software and internet access. However, many of these toys raise privacy concerns, and some even collect extensive behavioral data about children, report researchers at the University of Basel, Switzerland.

The Toniebox and the figurines it comes with are especially popular with small children. They’re much easier to use than standard music players, allowing kids to turn on music and audio content themselves whenever they want. All a child has to do is place a plastic version of Peppa Pig onto the box and the story starts to play. When the child wants to stop the story, they simply remove the figurine. To rewind and fast-forward, the child can tilt the box to the left or right, respectively.

A lot of parents are probably thinking, “Fantastic concept!” Not so fast – the Toniebox records exactly when it is activated and by which figurine, when the child stops playback, and to which spot they rewind or fast-forward. Then it sends the data to the manufacturer.

The Toniebox is one of twelve smart toys studied by researchers headed by Professor Isabel Wagner of the Department of Mathematics and Computer Science at the University of Basel. These included well-known toys like the Tiptoi smart pen, the Edurino learning app, and the Tamagotchi virtual pet as well as the Toniebox. The researchers also studied less well-known products like the Moorebot, a mobile robot with a camera and microphone, and Kidibuzz, a smartphone for kids with parental controls.

One focus of the analysis was security: is data traffic encrypted, and how well? The researchers also investigated data protection, transparency (how easy it is for users to find out what data is collected), and compliance with the EU General Data Protection Regulation. Wagner and her colleagues are presenting their results at the Annual Privacy Forum (https://privacyforum.eu/) in early September [2024]. Springer publishes all the conference contributions in the series Privacy Technologies and Policy.

Collect data while offline, send it while online

Neither the Toniebox nor the Tiptoi pen comes out well with respect to security, as they do not securely encrypt data traffic. The two toys differ with regard to privacy concerns, though: while the Toniebox does collect data and send it to the manufacturer, the Tiptoi pen does not record how and when a child uses it.
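As a rough illustration of the kind of transport-security check involved (the Basel team inspected the devices’ own traffic, which is more involved than this), here is a short sketch of mine that asks whether a backend endpoint speaks modern TLS at all and with which cipher. The hostname is a placeholder, not the actual server used by any of these toys.

import socket
import ssl

HOST, PORT = "telemetry.example-toy-vendor.com", 443   # placeholder endpoint

context = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS version:", tls.version())
            print("cipher suite:", tls.cipher())
except (ssl.SSLError, OSError) as exc:
    print("no usable TLS on this endpoint:", exc)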

Even if the Toniebox were operated offline and only temporarily connected to the internet while downloading new audio content, the device could store collected data locally and transmit it to the manufacturer at the next opportunity, Wagner surmises. “In another toy we’re currently studying that integrates ChatGPT, we’re seeing that log data regularly vanishes.” The system is probably set up to delete the local copy of transmitted data to optimize internal storage use, Wagner says.

Companies often claim the collected data helps them optimize their devices. Yet it is far from obvious to users what purpose this data could serve. “The apps bundled with some of these toys demand entirely unnecessary access rights, such as to a smartphone’s location or microphone,” says the researcher. The ChatGPT toy still being analyzed also transmits a data stream that looks like audio. Perhaps the company wants to optimize speech recognition for children’s voices, the Professor of Cyber Security speculates.

A data protection label

“Children’s privacy requires special protection,” emphasizes Julika Feldbusch, first author of the study. She argues that toy manufacturers should place greater weight on privacy and on the security of their products than they currently do in light of their young target audience.

The researchers recommend that compliance with security and data protection standards be identified by a label on the packaging, similar to nutritional information on food items. Currently, it’s too difficult for parents to assess the security risks that smart toys pose to their children.

“We’re already seeing signs of a two-tier society when it comes to privacy protection for children,” says Feldbusch. “Well-informed parents engage with the issue and can choose toys that do not create behavioral profiles of their children. But many lack the technical knowledge or don’t have time to think about this stuff in detail.”

You could argue that individual children probably won’t experience negative consequences due to toy manufacturers creating profiles of them, says Wagner. “But nobody really knows that for sure. For example, constant surveillance can have negative effects on personal development.”

Here’s a link to and a citation for the paper,

No Transparency for Smart Toys by Julika Feldbusch, Valentyna Pavliv, Nima Akbari & Isabel Wagner. Privacy Technologies and Policy, conference paper (part of the Annual Privacy Forum series: APF 2024; part of the book series Lecture Notes in Computer Science [LNCS, volume 14831]) First Online: 01 August 2024 pp 203–227

This paper is behind a paywall.

From AI to Ancient Greece; the 2024-25 theatre season at Concordia University (Montréal, Québec)

An October 30, 2024 Concordia University news release by Vanessa Hauguel announces the upcoming theatre season, which features a focus on how current technology and historical narratives intersect, Note: Links have been removed,

The Concordia Department of Theatre recently announced its 2024-25 season, featuring a diverse lineup of scripted and devised works. The program delves into themes relevant to today’s world, from artificial intelligence (AI) and deepfakes to timeless human experiences and societal change.

Two upcoming productions highlight the department’s wide range of creative approaches. The first is Concord Floral by Jordan Tannahill, directed by Emma Tibaldo. The second is a devised adaptation of La vida es sueño (Life is a Dream), based on Pedro Calderón de la Barca’s classic play, directed by Peter Farbridge.

While these two productions kick off the season, additional performances are planned throughout the year until April 2025, continuing the department’s exploration of contemporary and classic themes. Directors Farbridge and Tibaldo, as well as this season’s artistic producer, Noah Drew, share the creative vision behind the shows and the thematic connections between them.

Modern ghost story

Concord Floral, by Canadian playwright Tannahill, is a modern ghost story set in an abandoned greenhouse where a group of teenagers face a buried secret. Directed by Tibaldo, a Concordia theatre graduate, 99, and artist-in-residence, it incorporates cutting-edge technology to navigate themes of guilt, adolescence and the weight of collective silence.

“Concord Floral is a play that sticks with you,” Tibaldo explains. “It speaks to growing up, discovering yourself and grappling with your accountability to others. The haunting or ‘plague’ in the play is represented through movement, lighting and sound, creating a visceral embodiment of guilt and regret.”

The play draws on The Decameron as a point of reference, adding a sense of timelessness to the teenage experience. “During our teen years, we often react or make impulsive decisions, as we’re discovering or aiming to break boundaries, and sometimes they come with lasting consequences,” Tibaldo says. “This play will resonate strongly with many, as it captures that intense, confusing period of early adulthood.”

La vida es sueño: mixing AI, deepfakes & philosophy

Meanwhile, La vida es sueño offers a reimagining of Calderón de la Barca’s work, making allusions to contemporary issues like AI deepfakes. Farbridge, MA 22, explores the philosophical themes of illusion and reality in this adaptation, examining how modern technology manipulates perception.

“At the heart of the play is the idea that our lives are shaped by false narratives, a timeless concept that feels increasingly relevant in today’s world,” Farbridge says.

“Our adaptation looks at how political systems manipulate truth on a massive scale. And the deeper question we’re asking is, if belief in what we see and hear in online media collapses, where will we land?”

Farbridge’s production will use a combination of video screens, shadow-play and physical performance to explore these themes. “We’re experimenting with form and trying to find new ways of engaging with the audience. It’s an exciting process, and unnerving too, as we won’t know the full impact of it until the public is in the theatre with us.”

A season of learning and innovating

As this season’s artistic producer, Drew sees the productions as essential learning experiences for students. “A big part of students’ education has to come from ‘stage time’ — those moments when a live audience is experiencing their work,” the associate professor says.

“These two productions offer a chance to engage with classic stories radically reinvented —Concord Floral reinterprets The Decameron, while La vida es sueño rethinks a Spanish Golden Age play. I hope it gives students the opportunity to see how historical narratives can connect with today’s issues, and grasp a deeper, more personal understanding of how history loops and cycles.”

Drew also points out the importance of technology in both productions.

“Lighting, sound and video are used all the time in many forms of art and entertainment media. What’s special about their use in theatre is that audiences get to see them in a real three-dimensional space interacting with our species’ original ‘technology’ — the human body. This liveness and immediacy can create almost-hallucinatory images that make audiences rub their eyes and wonder if the haunting moments in Concord Floral or the manipulations of truth in La vida es sueño are illusions or are really happening.”

Reflecting on the broader significance of theatre, Drew believes that storytelling plays a vital role in addressing the challenges of today’s rapidly changing world.

“We live in a time of war, climate crises, political polarization, flawed AI, and many forms of injustice,” he says. “Theatre can help us step outside of our routines, wake up, and yearn for more. It’s a way to make sense of a complicated world and spark inspiration.”

La vida es sueño (Life is a Dream) runs from November 14 to 16 [2024] in room 240 of the Molson (MB) Building, 1450 Guy Street.

Concord Floral runs November 27 to 30 [2024] at the Concordia Theatre in the Henry F. Hall (H) Building, 1455 Boulevard De Maisonneuve West.

Should you be in Montréal and able to attend the performances, you can find more details via Concordia University’s PUBLIC PERFORMANCES 2024-25 webpage.

Nominees for new SETI ‘Art and AI’ Artist in Residency (AIR) program announced

Not exactly an art/science (or sciart) story; let’s call it an art/technology (or techno art) story. The SETI (Search for Extraterrestrial Intelligence) Institute issued an October 22, 2024 news release (also on EurekAlert but published October 23, 2024) announcing the six nominees for SETI’s new artist in residency (AIR) program ‘Algorithmic Imaginings’,

The SETI Artist in Residency (AIR) program announced Algorithmic Imaginings, a new residency that explores how AI technologies affect science and society. The residency focuses on creative research topics such as imaginary life, human-AI collaboration, AI futures, posthumanism, AI and consciousness, and the ethics of AI data. It also connects with current SETI Institute research, including exoplanet studies, astrobiology, signal detection, and advanced computing. The two-year program offers $30,000 in funding and an exhibition at the ZKM | Center for Art and Media in Karlsruhe, Germany.

“AI is on everyone’s mind right now, be it ChatGPT4, text-to-video generators such as Sora, and discussions surrounding fake news and copyright,” said Bettina Forget, Director of the AIR program. “AI is a phenomenal tool, but it also comes with opportunities and concerns that should be addressed. This residency allows artists working at the intersection of art and technology to explore new avenues of thinking and connect them to SETI Institute research.”

Internationally recognized media art curator Zhang Ga, SETI AIR program Director Bettina Forget, and SETI AIR program Founder and Senior Advisor Charles Lindsay lead the SETI AIR Algorithmic Imaginings residency. Andrew Siemion, the SETI Institute’s Bernard M. Oliver Chair for SETI Research, and AI researcher Robert Alvarez, who collaborates with the SETI Institute as a mentor for its Frontier Development Lab program, bring their science and technology expertise to this residency.

The residency’s team of advisors selected six outstanding media artists and invited them to submit a project proposal for the SETI AIR Algorithmic Imaginings residency.

“These artists are notable voices with a solid track record of critically and inventively confronting the pressing issues raised by a pervasively technological world,” said Zhang Ga.

“SETI AIR is uniquely poised to participate in the AI zeitgeist that is exploding in San Francisco and Silicon Valley,” said Charles Lindsay. “We will support the most innovative artists of our time. It is time. Now.”

The SETI Institute will announce the winning artist later this fall.

The six nominees of the Art and AI residency are:

Tega Brain
Tega Brain’s work examines ecology, data, automation, and infrastructure. She has created projects such as digital networks controlled by environmental phenomena, schemes for obfuscating personal data, and a wildly popular online smell-based dating service.

Dominique Gonzalez-Foerster
An experimental artist based in Paris, Dominique Gonzalez-Foerster explores the different modalities of sensory and cognitive relationships between bodies and spaces, real or fictitious, up to the point of questioning the distance between organic and inorganic life.

Laurent Grasso
French-born artist Laurent Grasso has developed a fascination with the visual possibilities related to the science of electromagnetic energy, radio waves, and naturally occurring phenomena.

HeHe (Helen Evans, Heiko Hansen)
HeHe is an artist duo consisting of Helen Evans (French, British) and Heiko Hansen (German), based in Le Havre, France. Their works are about the social, industrial, and ecological paradoxes found in today’s technological landscapes. Their practice explores the relationship between art, media, and the environment.

Terike Haapoja
Terike Haapoja is an interdisciplinary visual artist, writer, and researcher. Haapoja’s work investigates our world’s existential and political boundaries, specifically focusing on issues arising from the anthropocentric worldview of Western traditions. Animality, multispecies politics, cohabitation, time, loss, and repairing connections are recurring themes in Haapoja’s work.

Wang Yuyang
Wang Yuyang is a renowned contemporary Chinese artist teaching at the Central Academy of Fine Arts. Focused on techno-art, his work explores the relationships between technology and art, nature and artificiality, and material and immaterial through an interdisciplinary and multimedia approach.

About the SETI Institute

Founded in 1984, the SETI Institute is a non-profit, multi-disciplinary research and education organization whose mission is to lead humanity’s quest to understand the origins and prevalence of life and intelligence in the Universe and to share that knowledge with the world. Our research encompasses the physical and biological sciences and leverages expertise in data analytics, machine learning and advanced signal detection technologies. The SETI Institute is a distinguished research partner for industry, academia and government agencies, including NASA and NSF.

Caption: The six nominees for the SETI Institute’s Algorithmic Imaginings residency. Credit: SETI Institute [top row, left to right: Dominique Gonzalez-Foerster; HeHe (Helen Evans, Heiko Hansen); Laurent Grasso; bottom row, left to right: Tega Brain; Terike Haapoja; and Wang Yuyang]

Good luck to the artists.