
DeepSeek, a Chinese rival to OpenAI and other US AI companies

There’s been quite the kerfuffle over DeepSeek during the last few days. This January 27, 2025 article by Alexandra Mae Jones for the Canadian Broadcasting Corporation (CBC) news online was my introduction to DeepSeek AI, Note: A link has been removed,

There’s a new player in AI on the world stage: DeepSeek, a Chinese startup that’s throwing tech valuations into chaos and challenging U.S. dominance in the field with an open-source model that they say they developed for a fraction of the cost of competitors.

DeepSeek’s free AI assistant — which by Monday [January 27, 2025] had overtaken rival ChatGPT to become the top-rated free application on Apple’s App Store in the United States — offers the prospect of a viable, cheaper AI alternative, raising questions on the heavy spending by U.S. companies such as Apple and Microsoft, amid a growing investor push for returns.

U.S. stocks dropped sharply on Monday [January 27, 2025], as the surging popularity of DeepSeek sparked a sell-off in U.S. chipmakers.

“[DeepSeek] performs as well as the leading models in Silicon Valley and in some cases, according to their claims, even better,” Sheldon Fernandez, co-founder of DarwinAI, told CBC News. “But they did it with a fractional amount of the resources is really what is turning heads in our industry.”

What is DeepSeek?

Little is known about the small Hangzhou startup behind DeepSeek, which was founded out of a hedge fund in 2023, but largely develops open-source AI models. 

Its researchers wrote in a paper last month that the DeepSeek-V3 model, launched on Jan. 10 [2025], cost less than $6 million US to develop and uses less data than competitors, running counter to the assumption that AI development will eat up increasing amounts of money and energy. 

Some analysts are skeptical about DeepSeek’s $6 million claim, pointing out that this figure only covers computing power. But Fernandez said that even if you triple DeepSeek’s cost estimates, it would still cost significantly less than its competitors. 

The open source release of DeepSeek-R1, which came out on Jan. 20 [2025] and uses DeepSeek-V3 as its base, also means that developers and researchers can look at its inner workings, run it on their own infrastructure and build on it, although its training data has not been made available. 

“Instead of paying OpenAI $20 a month or $200 a month for the latest advanced versions of these models, [people] can really get these types of features for free. And so it really upends a lot of the business model that a lot of these companies were relying on to justify their very high valuations.”

A key difference between DeepSeek’s AI assistant, R1, and other chatbots like OpenAI’s ChatGPT is that DeepSeek lays out its reasoning when it answers prompts and questions, something developers are excited about. 

“The dealbreaker is the access to the raw thinking steps,” Elvis Saravia, an AI researcher and co-founder of the U.K.-based AI consulting firm DAIR.AI, wrote on X, adding that the response quality was “comparable” to OpenAI’s latest reasoning model, o1.

U.S. dominance in AI challenged

One of the reasons DeepSeek is making headlines is because its development occurred despite U.S. actions to keep Americans at the top of AI development. In 2022, the U.S. curbed exports of computer chips to China, hampering their advanced supercomputing development.

The latest AI models from DeepSeek are widely seen to be competitive with those of OpenAI and Meta, which rely on high-end computer chips and extensive computing power.

Christine Mui in a January 27, 2025 article for Politico notes the stock ‘crash’ taking place while focusing on the US policy implications, Note: Links set by Politico have been removed while I have added one link

A little-known Chinese artificial intelligence startup shook the tech world this weekend by releasing an OpenAI-like assistant, which shot to the No.1 ranking on Apple’s app store and caused American tech giants’ stocks to tumble.

From Washington’s perspective, the news raised an immediate policy alarm: It happened despite consistent, bipartisan efforts to stifle AI progress in China.

In tech terms, what freaked everyone out about DeepSeek’s R1 model is that it replicated — and in some cases, surpassed — the performance of OpenAI’s cutting-edge o1 product across a host of performance benchmarks, at a tiny fraction of the cost.

The business takeaway was straightforward. DeepSeek’s success shows that American companies might not need to spend nearly as much as expected to develop AI models. That both intrigues and worries investors and tech leaders.

The policy implications, though, are more complex. Washington’s rampant anxiety about beating China has led to policies that the industry has very mixed feelings about.

On one hand, most tech firms hate the export controls that stop them from selling as much to the world’s second-largest economy, and force them to develop new products if they want to do business with China. If DeepSeek shows those rules are pointless, many would be delighted to see them go away.

On the other hand, anti-China, protectionist sentiment has encouraged Washington to embrace a whole host of industry wishlist items, from a lighter-touch approach to AI rules to streamlined permitting for related construction projects. Does DeepSeek mean those, too, are failing? Or does it trigger a doubling-down?

DeepSeek’s success truly seems to challenge the belief that the future of American AI demands ever more chips and power. That complicates Trump’s interest in rapidly building out that kind of infrastructure in the U.S.

Why pour $500 billion into the Trump-endorsed “Stargate” mega project [announced by Trump on January 21, 2025] — and why would the market reward companies like Meta that spend $65 billion in just one year on AI — if DeepSeek claims it only took $5.6 million and second-tier Nvidia chips to train one of its latest models? (U.S. industry insiders dispute the startup’s figures and claim they don’t tell the full story, but even at 100 times that cost, it would be a bargain.)

Tech companies, of course, love the recent bloom of federal support, and it’s unlikely they’ll drop their push for more federal investment to match anytime soon. Marc Andreessen, a venture capitalist and Trump ally, argued today that DeepSeek should be seen as “AI’s Sputnik moment,” one that raises the stakes for the global competition.

That would strengthen the case that some American AI companies have been pressing for the new administration to invest government resources into AI infrastructure (OpenAI), tighten restrictions on China (Anthropic) and ease up on regulations to ensure their developers build “artificial general intelligence” before their geopolitical rivals.

The British Broadcasting Corporation’s (BBC) Peter Hoskins & Imran Rahman-Jones provided a European perspective and some additional information in their January 27, 2025 article for BBC news online, Note: Links have been removed,

US tech giant Nvidia lost over a sixth of its value after the surging popularity of a Chinese artificial intelligence (AI) app spooked investors in the US and Europe.

DeepSeek, a Chinese AI chatbot reportedly made at a fraction of the cost of its rivals, launched last week but has already become the most downloaded free app in the US.

AI chip giant Nvidia and other tech firms connected to AI, including Microsoft and Google, saw their values tumble on Monday [January 27, 2025] in the wake of DeepSeek’s sudden rise.

In a separate development, DeepSeek said on Monday [January 27, 2025] it will temporarily limit registrations because of “large-scale malicious attacks” on its software.

The DeepSeek chatbot was reportedly developed for a fraction of the cost of its rivals, raising questions about the future of America’s AI dominance and the scale of investments US firms are planning.

DeepSeek is powered by the open source DeepSeek-V3 model, which its researchers claim was trained for around $6m – significantly less than the billions spent by rivals.

But this claim has been disputed by others in AI.

The researchers say they use already existing technology, as well as open source code – software that can be used, modified or distributed by anybody free of charge.

DeepSeek’s emergence comes as the US is restricting the sale of the advanced chip technology that powers AI to China.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.

It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.

After DeepSeek-R1 was launched earlier this month, the company boasted of “performance on par with” one of OpenAI’s latest models when used for tasks such as maths, coding and natural language reasoning.

In Europe, Dutch chip equipment maker ASML ended Monday’s trading with its share price down by more than 7% while shares in Siemens Energy, which makes hardware related to AI, had plunged by a fifth.

“This idea of a low-cost Chinese version hasn’t necessarily been forefront, so it’s taken the market a little bit by surprise,” said Fiona Cincotta, senior market analyst at City Index.

“So, if you suddenly get this low-cost AI model, then that’s going to raise concerns over the profits of rivals, particularly given the amount that they’ve already invested in more expensive AI infrastructure.”

Singapore-based technology equity adviser Vey-Sern Ling told the BBC it could “potentially derail the investment case for the entire AI supply chain”.

Who founded DeepSeek?

The company was founded in 2023 by Liang Wenfeng in Hangzhou, a city in southeastern China.

The 40-year-old, an information and electronic engineering graduate, also founded the hedge fund that backed DeepSeek.

He reportedly built up a store of Nvidia A100 chips, now banned from export to China.

Experts believe this collection – which some estimates put at 50,000 – led him to launch DeepSeek, by pairing these chips with cheaper, lower-end ones that are still available to import.

Mr Liang was recently seen at a meeting between industry experts and the Chinese premier Li Qiang.

In a July 2024 interview with The China Academy, Mr Liang said he was surprised by the reaction to the previous version of his AI model.

“We didn’t expect pricing to be such a sensitive issue,” he said.

“We were simply following our own pace, calculating costs, and setting prices accordingly.”

A January 28, 2025 article by Daria Solovieva for salon.com covers much the same territory as the others and includes a few details about security issues,

The pace at which U.S. consumers have embraced DeepSeek is raising national security concerns similar to those surrounding TikTok, the social media platform that faces a ban unless it is sold to a non-Chinese company.

The U.S. Supreme Court this month upheld a federal law that requires TikTok’s sale. The Court sided with the U.S. government’s argument that the app can collect and track data on its 170 million American users. President Donald Trump has paused enforcement of the ban until April to try to negotiate a deal.

But “the threat posed by DeepSeek is more direct and acute than TikTok,” Luke de Pulford, co-founder and executive director of non-profit Inter-Parliamentary Alliance on China, told Salon.

DeepSeek is a fully Chinese company and is subject to Communist Party control, unlike TikTok which positions itself as independent from parent company ByteDance, he said. 

“DeepSeek logs your keystrokes, device data, location and so much other information and stores it all in China,” de Pulford said. “So you’ll never know if the Chinese state has been crunching your data to gain strategic advantage, and DeepSeek would be breaking the law if they told you.”  

I wonder if AI companies in other countries also log keystrokes, device data, and the like. Is it theoretically possible that one of those governments or their agencies could gain access to your data? The risk is obvious in China, but people in other countries may face the same issues.

Censorship: DeepSeek and ChatGPT

Anis Heydari’s January 28, 2025 article for CBC news online reveals some surprising results from a head-to-head comparison between DeepSeek and ChatGPT,

The Chinese-made AI chatbot DeepSeek may not always answer some questions about topics that are often censored by Beijing, according to tests run by CBC News and The Associated Press, and is providing different information than its U.S.-owned competitor ChatGPT.

The new, free chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test it. 

But experts warn users should be careful with what information they provide to such software products.

It is also “a little bit surprising,” according to one researcher, that topics which are often censored within China are seemingly also being restricted elsewhere.

“A lot of services will differentiate based on where the user is coming from when deciding to deploy censorship or not,” said Jeffrey Knockel, who researches software censorship and surveillance at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy.

“With this one, it just seems to be censoring everyone.”

Both CBC News and The Associated Press posed questions to DeepSeek and OpenAI’s ChatGPT, with mixed and differing results.

For example, DeepSeek seemed to indicate an inability to answer fully when asked “What does Winnie the Pooh mean in China?” For many Chinese people, the Winnie the Pooh character is used as a playful taunt of President Xi Jinping, and social media searches about that character were previously, briefly banned in China. 

DeepSeek said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.

Then, abruptly, it added the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens,” and that all online content is managed under Chinese laws and socialist core values, with the aim of protecting national security and social stability.

CBC News was unable to produce this response. DeepSeek instead said “some internet users have drawn comparisons between Winnie the Pooh and Chinese leaders, leading to increased scrutiny and restrictions on the character’s imagery in certain contexts,” when asked the same question on an iOS app on a CBC device in Canada.

Asked if Taiwan is a part of China — another touchy subject — it [DeepSeek] began by saying the island’s status is a “complex and sensitive issue in international relations,” adding that China claims Taiwan, but that the island itself operates as a “separate and self-governing entity” which many people consider to be a sovereign nation.

But as that answer was being typed out, for both CBC and the AP, it vanished and was replaced with: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

… Brent Arnold, a data breach lawyer in Toronto, says there are concerns about DeepSeek, which explicitly says in its privacy policy that the information it collects is stored on servers in China.

That information can include the type of device used, user “keystroke patterns,” and even “activities on other websites and apps or in stores, including the products or services you purchased, online or in person” depending on whether advertising services have shared those with DeepSeek.

“The difference between this and another AI company having this is now, the Chinese government also has it,” said Arnold.

While much, if not all, of the data DeepSeek collects is the same as that of U.S.-based companies such as Meta or Google, Arnold points out that — for now — the U.S. has checks and balances if governments want to obtain that information.

“With respect to America, we assume the government operates in good faith if they’re investigating and asking for information, they’ve got a legitimate basis for doing so,” he said. 

Right now, Arnold says it’s not accurate to compare Chinese and U.S. authorities in terms of their ability to take personal information. But that could change.

“I would say it’s a false equivalency now. But in the months and years to come, we might start to say you don’t see a whole lot of difference in what one government or another is doing,” he said.

Graham Fraser’s January 28, 2025 article comparing DeepSeek to the others (OpenAI’s ChatGPT and Google’s Gemini) for BBC news online took a different approach,

Writing Assistance

When you ask ChatGPT what the most popular reasons to use ChatGPT are, it says that assisting people to write is one of them.

From gathering and summarising information in a helpful format to even writing blog posts on a topic, ChatGPT has become an AI companion for many across different workplaces.

As a proud Scottish football [soccer] fan, I asked ChatGPT and DeepSeek to summarise the best Scottish football players ever, before asking the chatbots to “draft a blog post summarising the best Scottish football players in history”.

DeepSeek responded in seconds, with a top ten list – Kenny Dalglish of Liverpool and Celtic was number one. It helpfully summarised which position the players played in, their clubs, and a brief list of their achievements.

DeepSeek also detailed two non-Scottish players – Rangers legend Brian Laudrup, who is Danish, and Celtic hero Henrik Larsson. For the latter, it added “although Swedish, Larsson is often included in discussions of Scottish football legends due to his impact at Celtic”.

For its subsequent blog post, it did go into detail of Laudrup’s nationality before giving a succinct account of the careers of the players.

ChatGPT’s answer to the same question contained many of the same names, with “King Kenny” once again at the top of the list.

Its detailed blog post briefly and accurately went into the careers of all the players.

It concluded: “While the game has changed over the decades, the impact of these Scottish greats remains timeless.” Indeed.

For this fun test, DeepSeek was certainly comparable to its best-known US competitor.

Coding

Brainstorming ideas

Learning and research

Steaming ahead

The tasks I set the chatbots were simple but they point to something much more significant – the winner of the so-called AI race is far from decided.

For all the vast resources US firms have poured into the tech, their Chinese rival has shown their achievements can be emulated.

Reception from the science community

Days before the news outlets discovered DeepSeek, the company published a paper about its Large Language Models (LLMs) and its new chatbot on arXiv. Here’s a little more information,

DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

[over 100 authors are listed]

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
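The abstract credits large-scale reinforcement learning with much of R1-Zero’s reasoning ability. As a rough, hypothetical sketch of what a rule-based reward signal for such training can look like (the tag format, weights, and function names below are my assumptions for illustration, not code from the paper), one can score a completion on output format and answer accuracy:

```python
import re

# Hypothetical rule-based reward in the spirit of the paper's description:
# a format term (did the model wrap its reasoning and answer in tags?) plus
# an accuracy term (does the extracted answer match the reference?).
FORMAT_RE = re.compile(r"^<think>.*</think>\s*<answer>(.*)</answer>\s*$", re.DOTALL)

def rule_based_reward(completion: str, reference_answer: str) -> float:
    match = FORMAT_RE.match(completion.strip())
    if match is None:
        return 0.0                      # malformed output earns nothing
    reward = 0.5                        # format reward
    if match.group(1).strip() == reference_answer.strip():
        reward += 1.0                   # accuracy reward
    return reward

good = "<think>2 + 2 makes 4.</think><answer>4</answer>"
print(rule_based_reward(good, "4"))     # 1.5
print(rule_based_reward("4", "4"))      # 0.0 (no reasoning tags)
```

A real RL pipeline would feed such rewards into a policy-gradient update over many sampled completions; this sketch only shows the scoring step.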

Cite as: arXiv:2501.12948 [cs.CL]
(or arXiv:2501.12948v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2501.12948

Submission history

From: Wenfeng Liang [view email]
[v1] Wed, 22 Jan 2025 15:19:35 UTC (928 KB)

You can also find a PDF version of the paper here or another online version here at Hugging Face.

As for the science community’s response, the title of Elizabeth Gibney’s January 23, 2025 article “China’s cheap, open AI model DeepSeek thrills scientists” for Nature says it all, Note: Links have been removed,

A Chinese-built large language model called DeepSeek-R1 is thrilling scientists as an affordable and open rival to ‘reasoning’ models such as OpenAI’s o1.

These models generate responses step-by-step, in a process analogous to human reasoning. This makes them more adept than earlier language models at solving scientific problems and could make them useful in research. Initial tests of R1, released on 20 January, show that its performance on certain tasks in chemistry, mathematics and coding is on par with that of o1 — which wowed researchers when it was released by OpenAI in September.

“This is wild and totally unexpected,” Elvis Saravia, an AI researcher and co-founder of the UK-based AI consulting firm DAIR.AI, wrote on X.

R1 stands out for another reason. DeepSeek, the start-up in Hangzhou that built the model, has released it as ‘open-weight’, meaning that researchers can study and build on the algorithm. Published under an MIT licence, the model can be freely reused but is not considered fully open source, because its training data has not been made available.

“The openness of DeepSeek is quite remarkable,” says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. By comparison, o1 and other models built by OpenAI in San Francisco, California, including its latest effort o3 are “essentially black boxes”, he says.

DeepSeek hasn’t released the full cost of training R1, but it is charging people using its interface around one-thirtieth of what o1 costs to run. The firm has also created mini ‘distilled’ versions of R1 to allow researchers with limited computing power to play with the model. An “experiment that cost more than £300 with o1, cost less than $10 with R1,” says Krenn. “This is a dramatic difference which will certainly play a role in its future adoption.”
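Krenn’s figures line up with the roughly thirtyfold price gap mentioned earlier in the article; as a trivial sanity check (ignoring the quote’s mix of pounds and dollars, a simplifying assumption on my part):

```python
# Rough sanity check of the price gap quoted above. The quote mixes
# currencies (over £300 vs under $10); exchange rates are ignored here.
o1_experiment_cost = 300   # "more than £300 with o1"
r1_experiment_cost = 10    # "less than $10 with R1"

ratio = o1_experiment_cost / r1_experiment_cost
print(f"o1 cost roughly {ratio:.0f}x the R1 cost")  # roughly 30x, matching "one-thirtieth"
```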

The kerfuffle has died down for now.

AI and the 2024 Nobel prizes

Artificial intelligence made a splash when the 2024 Nobel Prize announcements were made, as it was a key factor in both the physics prize and the chemistry prize.

Where do physics, chemistry, and AI go from here?

I have a few speculative pieces about physics, chemistry, and AI. First off, we have Nello Cristianini’s (Professor of Artificial Intelligence at the University of Bath, England) October 10, 2024 essay for The Conversation, Note: Links have been removed,

The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.

We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.

We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.

The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.

There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.

The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.

Hopfield’s work is not particularly in use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. This is concerned with neural networks, a model of computing that mimics the human brain’s structure and function to process data. Backpropagation allows scientists to “train” enormous neural networks. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.
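For readers unfamiliar with the algorithm the essay describes, here is a minimal, purely illustrative backpropagation example: a tiny two-layer network trained on the XOR problem. Nothing in it comes from the laureates’ work; it only shows the forward pass, the layer-by-layer gradient propagation, and the weight updates.

```python
import numpy as np

# A tiny two-layer network trained by backpropagation on XOR, purely to
# illustrate the algorithm discussed above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error gradient layer by layer
    d_out = out - y                      # gradient at the output (sigmoid + cross-entropy)
    d_h = (d_out @ W2.T) * (1 - h**2)    # chain rule through tanh
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(0)
    W1 -= 0.1 * X.T @ d_h;   b1 -= 0.1 * d_h.sum(0)

# after training, the network typically reproduces the XOR truth table
print(np.round(out).ravel())
```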

Every two years, since 1994, scientists have been holding a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP).

For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.

Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.

This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as about the general attribution of a discovery to a field like physics.

We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.

This November 26, 2024 news item on ScienceDaily, which is a little repetitive, considers interdisciplinarity in relation to the 2024 Nobel prizes,

In 2024, the Nobel Prize in physics was awarded to John Hopfield and Geoffrey Hinton for their foundational work in artificial intelligence (AI), and the Nobel Prize in chemistry went to David Baker, Demis Hassabis, and John Jumper for using AI to solve the protein-folding problem, a 50-year grand challenge problem in science.

A new article, written by researchers at Carnegie Mellon University and Calculation Consulting, examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence. The article is published in Patterns.

“With AI being recognized in connections to both physics and chemistry, practitioners of machine learning may wonder how these sciences relate to AI and how these awards might influence their work,” explained Ganesh Mani, Professor of Innovation Practice and Director of Collaborative AI at Carnegie Mellon’s Tepper School of Business, who coauthored the article. “As we move forward, it is crucial to recognize the convergence of different approaches in shaping modern AI systems based on generative AI.”

A November 25, 2024 Carnegie Mellon University (CMU) news release, which originated the news item, describes the paper,

In their article, the authors explore the historical development of neural networks. By examining the history of AI development, they contend, we can understand more thoroughly the connections among computer science, theoretical chemistry, theoretical physics, and applied mathematics. The historical perspective illuminates how foundational discoveries and inventions across these disciplines have enabled modern machine learning with artificial neural networks. 

Then they turn to key breakthroughs and challenges in this field, starting with Hopfield’s work, and go on to explain how engineering has at times preceded scientific understanding, as is the case with the work of Jumper and Hassabis.

The authors conclude with a call to action, suggesting that the rapid progress of AI across diverse sectors presents both unprecedented opportunities and significant challenges. To bridge the gap between hype and tangible development, they say, a new generation of interdisciplinary thinkers must be cultivated.

These “modern-day Leonardo da Vincis,” as the authors call them, will be crucial in developing practical learning theories that can be applied immediately by engineers, propelling the field toward the ambitious goal of artificial general intelligence.

This calls for a paradigm shift in how scientific inquiry and problem solving are approached, say the authors, one that embraces holistic, cross-disciplinary collaboration and learns from nature to understand nature. By breaking down silos between fields and fostering a culture of intellectual curiosity that spans multiple domains, innovative solutions can be identified to complex global challenges like climate change. Through this synthesis of diverse knowledge and perspectives, catalyzed by AI, meaningful progress can be made and the field can realize the full potential of technological aspirations.

“This interdisciplinary approach is not just beneficial but essential for addressing the many complex challenges that lie ahead,” suggests Charles Martin, Principal Consultant at Calculation Consulting, who coauthored the article. “We need to harness the momentum of current advancements while remaining grounded in practical realities.”

The authors acknowledge the contributions of Scott E. Fahlman, Professor Emeritus in Carnegie Mellon’s School of Computer Science.

Here’s a link to and a citation for the paper,

The recent Physics and Chemistry Nobel Prizes, AI, and the convergence of knowledge fields by Charles H. Martin and Ganesh Mani. Patterns, 2024. DOI: 10.1016/j.patter.2024.101099. Published online November 25, 2024. © 2024 The Author(s). Published by Elsevier Inc.

This paper is open access under a Creative Commons Attribution (CC BY 4.0) license.

A scientific enthusiast: “I was a beta tester for the Nobel prize-winning AlphaFold AI”

From an October 11, 2024 essay by Rivka Isaacson (Professor of Molecular Biophysics, King’s College London) for The Conversation, Note: Links have been removed,

The deep learning machine AlphaFold, which was created by Google’s AI research lab DeepMind, is already transforming our understanding of the molecular biology that underpins health and disease.

One half of the 2024 Nobel prize in chemistry went to David Baker from the University of Washington in the US, with the other half jointly awarded to Demis Hassabis and John M. Jumper, both from London-based Google DeepMind.

If you haven’t heard of AlphaFold, it may be difficult to appreciate how important it is becoming to researchers. But as a beta tester for the software, I got to see first-hand how this technology can reveal the molecular structures of different proteins in minutes. It would take researchers months or even years to unpick these structures in laboratory experiments.

This technology could pave the way for revolutionary new treatments and drugs. But first, it’s important to understand what AlphaFold does.

Proteins are produced as a series of molecular “beads”, created from a selection of the human body’s 20 different amino acids. These beads form a long chain that folds up into a mechanical shape that is crucial for the protein’s function.

Their sequence is determined by DNA. And while DNA research means we know the order of the beads that build most proteins, it’s always been a challenge to predict how the chain folds up into each “3D machine”.

These protein structures underpin all of biology. Scientists study them in the same way you might take a clock apart to understand how it works. Comprehend the parts and put together the whole: it’s the same with the human body.

Proteins are tiny, with a huge number located inside each of our 30 trillion cells. This meant that, for decades, the only way to find out their shape was through laborious experimental methods – studies that could take years.

Throughout my career I, along with many other scientists, have been engaged in such pursuits. Every time we solve a protein structure, we deposit it in a global database called the Protein Data Bank, which is free for anyone to use.

AlphaFold was trained on these structures, the majority of which were found using X-ray crystallography. For this technique, proteins are tested under thousands of different chemical states, with variations in temperature, density and pH. Researchers use a microscope to identify the conditions under which each protein lines up in a particular formation. These are then shot with X-rays to work out the spatial arrangement of all the atoms in that protein.

Addictive experience

In March 2024, researchers at DeepMind approached me to beta test AlphaFold3, the latest incarnation of the software, which was close to release at the time.

I’ve never been a gamer but I got a taste of the addictive experience as, once I got access, all I wanted to do was spend hours trying out molecular combinations. As well as lightning speed, this new version introduced the option to include bigger and more varied molecules, including DNA and metals, and the opportunity to modify amino acids to mimic chemical signalling in cells.

Understanding the moving parts and dynamics of proteins is the next frontier, now that we can predict static protein shapes with AlphaFold. Proteins come in a huge variety of shapes and sizes. They can be rigid or flexible, or made of neatly structured units connected by bendy loops.

You can read Isaacson’s entire October 11, 2024 essay on The Conversation or in an October 14, 2024 news item on phys.org.

Geoffrey Hinton (University of Toronto) shares 2024 Nobel Prize for Physics with John J. Hopfield (Princeton University)

What an interesting choice the committee deciding on the 2024 Nobel Prize for Physics has made. Geoffrey Hinton has been mentioned here a number of times, most recently for his participation in one of the periodic AI (artificial intelligence) panics that pop up from time to time. For more about the latest one and Hinton’s participation, see my May 25, 2023 posting “Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!” and scroll down to ‘The panic’ subhead.

I have almost nothing about John J. Hopfield other than a tangential mention of the Hopfield neural network in a January 3, 2018 posting “Mott memristor.”

An October 8, 2024 Royal Swedish Academy of Sciences press release announces the winners of the 2024 Nobel Prize in Physics,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to

John J. Hopfield
Princeton University, NJ, USA

Geoffrey E. Hinton
University of Toronto, Canada

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
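The mechanism described above can be sketched in a few lines of Python. This is strictly an illustrative toy, not the laureate’s actual formulation: the stored “images” here are tiny six-pixel vectors of +1/−1 values, and the connections are set with a simple Hebbian rule.

```python
import numpy as np

# Two tiny "images" stored as flattened vectors of +1/-1 pixels.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1,  1, -1, -1, -1],
])

# Hebbian learning: strengthen the connection between two nodes that
# tend to be simultaneously active across the stored patterns.
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def energy(state):
    # The network's "energy", analogous to a spin system in physics.
    return -0.5 * state @ W @ state

def recall(state, steps=20):
    # Update nodes one at a time so the energy never increases,
    # settling into the nearest stored pattern.
    state = state.copy()
    for _ in range(steps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Feed in a distorted version of the first pattern (one pixel flipped).
noisy = patterns[0].copy()
noisy[0] *= -1
print(recall(noisy))  # recovers the stored first pattern
```

The update loop is exactly the “methodically works through the nodes” step in the press release: each flip can only lower the network’s energy, so the state slides downhill to the nearest saved image.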

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
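The Boltzmann machine is harder to compress into a toy, but a simplified relative gives the flavour: a small restricted Boltzmann machine trained with a mean-field version of contrastive divergence (a later Hinton training trick, not the original 1980s procedure). The data, network size, and learning rate below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy data: two kinds of binary patterns the machine should learn.
data = np.array([[1., 1., 0., 0.],
                 [1., 1., 0., 0.],
                 [0., 0., 1., 1.],
                 [0., 0., 1., 1.]])

n_visible, n_hidden = 4, 2
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Contrastive divergence (mean-field CD-1): compare statistics of the
# training examples with statistics of the machine's own one-step
# "reconstruction", and nudge the weights toward the data.
for _ in range(3000):
    h0 = sigmoid(data @ W)     # hidden activations given the data
    v1 = sigmoid(h0 @ W.T)     # the machine's reconstruction
    h1 = sigmoid(v1 @ W)
    W += 0.1 * (data.T @ h0 - v1.T @ h1) / len(data)

# After training, the machine reproduces patterns like those it was shown,
# the "create new examples of the type of pattern" behaviour in the release.
recon = sigmoid(sigmoid(data @ W) @ W.T)
print(recon.round(2))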

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

An October 8, 2024 University of Toronto news release by Rahul Kalvapalle provides more detail about Hinton’s work and history with the university.

Ben Edwards wrote an October 8, 2024 article for Ars Technica, which in addition to reiterating the announcement explores a ‘controversial’ element to the story, Note 1: I gather I’m not the only one who found the award of a physics prize to researchers in the field of computer science a little unusual, Note 2: Links have been removed,

Hopfield and Hinton’s research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.

The win is already turning heads on social media because it seems unusual that research in a computer science field like machine learning might win a Nobel Prize for physics. “And the 2024 Nobel Prize in Physics does not go to physics…” tweeted German physicist Sabine Hossenfelder this morning [October 8, 2024].

From the Nobel committee’s point of view, the award largely derives from the fact that the two men drew from statistical models used in physics and partly from recognizing the advancements in physics research that came from using the men’s neural network techniques as research tools.

Nobel committee chair Ellen Moons, a physicist at Karlstad University, Sweden, said during the announcement, “Artificial neural networks have been used to advance research across physics topics as diverse as particle physics, material science and astrophysics.”

For a comprehensive overview of both Nobel prize winners, Hinton and Hopfield, their work, and their stands vis à vis the dangers of AI, there’s an October 8, 2024 Associated Press article on phys.org.

‘Six’ degrees of Kevin Bacon gene

It must have been a lighthearted moment that led to this new gene being called “degrees of Kevin Bacon” (dokb). Here’s more about the gene and the research from a May 24, 2024 University of Toronto (U of T) news release by Chris Sasaki, Note: Links have been removed,

A team of researchers from the University of Toronto has identified a gene in fruit flies that regulates the types of connections between flies within their “social network.”

The researchers studied groups of two distinct strains of Drosophila melanogaster fruit flies and found that one strain showed different types or patterns of connections within their networks than the other strain.

The connectivity-associated gene in the first strain was then isolated. When it was swapped with the other strain, the flies exhibited the connectivity of the first strain.

The researchers named the gene “degrees of Kevin Bacon” (dokb), for the prolific Hollywood star of such films as Footloose and Apollo 13. Bacon’s wide-ranging connections to other actors are the subject of the parlour game called “The Six Degrees of Kevin Bacon,” which plays on the popular idea that any two people on Earth can be linked through six or fewer mutual acquaintances.

“There’s been a lot of research around whether social network structure is inherited, but that question has been poorly understood,” says Rebecca Rooke, a post-doctoral fellow in the department of ecology and evolutionary biology in the Faculty of Arts & Science and lead author of the paper, published in Nature Communications. “But what we’ve now done is find the gene and proven there is a genetic component.”

The work was carried out as part of Rooke’s PhD thesis in Professor Joel Levine’s laboratory at U of T Mississauga before he moved to the department of ecology and evolutionary biology, where he is currently chair.

“This gives us a genetic perspective on the structure of a social group,” says Levine. “This is amazing because it says something important about the structure of social interactions in general and about the species-specific structure of social networks.

“It’s exciting to be thinking about the relationship between genetics and the group in this way. It may be the first time we’ve been able to do this.”

The researchers measured the type of connection by observing and recording on video groups of a dozen male flies placed in a container. Using software previously developed by Levine and post-doctoral researcher Jon Schneider, the team tracked the distance between flies, their relative orientation and the time they spent in close proximity. Using these criteria as measures of interaction, the researchers calculated the type of connection or “betweenness centrality” of each group.

Rooke, Levine and their colleagues point out that individual organisms with high betweenness centrality within a social network can act as “gatekeepers” who play an important role in facilitating interactions within their group.
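For readers curious what “betweenness centrality” actually measures, here is a toy sketch in Python using Brandes’ standard algorithm. The “fly” network below is invented purely for illustration; the researchers used their own video-tracking software, not this code.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for betweenness centrality on an unweighted,
    undirected graph given as {node: [neighbours]}. Returns normalized
    scores: the fraction of shortest paths between other pairs of nodes
    that pass through each node."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # Breadth-first search from s, counting shortest paths.
        dist = {s: 0}
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        preds = {v: [] for v in graph}
        order = []
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate each node's share of the shortest paths, farthest first.
        delta = dict.fromkeys(graph, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    n = len(graph)
    # Undirected graph: each pair was counted twice, fold that into the norm.
    return {v: b / ((n - 1) * (n - 2)) for v, b in bc.items()}

# A hypothetical fly network: two tight clusters joined by one bridge fly.
flies = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "g"],
    "d": ["e", "f", "g"], "e": ["d", "f"], "f": ["d", "e"],
    "g": ["c", "d"],
}
bc = betweenness(flies)
gatekeeper = max(bc, key=bc.get)
print(gatekeeper, round(bc[gatekeeper], 2))  # → g 0.6
```

In the toy network, fly “g” is the only bridge between the two tight clusters, so every shortest path from one cluster to the other runs through it and it scores highest, exactly the “gatekeeper” role described above.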

Gatekeepers can influence factors like the distribution of food or the spread of disease. They also play a role in maintaining cohesion, enhancing communication and ensuring better overall health of their group.

In humans, betweenness centrality can even affect the spread of behaviours such as smoking, drug use and divorce.

At the same time, the researchers point out that social networks are unbiased and favour neither “good” nor “bad” outcomes. For example, high betweenness centrality in a network of scientists can increase the number of potential collaborators; on the other hand, high betweenness centrality in another group can lead to the spread of a disease like COVID-19.

“You don’t get a good or a bad outcome from the structure of a network,” explains Levine. “The structure of a network could carry happiness or a disease.”

Rooke says an important next step will be to identify the overall molecular pathway that the gene and its protein are involved in “to try to understand what the protein is doing and what pathways it’s involved in – the answers to those questions will really give us a lot of insight into how these networks work.”

And while the dokb gene has only been found in flies so far, Rooke, Levine and their colleagues anticipate that similar molecular pathways between genes and social networks will be found in other species.

“For example, there’s a subset of cells in the human brain whose function relates to social experience – what in the popular press might be called the ‘social brain,’” says Levine.

“Getting from the fly to the human brain – that’s another line of research. But it almost has to be true that the things that we’re observing in insects will be found in a more nuanced, more dispersed way in the mammalian brain.”

Katie Hunt wrote a May 2, 2024 article, for CNN, about the research, shortly after the paper was published, which included some intriguing personal details and a good explanation of why fruit flies are used in genetic research, Note: Links have been removed,

Many species of animals form social groups and behave collectively: An elephant herd follows its matriarch, flocking birds fly in unison, humans gather at concert events. Even humble fruit flies organize themselves into regularly spaced clusters, researchers have found.

…

And now, scientists believe there is evidence that how central you are to your social network, a concept they call “high betweenness centrality,” could have a genetic basis. New research published Tuesday in the journal Nature Communications has identified a gene responsible for regulating the structure of social networks in fruit flies.

The study’s authors named the gene in question “degrees of Kevin Bacon,” or dokb, after a game that requires players to link celebrities to actor Bacon in as few steps as possible via the movies they have in common.

Inspired by “six degrees of separation,” the theory that nobody is more than six relationships away from any other person in the world, the game became a viral phenomenon three decades ago.

Senior author Joel Levine, a professor of biology at the University of Toronto who went to high school with Bacon in Philadelphia [emphases mine], said the actor was a good human example of “high betweenness centrality.”

Aware of Levine’s link with Bacon, study lead author Rebecca Rooke, a postdoctoral fellow of biology at the University of Toronto Mississauga, suggested the gene’s name.

Levine said that the “degrees of Kevin Bacon” gene was specific to fruit flies’ central nervous systems, but he thought similar genetic pathways would exist in other animals, including humans. The study opened up new opportunities for exploring the molecular evolution of social networks and collective behavior in other animals.

Drosophila melanogaster, best known for hovering around fruit bowls, has been a model organism to explore genetics for more than 100 years. The insects breed quickly and are easy to keep.

While flies are very different from humans, the creatures have long been central to biological and genetic discovery.

“Fruit flies are useful because of the power of manipulation. We can investigate things experimentally in Drosophila that we can only examine indirectly in most organisms,” Moore said.

The tiny creatures share nearly 60% of our genes, including those responsible for Alzheimer’s, Parkinson’s, cancer and heart disease. Research involving fruit flies has previously shed light on the mechanisms of inheritance, circadian rhythms and mutation-causing X-rays.

Here’s a link to and a citation for the paper,

The gene “degrees of kevin bacon” (dokb) regulates a social network behaviour in Drosophila melanogaster by Rebecca Rooke, Joshua J. Krupp, Amara Rasool, Mireille Golemiec, Megan Stewart, Jonathan Schneider & Joel D. Levine. Nature Communications volume 15, Article number: 3339 (2024)
DOI: https://doi.org/10.1038/s41467-024-47499-8 Published online: 30 April 2024

This paper is open access.

h/t Rae Hodge’s May 30, 2024 article on Salon.com. Otherwise, I would have missed this ‘science meets pop culture’ story.

Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

Months after the first reading in June 2022, Bill C-27 was mentioned here in a September 15, 2022 posting about a Canadian Science Policy Centre (CSPC) event featuring a panel discussion about the proposed legislation, artificial intelligence in particular. I dug down and found commentaries and additional information about the proposed bill with special attention to AIDA.

It seems discussion has been reactivated since the second reading was completed on April 24, 2023, and the bill was referred to committee for further discussion. (A report stage and third reading are still to come in the House of Commons and then, there are three readings in the Senate before this legislation can be passed.)

Christian Paas-Lang has written an April 24, 2023 article for CBC (Canadian Broadcasting Corporation) news online that highlights concerns centred on AI from three cross-party Members of Parliament (MPs),

Once the domain of a relatively select group of tech workers, academics and science fiction enthusiasts, the debate over the future of artificial intelligence has been thrust into the mainstream. And a group of cross-party MPs say Canada isn’t yet ready to take on the challenge.

The popularization of AI as a subject of concern has been accelerated by the introduction of ChatGPT, an AI chatbot produced by OpenAI that is capable of generating a broad array of text, code and other content. ChatGPT relies on content published on the internet as well as training from its users to improve its responses.

ChatGPT has prompted such a fervour, said Katrina Ingram, founder of the group Ethically Aligned AI, because of its novelty and effectiveness. 

“I would argue that we’ve had AI enabled infrastructure or technologies around for quite a while now, but we haven’t really necessarily been confronted with them, you know, face to face,” she told CBC Radio’s The House [radio segment embedded in article] in an interview that aired Saturday [April 22, 2023].

Ingram said the technology has prompted a series of concerns: about the livelihoods of professionals like artists and writers, about privacy, data collection and surveillance and about whether chatbots like ChatGPT can be used as tools for disinformation.

With the popularization of AI as an issue has come a similar increase in concern about regulation, and Ingram says governments must act now.

“We are contending with these technologies right now. So it’s really imperative that governments are able to pick up the pace,” she told host Catherine Cullen.

That sentiment — the need for speed — is one shared by three MPs from across party lines who are watching the development of the AI issue. Conservative MP Michelle Rempel Garner, NDP MP Brian Masse and Nathaniel Erskine-Smith of the Liberals also joined The House for an interview that aired Saturday.

“This is huge. This is the new oil,” said Masse, the NDP’s industry critic, referring to how oil had fundamentally shifted economic and geopolitical relationships, leading to a great deal of good but also disasters — and AI could do the same.

Issues of both speed and substance

The three MPs are closely watching Bill C-27, a piece of legislation currently being debated in the House of Commons that includes Canada’s first federal regulations on AI.

But each MP expressed concern that the bill may not be ready in time and changes would be needed [emphasis mine].

“This legislation was tabled in June of last year [2022], six months before ChatGPT was released and it’s like it’s obsolete. It’s like putting in place a framework to regulate scribes four months after the printing press came out,” Rempel Garner said. She added that it was wrongheaded to move the discussion of AI away from Parliament and segment it off to a regulatory body.

Am I the only person who sees a problem with “the bill may not be ready in time and changes would be needed”? I don’t understand the rush (or how these people get elected). The point of a bill is to examine the ideas and make changes to it before it becomes legislation. Given how fluid the situation appears to be, a strong argument can be made for the current process, which is three readings in the House of Commons, along with a committee report, and three readings in the Senate before a bill, if successful, is passed into legislation.

Of course, the fluidity of the situation could also be an argument for starting over as Michael Geist’s (Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa and member of the Centre for Law, Technology and Society) April 19, 2023 post on his eponymous blog suggests, Note: Links have been removed,

As anyone who has tried ChatGPT will know, at the bottom of each response is an option to ask the AI system to “regenerate response”. Despite increasing pressure on the government to move ahead with Bill C-27’s Artificial Intelligence and Data Act (AIDA), the right response would be to hit the regenerate button and start over. AIDA may be well-meaning and the issue of AI regulation critically important, but the bill is limited in principles and severely lacking in detail, leaving virtually all of the heavy lifting to a regulation-making process that will take years to unfold. While no one should doubt the importance of AI regulation, Canadians deserve better than virtue signalling on the issue with a bill that never received a full public consultation.

What prompts this post is a public letter based out of MILA that calls on the government to urgently move ahead with the bill signed by some of Canada’s leading AI experts. The letter states: …

When the signatories to the letter suggest that there is prospect of moving AIDA forward before the summer, it feels like a ChatGPT error. There are a maximum of 43 days left on the House of Commons calendar until the summer. In all likelihood, it will be less than that. Bill C-27 is really three bills in one: major privacy reform, the creation of a new privacy tribunal, and AI regulation. I’ve watched the progress of enough bills to know that this just isn’t enough time to conduct extensive hearings on the bill, conduct a full clause-by-clause review, debate and vote in the House, and then conduct another review in the Senate. At best, Bill C-27 could make some headway at committee, but getting it passed with a proper review is unrealistic.

Moreover, I am deeply concerned about a Parliamentary process that could lump together these three bills in an expedited process. …

For anyone unfamiliar with MILA, it is also known as Quebec’s Artificial Intelligence Institute. (They seem to have replaced institute with ecosystem since the last time I checked.) You can see the document and list of signatories here.

Geist has a number of posts and podcasts focused on the bill and the easiest way to find them is to use the search term ‘Bill C-27’.

Maggie Arai at the University of Toronto’s Schwartz Reisman Institute for Technology and Society provides a brief overview titled, Five things to know about Bill C-27, in her April 18, 2022 commentary,

On June 16, 2022, the Canadian federal government introduced Bill C-27, the Digital Charter Implementation Act 2022, in the House of Commons. Bill C-27 is not entirely new, following in the footsteps of Bill C-11 (the Digital Charter Implementation Act 2020). Bill C-11 failed to pass, dying on the Order Paper when the Governor General dissolved Parliament to hold the 2021 federal election. While some aspects of C-27 will likely be familiar to those who followed the progress of Bill C-11, there are several key differences.

After noting the differences, Arai had this to say, from her April 18, 2022 commentary,

The tabling of Bill C-27 represents an exciting step forward for Canada as it attempts to forge a path towards regulating AI that will promote innovation of this advanced technology, while simultaneously offering consumers assurance and protection from the unique risks this new technology poses. This second attempt towards the CPPA and PIDPTA is similarly positive, and addresses the need for updated and increased consumer protection, privacy, and data legislation.

However, as the saying goes, the devil is in the details. As we have outlined, several aspects of how Bill C-27 will be implemented are yet to be defined, and how the legislation will interact with existing social, economic, and legal dynamics also remains to be seen.

There are also sections of C-27 that could be improved, including areas where policymakers could benefit from the insights of researchers with domain expertise in areas such as data privacy, trusted computing, platform governance, and the social impacts of new technologies. In the coming weeks, the Schwartz Reisman Institute will present additional commentaries from our community that explore the implications of C-27 for Canadians when it comes to privacy, protection against harms, and technological governance.

Bryan Short’s September 14, 2022 posting (The Absolute Bare Minimum: Privacy and the New Bill C-27) on the Open Media website critiques two of the three bills included in Bill C-27, Note: Links have been removed,

The Canadian government has taken the first step towards creating new privacy rights for people in Canada. After a failed attempt in 2020 and three years of inaction since the proposal of the digital charter, the government has tabled another piece of legislation aimed at giving people in Canada the privacy rights they deserve.

In this post, we’ll explore how Bill C-27 compares to Canada’s current privacy legislation, how it stacks up against our international peers, and what it means for you. This post considers two of the three acts being proposed in Bill C-27, the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Tribunal Act (PIDTA), and doesn’t discuss the Artificial Intelligence and Data Act [emphasis mine]. The latter Act’s engagement with very new and complex issues means we think it deserves its own consideration separate from existing privacy proposals, and will handle it as such.

If we were to give Bill C-27’s CPPA and PIDTA a grade, it’d be a D. This is legislation that does the absolute bare minimum for privacy protections in Canada, and in some cases it will make things actually worse. If they were proposed and passed a decade ago, we might have rated it higher. However, looking ahead at predictable movement in data practices over the next ten – or even twenty – years, these laws will be out of date the moment they are passed, and leave people in Canada vulnerable to a wide range of predatory data practices. For detailed analysis, read on – but if you’re ready to raise your voice, go check out our action calling for positive change before C-27 passes!

Taking this all into account, Bill C-27 isn’t yet the step forward for privacy in Canada that we need. While it’s an improvement upon the last privacy bill that the government put forward, it misses so many areas that are critical for improvement, like failing to put people in Canada above the commercial interests of companies.

If Open Media has followed up with an AIDA critique, I have not been able to find it on their website.

Aesthetics and Colour Research—a November 28, 2019 talk about the tools and technology in Toronto, Canada

From a November 19, 2019 ArtSci Salon announcement (received via email),

I [Robin] am co-organizing a lecture on AESTHETICS AND COLOUR RESEARCH AT THE UNIVERSITY OF TORONTO’S PSYCHOLOGICAL LABORATORY by Erich Weidenhammer, PhD (University of Toronto).

The lecture is Thu Nov 28 [2019], 6-8pm at the Thomas Fisher Rare Book Library at U of T [University of Toronto]. There will also be colour-related artifacts from the library collection on display.

Full details are here, with an eventbrite registration link for the talk (Free).

HTTPS://WWW.COLOURRESEARCH.ORG/CRSC-EVENTS/2019/11/28/LECTURE-AESTHETICS-AND-COLOUR-RESEARCH-AT-THE-UNIVERSITY-OF-TORONTOS-PSYCHOLOGICAL-LABORATORY

If you follow the link above, you’ll find this description of the talk and more,

Aesthetics and Colour Research at the University of Toronto’s Psychological Laboratory

This talk focuses on the tools and technology of colour research used in Kirschmann’s Toronto laboratory, as well as their role in supporting Kirschmann’s belief in a renewed science of aesthetics. [Between 1893 and 1908, the German-born psychologist August Kirschmann (1860-1932), led the University of Toronto’s newly founded psychological laboratory.] The talk will include a display of surviving artifacts used in the Laboratory. It will also include some colour-related artifacts from the University of Toronto Archives and Records Management Services (UTARMS), and the Fisher Rare Books Library.

Erich Weidenhammer is Curator of the University of Toronto Scientific Instruments Collection (UTSIC.org), an effort to safeguard and catalogue the material culture of research and teaching at the University of Toronto. He is also an Adjunct Curator for Scientific Processes at Ingenium: Canada’s Museums of Science & Innovation in Ottawa. Erich received his PhD in 2014 from the Institute for the History and Philosophy of Science and Technology (IHPST) of the University of Toronto for a dissertation that explored the relationship between chemistry and medicine in late eighteenth-century Britain.

Courtesy University of Toronto Scientific Instruments Collection

It turns out that this talk at the University of Toronto is part of a larger series of talks being organized by the Colour Research Society of Canada (CRSC). Here’s more about the society from the CRSC’s About page,

The CRSC is a non-profit organisation for colour research, focused on fostering a cross-disciplinary sharing of colour knowledge. It seeks to develop and support a national, cross-disciplinary network of artists and designers, scholars and practitioners with an interest in engagements with colour, and to encourage discourse between arts, sciences and industry related to colour research and knowledge.

The Colour Research Society of Canada (CRSC) is the Canadian member organisation of the AIC (International Colour Association).

The Nov. 28, 2019 talk is part of the CRSC’s Kaleidoscope Lecture Series.