Tag Archives: machine learning

Geoffrey Hinton (University of Toronto) shares 2024 Nobel Prize for Physics with John J. Hopfield (Princeton University)

What an interesting choice the committee deciding on the 2024 Nobel Prize for Physics has made. Geoffrey Hinton has been mentioned here a number of times, most recently for his participation in one of the periodic AI (artificial intelligence) panics. For more about the latest one and Hinton’s participation, see my May 25, 2023 posting “Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!” and scroll down to the ‘The panic’ subhead.

I have written almost nothing about John J. Hopfield, other than a tangential mention of the Hopfield neural network in a January 3, 2018 posting, “Mott memristor.”

An October 8, 2024 Royal Swedish Academy of Sciences press release announces the winners of the 2024 Nobel Prize in Physics,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to

John J. Hopfield
Princeton University, NJ, USA

Geoffrey E. Hinton
University of Toronto, Canada

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
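To make that description a little more concrete, here is a minimal sketch in Python (my own illustration, not the laureates’ code): a few ±1 patterns are stored with a simple Hebbian rule, and a corrupted pattern is recovered by updating one node at a time so that the network’s energy can only fall:

    import numpy as np

    # Minimal Hopfield-network sketch: store a few +/-1 patterns with a Hebbian
    # rule, then recover one of them from a corrupted copy by repeatedly
    # lowering the network "energy" described above.
    rng = np.random.default_rng(0)
    n = 64                                        # number of nodes ("pixels")
    patterns = rng.choice([-1, 1], size=(3, n))   # three stored patterns

    # Hebbian learning: strengthen connections between nodes that are
    # simultaneously active in the stored patterns.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    def energy(state):
        return -0.5 * state @ W @ state

    def recall(state, sweeps=5):
        state = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(n):                    # update nodes one at a time;
                state[i] = 1 if W[i] @ state >= 0 else -1   # each update can only lower the energy
        return state

    noisy = patterns[0] * rng.choice([1, -1], size=n, p=[0.8, 0.2])  # flip roughly 20% of the bits
    restored = recall(noisy)
    print("energy before:", round(energy(noisy), 2), "after:", round(energy(restored), 2))
    print("bits recovered:", int((restored == patterns[0]).sum()), "of", n)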

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
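Hinton’s Boltzmann machine is harder to compress into a few lines, but a restricted variant trained with one step of contrastive divergence (a later Hinton-era shortcut) gives the flavour of what “trained by feeding it examples” means in practice. This is a hedged toy sketch, not the laureate’s code, and the tiny random “dataset” is invented purely for illustration:

    import numpy as np

    # Toy restricted Boltzmann machine trained with CD-1 (contrastive divergence):
    # the weights are nudged so that the training examples become likely states
    # of the machine.
    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    n_visible, n_hidden, lr = 16, 8, 0.1
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    data = rng.integers(0, 2, size=(200, n_visible)).astype(float)  # made-up binary "images"

    for epoch in range(20):
        for v0 in data:
            h0 = (sigmoid(v0 @ W) > rng.random(n_hidden)).astype(float)   # sample hidden units from the data
            v1 = (sigmoid(W @ h0) > rng.random(n_visible)).astype(float)  # reconstruct the visible units
            h1 = sigmoid(v1 @ W)                                          # re-infer the hidden units
            W += lr * (np.outer(v0, h0) - np.outer(v1, h1))               # push data up, reconstructions down

Once trained, the same weights can be used either to score how “typical” a new example is or to generate new samples of the same kind of pattern, which is the classify-or-create behaviour the press release describes.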

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

An October 8, 2024 University of Toronto news release by Rahul Kalvapalle provides more detail about Hinton’s work and history with the university.

Benj Edwards wrote an October 8, 2024 article for Ars Technica which, in addition to reiterating the announcement, explores a ‘controversial’ element to the story. Note 1: I gather I’m not the only one who found the award of a physics prize to researchers in the field of computer science a little unusual. Note 2: Links have been removed,

Hopfield and Hinton’s research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.

The win is already turning heads on social media because it seems unusual that research in a computer science field like machine learning might win a Nobel Prize for physics. “And the 2024 Nobel Prize in Physics does not go to physics…” tweeted German physicist Sabine Hossenfelder this morning [October 8, 2024].

From the Nobel committee’s point of view, the award largely derives from the fact that the two men drew from statistical models used in physics and partly from recognizing the advancements in physics research that came from using the men’s neural network techniques as research tools.

Nobel committee chair Ellen Moons, a physicist at Karlstad University, Sweden, said during the announcement, “Artificial neural networks have been used to advance research across physics topics as diverse as particle physics, material science and astrophysics.”

For a comprehensive overview of both Nobel prize winners, Hinton and Hopfield, their work, and their stances vis-à-vis the dangers of AI, there’s an October 8, 2024 Associated Press article on phys.org.

Nanostrings that vibrate for a long, long, long time at ambient temperatures

It was the ‘ambient temperature’ that caught my attention, from a May 22, 2024 news item on Nanowerk. Note: Links have been removed. Note: Much exciting work has to be conducted at very, very cold or very, very hot temperatures, so ambient or room temperature is a big deal,

Researchers from TU Delft [Delft University of Technology] and Brown University have engineered string-like resonators capable of vibrating longer at ambient temperature than any previously known solid-state object — approaching what is currently only achievable near absolute zero temperatures. Their study, published in Nature Communications (“Centimeter-scale nanomechanical resonators with low dissipation”), pushes the edge of nanotechnology and machine learning to make some of the world’s most sensitive mechanical sensors.

Caption: Artist impression of new nanostrings that can vibrate for a very long time. These nanostrings vibrate more than 100,000 times per second. Because it’s difficult for energy to leak out, it also means environmental noise is hard to get in, making these some of the best sensors for room temperature environments. Credit: Richard Norte

A May 21, 2024 Delft University of Technology news release (also on EurekAlert but published May 22, 2024), which originated the news item, explains why the research is considered exciting,

A 100 year swing on a microchip

“Imagine a swing that, once pushed, keeps swinging for almost 100 years because it loses almost no energy through the ropes,” says associate professor Richard Norte. “Our nanostrings do something similar but rather than vibrating once per second like a swing, our strings vibrate 100,000 times per second. Because it’s difficult for energy to leak out, it also means environmental noise is hard to get in, making these some of the best sensors for room temperature environments.”

This innovation is pivotal for studying macroscopic quantum phenomena at room temperature—environments where such phenomena were previously masked by noise. While the weird laws of quantum mechanics are usually only seen in single atoms, the nanostrings’ ability to isolate themselves from our everyday heat-based vibrational noise allows them to open a window into their own quantum signatures; strings made from billions of atoms. In everyday environments, this kind of capability would have interesting uses for quantum-based sensing.
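One way to read Norte’s swing analogy (my own back-of-the-envelope reading, not a figure from the paper) is to count how many times a 1 Hz swing would oscillate if it kept going for a century; that count is roughly what a mechanical quality factor measures, and it is why “losing almost no energy per cycle” is such a strong claim:

    # Rough arithmetic only; the numbers below are illustrative, not the paper's.
    swing_frequency_hz = 1.0                    # a playground swing: about one cycle per second
    ring_time_s = 100 * 365.25 * 24 * 3600      # "almost 100 years", in seconds

    oscillations = swing_frequency_hz * ring_time_s
    print(f"cycles completed before the motion dies away: ~{oscillations:.1e}")   # ~3e9

    # The nanostrings oscillate at ~100,000 Hz, so the same tiny per-cycle energy
    # loss would let them ring for hours rather than years, while completing a
    # comparably enormous number of cycles.
    string_frequency_hz = 1e5
    print(f"equivalent ring time at 100 kHz: ~{oscillations / string_frequency_hz / 3600:.0f} hours")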

Extraordinary match between simulation and experiment

“Our manufacturing process goes in a different direction with respect to what is possible in nanotechnology today,” said Dr. Andrea Cupertino, who spearheaded the experimental efforts. The strings are 3 centimetres long and 70 nanometres thick, but scaled up, this would be the equivalent of manufacturing guitar strings of glass that are suspended half a kilometre with almost no sag. “This kind of extreme structures are only feasible at nanoscales where the effects of gravity and weight enter differently. This allows for structures that would be unfeasible at our everyday scales but are particularly useful in miniature devices used to measure physical quantities such as pressure, temperature, acceleration and magnetic fields, which we call MEMS sensing,” explains Cupertino.

The nanostrings are crafted using advanced nanotechnology techniques developed at TU Delft, pushing the boundaries of how thin and long suspended nanostructures can be made. A key aspect of the collaboration is that these nanostructures can be made so perfectly on a microchip that there is an extraordinary match between simulations and experiments – meaning that simulations can act as the data for machine learning algorithms, rather than costly experiments. “Our approach involved using machine learning algorithms to optimize the design without continuously fabricating prototypes,” noted lead author Dr. Dongil Shin, who developed these algorithms with Miguel Bessa. To further enhance the efficiency of designing these large, detailed structures, the machine learning algorithms smartly utilised insights from simpler, shorter string experiments to refine the designs of longer strings, making the development process both economical and effective.
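The news release doesn’t spell out the algorithms, so what follows is only a hedged sketch of the general “simulate, fit a surrogate, pick the next design” loop it describes; the toy simulate_quality_factor function, the parameter range, and the scoring rule are all invented for illustration and are not the authors’ method:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulate_quality_factor(width_nm):
        """Stand-in for an expensive simulation of one string design (made up)."""
        return float(np.exp(-((width_nm - 70.0) / 40.0) ** 2))

    # A handful of simulated designs serve as training data for a surrogate model.
    widths = np.linspace(30, 150, 12).reshape(-1, 1)
    scores = np.array([simulate_quality_factor(w) for w in widths.ravel()])
    surrogate = GaussianProcessRegressor().fit(widths, scores)

    # Query the surrogate densely and pick the most promising design to simulate
    # (or fabricate) next, instead of prototyping everything.
    candidates = np.linspace(30, 150, 500).reshape(-1, 1)
    mean, std = surrogate.predict(candidates, return_std=True)
    best = candidates[np.argmax(mean + std)]      # simple upper-confidence-bound choice
    print(f"next design to evaluate: width of about {best[0]:.1f} nm")

The real work also transferred what was learned from shorter strings to longer ones, which this sketch doesn’t attempt.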

According to Norte, the success of this project is a testament to the fruitful collaboration between experts in nanotechnology and machine learning, underscoring the interdisciplinary nature of cutting-edge scientific research.

Inertial navigation and next-generation microphones

The implications of these nanostrings extend beyond basic science. They offer promising new pathways for integrating highly sensitive sensors with standard microchip technology, leading to new approaches in vibration-based sensing. While these initial studies focus on strings, the concepts can be expanded to more complex designs to measure other important parameters like acceleration for inertial navigation or something looking more like a vibrating drumhead for next-generation microphones. This research demonstrates the vast array of possibilities when combining nanotechnology advances with machine learning to open new frontiers in technology.

Here’s a link to and a citation for the paper,

Centimeter-scale nanomechanical resonators with low dissipation by Andrea Cupertino, Dongil Shin, Leo Guo, Peter G. Steeneken, Miguel A. Bessa & Richard A. Norte. Nature Communications volume 15, Article number: 4255 (2024) DOI: https://doi.org/10.1038/s41467-024-48183-7 Published: 18 May 2024

This paper is open access.

I have highlighted work from this team previously in a September 15, 2022 posting, “One of world’s most precise microchip sensors thanks to nanotechnology, machine learning, extended cognition, and spiderwebs.”

Nanobiotics and artificial intelligence (AI)

Antibiotics at the nanoscale = nanobiotics. For a more complete explanation, there’s this (Note: the video runs a little longer than most of the others embedded on this blog),

Before pushing further into this research, a note about antibiotic resistance. In a sense, we’ve created the problem we (those scientists in particular) are trying to solve.

Antibiotics and cleaning products kill 99.9% of the bacteria, leaving the 0.1% that are immune. As so many living things on earth do, bacteria reproduce. Then a new antibiotic is needed and discovered; it too kills 99.9% of the bacteria, and the 0.1% left are immune to two antibiotics. And so it goes.
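A toy version of that arithmetic (my own illustration, with deliberately crude assumptions: each new drug kills 99.9% of the bacteria, the survivors carry resistance to every drug used so far, and the colony regrows to full size between treatments) shows how quickly the resistances compound:

    population = 1_000_000_000            # starting colony size (arbitrary)
    resistances = 0                       # drugs the surviving lineage can shrug off

    for drug in range(1, 4):
        survivors = population * 0.001    # the 0.1% the newest drug fails to kill
        resistances += 1                  # those survivors now resist this drug too
        population = 1_000_000_000        # ...and they regrow to full size
        print(f"drug {drug}: {survivors:,.0f} survivors, resistant to {resistances} drug(s)")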

As the scientists have made clear, we’re running out of options using standard methods and they’re hoping this ‘nanoparticle approach’ as described in a June 5, 2023 news item on Nanowerk will work, Note: A link has been removed,

Identifying whether and how a nanoparticle and protein will bind with one another is an important step toward being able to design antibiotics and antivirals on demand, and a computer model developed at the University of Michigan can do it.

The new tool could help find ways to stop antibiotic-resistant infections and new viruses—and aid in the design of nanoparticles for different purposes.

“Just in 2019, the number of people who died of antimicrobial resistance was 4.95 million. Even before COVID, which worsened the problem, studies showed that by 2050, the number of deaths by antibiotic resistance will be 10 million,” said Angela Violi, an Arthur F. Thurnau Professor of mechanical engineering, and corresponding author of the study that made the cover of Nature Computational Science (“Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles”).

“In my ideal scenario, 20 or 30 years from now, I would like—given any superbug—to be able to quickly produce the best nanoparticles that can treat it.”

A June 5, 2023 University of Michigan news release (also on EurekAlert), which originated the news item, provides more technical details, Note: A link has been removed,

Much of the work within cells is done by proteins. Interaction sites on their surfaces can stitch molecules together, break them apart and perform other modifications—opening doorways into cells, breaking sugars down to release energy, building structures to support groups of cells and more. If we could design medicines that target crucial proteins in bacteria and viruses without harming our own cells, that would enable humans to fight new and changing diseases quickly.

The new [computer] model, named NeCLAS [Nanoparticle-Computed Ligand Affinity Scoring], uses machine learning—the AI technique that powers the virtual assistant on your smartphone and ChatGPT. But instead of learning to process language, it absorbs structural models of proteins and their known interaction sites. From this information, it learns to extrapolate how proteins and nanoparticles might interact, predict binding sites and the likelihood of binding between them—as well as predict interactions between two proteins or two nanoparticles.

“Other models exist, but ours is the best for predicting interactions between proteins and nanoparticles,” said Paolo Elvati, U-M associate research scientist in mechanical engineering.

AlphaFold, for example, is a widely used tool for predicting the 3D structure of a protein based on its building blocks, called amino acids. While this capacity is crucial, this is only the beginning: Discovering how these proteins assemble into larger structures and designing practical nanoscale systems are the next steps.

“That’s where NeCLAS comes in,” said Jacob Saldinger, U-M doctoral student in chemical engineering and first author of the study. “It goes beyond AlphaFold by showing how nanostructures will interact with one another, and it’s not limited to proteins. This enables researchers to understand the potential applications of nanoparticles and optimize their designs.”

The team tested three case studies for which they had additional data: 

  • Molecular tweezers, in which a molecule binds to a particular site on another molecule. This approach can stop harmful biological processes, such as the aggregation of protein plaques in diseases of the brain like Alzheimer’s.
  • How graphene quantum dots break up the biofilm produced by staph bacteria. These nanoparticles are flakes of carbon, no more than a few atomic layers thick and 0.0001 millimeters to a side. Breaking up biofilms is likely a crucial tool in fighting antibiotic-resistant infections—including the superbug methicillin-resistant Staphylococcus aureus (MRSA), commonly acquired at hospitals.
  • Whether graphene quantum dots would disperse in water, demonstrating the model’s ability to predict nanoparticle-nanoparticle binding even though it had been trained exclusively on protein-protein data.

While many protein-protein models set amino acids as the smallest unit that the model must consider, this doesn’t work for nanoparticles. Instead, the team set the size of that smallest feature to be roughly the size of the amino acid but then let the computer model decide where the boundaries between these minimum features were. The result is representations of proteins and nanoparticles that look a bit like collections of interconnected beads, providing more flexibility in exploring small scale interactions.
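Here is a hedged sketch of that “beads” idea (not the NeCLAS code; the atom coordinates and the bead size are invented for illustration): cluster nearby atoms into coarse-grained beads of roughly amino-acid size, so a protein and a nanoparticle end up with the same kind of representation:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    atoms = rng.random((500, 3)) * 5.0      # 500 atom positions in a 5-unit box (made up)

    atoms_per_bead = 10                     # assume a bead is roughly amino-acid sized
    n_beads = len(atoms) // atoms_per_bead

    # Group nearby atoms into beads; the model, not a fixed chemical unit,
    # decides where the boundaries between beads fall.
    beads = KMeans(n_clusters=n_beads, n_init=10, random_state=0).fit(atoms)
    bead_centers = beads.cluster_centers_   # one "bead" per cluster of nearby atoms

    print(f"{len(atoms)} atoms coarse-grained into {len(bead_centers)} beads")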

“Besides being more general, NeCLAS also uses way less training data than AlphaFold. We only have 21 nanoparticles to look at, so we have to use protein data in a clever way,” said Matt Raymond, U-M doctoral student in electrical and computer engineering and study co-author.  

Next, the team intends to explore other biofilms and microorganisms, including viruses.

The Nature Computational Science study was funded by the University of Michigan Blue Sky Initiative, the Army Research Office and the National Science Foundation. 

Here’s a link to and a citation for the paper,

Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles by Jacob Charles Saldinger, Matt Raymond, Paolo Elvati & Angela Violi. Nature Computational Science volume 3, pages 393–402 (2023) DOI: https://doi.org/10.1038/s43588-023-00438-x Published: 01 May 2023 Issue Date: May 2023

This paper is behind a paywall.

Optical memristors and neuromorphic computing

A June 5, 2023 news item on Nanowerk announced a paper which reviews the state-of-the-art of optical memristors, Note: Links have been removed,

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system – both hardware and software combined – has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article published in Nature Photonics (“Integrated Optical Memristors”) sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.

A June 2, 2023 University of Pittsburgh news release (also on EurekAlert but published June 5, 2023), which originated the news item, provides more detail,

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.” 

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state-of-the-art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address. 

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood. 

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”

Using Light to Revolutionize Computing

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing. 

Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
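For readers unfamiliar with the term, a leaky integrate-and-fire neuron is the textbook building block behind the “spiking integrate-and-fire computing architectures” mentioned above; the sketch below is generic and not tied to any particular optical-memristor hardware:

    import numpy as np

    dt, tau = 1e-3, 20e-3                 # time step and membrane time constant, in seconds
    v_thresh, v_reset = 1.0, 0.0          # firing threshold and reset value (arbitrary units)

    input_current = np.full(1000, 1.2)    # constant drive for 1 s of simulated time
    v, spike_times = 0.0, []

    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)     # leak toward rest while integrating the input
        if v >= v_thresh:                 # crossing the threshold emits a spike...
            spike_times.append(step * dt)
            v = v_reset                   # ...and the membrane potential resets

    print(f"{len(spike_times)} spikes in 1 second of simulated input")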

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence. 

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor–something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

“Integrated Optical Memristors” (DOI: 10.1038/s41566-023-01217-w) was published in Nature Photonics and is coauthored by senior author Harish Bhaskaran at the University of Oxford, Wolfram Pernice at Heidelberg University, and Carlos Ríos at the University of Maryland.

Despite including that final paragraph, I’m also providing a link to and a citation for the paper,

Integrated optical memristors by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice & Harish Bhaskaran. Nature Photonics volume 17, pages 561–572 (2023) DOI: https://doi.org/10.1038/s41566-023-01217-w Published online: 29 May 2023 Issue Date: July 2023

This paper is behind a paywall.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
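Since “zero-shot” and “few-shot” get used a lot in this discussion, here is the distinction in its simplest form (prompt text only, no model calls; the translation task is my own example, not one from the Quanta article):

    # Zero-shot: the model gets the task description and nothing else.
    zero_shot_prompt = "Translate to French: 'The cat sleeps on the chair.'"

    # Few-shot: the model also gets a handful of worked examples and is expected
    # to continue the pattern.
    few_shot_prompt = """Translate to French.
    English: Good morning.  French: Bonjour.
    English: Thank you very much.  French: Merci beaucoup.
    English: The cat sleeps on the chair.  French:"""

    print(zero_shot_prompt)
    print(few_shot_prompt)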

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan writing for Fast Company has two articles, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts” on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?”; scroll down to the ‘Consciousness and ethical AI’ subhead.

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought–provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. It was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Nanobiotics and a new machine learning model

A May 16, 2022 news item on phys.org announces work on a new machine learning model that could be useful in research into engineered nanoparticles for medical purposes (Note: Links have been removed),

With antibiotic-resistant infections on the rise and a continually morphing pandemic virus, it’s easy to see why researchers want to be able to design engineered nanoparticles that can shut down these infections.

A new machine learning model that predicts interactions between nanoparticles and proteins, developed at the University of Michigan, brings us a step closer to that reality.

A May 16, 2022 University of Michigan news release by Kate McAlpine, which originated the news item, delves further into the work (Note: Links have been removed),

“We have reimagined nanoparticles to be more than mere drug delivery vehicles. We consider them to be active drugs in and of themselves,” said J. Scott VanEpps, an assistant professor of emergency medicine and an author of the study in Nature Computational Science.

Discovering drugs is a slow and unpredictable process, which is why so many antibiotics are variations on a previous drug. Drug developers would like to design medicines that can attack bacteria and viruses in ways that they choose, taking advantage of the “lock-and-key” mechanisms that dominate interactions between biological molecules. But it was unclear how to transition from the abstract idea of using nanoparticles to disrupt infections to practical implementation of the concept. 

“By applying mathematical methods to protein-protein interactions, we have streamlined the design of nanoparticles that mimic one of the proteins in these pairs,” said Nicholas Kotov, the Irving Langmuir Distinguished University Professor of Chemical Sciences and Engineering and corresponding author of the study. 

“Nanoparticles are more stable than biomolecules and can lead to entirely new classes of antibacterial and antiviral agents.”

The new machine learning algorithm compares nanoparticles to proteins using three different ways to describe them. While the first was a conventional chemical description, the two that concerned structure turned out to be most important for making predictions about whether a nanoparticle would be a lock-and-key match with a specific protein.

Between them, these two structural descriptions captured the protein’s complex surface and how it might reconfigure itself to enable lock-and-key fits. This includes pockets that a nanoparticle could fit into, along with the size such a nanoparticle would need to be. The descriptions also included chirality, a clockwise or counterclockwise twist that is important for predicting how a protein and nanoparticle will lock in.

“There are many proteins outside and inside bacteria that we can target. We can use this model as a first screening to discover which nanoparticles will bind with which proteins,” said Emine Sumeyra Turali Emre, a postdoctoral researcher in chemical engineering and co-first author of the paper, along with Minjeong Cha, a PhD student in materials science and engineering.

Emre and Cha explained that researchers could follow up on matches identified by their algorithm with more detailed simulations and experiments. One such match could stop the spread of MRSA, a common antibiotic-resistant strain, using zinc oxide nanopyramids that block metabolic enzymes in the bacteria.  

“Machine learning algorithms like ours will provide a design tool for nanoparticles that can be used in many biological processes. Inhibition of the virus that causes COVID-19 is one good example,” said Cha. “We can use this algorithm to efficiently design nanoparticles that have broad-spectrum antiviral activity against all variants.”

This breakthrough was enabled by the Blue Sky Initiative at the University of Michigan College of Engineering. It provided $1.5 million to support the interdisciplinary team carrying out the fundamental exploration of whether a machine learning approach could be effective when data on the biological activity of nanoparticles is so sparse.

“The core of the Blue Sky idea is exactly what this work covers: finding a way to represent proteins and nanoparticles in a unified approach to understand and design new classes of drugs that have multiple ways of working against bacteria,” said Angela Violi, an Arthur F. Thurnau Professor, a professor of mechanical engineering and leader of the nanobiotics Blue Sky project.

Emre led the building of a database of interactions between proteins that could help to predict nanoparticle and protein interaction. Cha then identified structural descriptors that would serve equally well for nanoparticles and proteins, working with collaborators at the University of Southern California, Los Angeles to develop a machine learning algorithm that combed through the database and used the patterns it found to predict how proteins and nanoparticles would interact with one another. Finally, the team compared these predictions for lock-and-key matches with the results from experiments and detailed simulations, finding that they closely matched.

Additional collaborators on the project include Ji-Young Kim, a postdoctoral researcher in chemical engineering at U-M, who helped calculate chirality in the proteins and nanoparticles. Paul Bogdan and Xiongye Xiao, a professor and PhD student, respectively, in electrical and computer engineering at USC [University of Southern California] contributed to the graph theory descriptors. Cha then worked with them to design and train the neural network, comparing different machine learning models. All authors helped analyze the data.
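For readers who’d like a feel for how a ‘first screening’ of nanoparticle–protein pairs might work, here’s a rough sketch in Python (using scikit-learn). To be clear, everything in it is invented for illustration: the descriptor names, the random data, and the simple random-forest model are my stand-ins, not the neural network and unified descriptors the Michigan team actually built.

```python
# Hypothetical sketch: predict whether a nanoparticle-protein pair is a
# "lock-and-key" match from paired descriptor vectors.
# Descriptor values here are random stand-ins, not data from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs = 1000

# Each partner gets the same kinds of features, e.g.
# [surface_complexity, pocket_or_particle_size, chirality_index].
protein_descriptors = rng.normal(size=(n_pairs, 3))
nanoparticle_descriptors = rng.normal(size=(n_pairs, 3))

# Toy "ground truth": a pair binds when the sizes roughly agree and the
# chirality indices have the same sign (purely illustrative).
size_match = np.abs(protein_descriptors[:, 1] - nanoparticle_descriptors[:, 1]) < 0.5
chirality_match = np.sign(protein_descriptors[:, 2]) == np.sign(nanoparticle_descriptors[:, 2])
binds = (size_match & chirality_match).astype(int)

X = np.hstack([protein_descriptors, nanoparticle_descriptors])
X_train, X_test, y_train, y_test = train_test_split(X, binds, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("ROC AUC on held-out pairs:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In the real work, the descriptors are computed from the actual structures, and the promising matches are then checked with detailed simulations and experiments, as the researchers describe above.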

Here are links to and a citation for the research briefing and paper, respectively,

Universal descriptors to predict interactions of inorganic nanoparticles with proteins. Nature Computational Science (2022) [Research briefing] DOI: https://doi.org/10.1038/s43588-022-00230-3 Published: 28 April 2022

This paper is behind a paywall.

Unifying structural descriptors for biological and bioinspired nanoscale complexes by Minjeong Cha, Emine Sumeyra Turali Emre, Xiongye Xiao, Ji-Young Kim, Paul Bogdan, J. Scott VanEpps, Angela Violi & Nicholas A. Kotov. Nature Computational Science volume 2, pages 243–252 (2022) Issue Date: April 2022 DOI: https://doi.org/10.1038/s43588-022-00229-w Published: 28 April 2022

This paper appears to be open access.

AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK

Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.

The shock

First, the ‘art’,

The winning work. Colorado State Fair 2022. Screengrab from Discord [downloaded from https://www.artnews.com/art-news/news/colorado-state-fair-ai-generated-artwork-controversy-1234638022/]

Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),

The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.

Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”

The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]

Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August [2022], Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.

The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.

This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25 [2022], along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.

“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.

“It’s not like you’re just smashing words together and winning competitions,” he said.

You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.

First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.

Ars Technica has run a number of articles on the subject of art and AI; Benj Edwards, in an August 31, 2022 article, seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.

Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July [2022].

It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.

Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …

The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.

… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.

A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.

Filthy lucre becomes more prominent in the conversation

Lizzie O’Leary in a September 12, 2022 article for Fast Company presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (tech reporter covering A.I. for the Washington Post) about the ‘Jason Allen’ win,

I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?

You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.

Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?

When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.

Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.

You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.

And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.

I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?

I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.

This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?

Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. He starts with the literary arts, Note: Links have been removed,

Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”

We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”

This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …

Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”

Fan club

The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative creative networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),

I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible. 

Machine Learning

As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.

If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
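Cichocka’s dog example maps neatly onto the standard supervised-learning recipe: train on labelled examples, then classify something the model has never seen. Here’s a minimal sketch of that recipe in Python (scikit-learn); the ‘photos’ are random pixel values rather than real images, so the output is meaningless, but the train-then-predict workflow is the one she describes.

```python
# Minimal supervised-learning sketch: train on labelled examples,
# then classify an example the model has never seen.
# The "photos" are random pixel vectors, used only to show the workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_photos, n_pixels = 2000, 64 * 64
photos = rng.random((n_photos, n_pixels))        # stand-in for image data
labels = rng.integers(0, 2, size=n_photos)       # 1 = "dog", 0 = "not a dog"

X_train, X_test, y_train, y_test = train_test_split(photos, labels, random_state=0)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, y_train)                 # learn from labelled examples

new_photo = rng.random((1, n_pixels))            # a picture it has never seen
print("Dog?", bool(classifier.predict(new_photo)[0]))
```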

Some thoughts

As the saying goes, “a picture is worth a thousand words,” and now it seems that pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.

I suspect (as others have suggested) that in the end, artists who use AI systems will be absorbed into the art world in much the same way as artists who use photography, or are considered performance artists and/or conceptual artists, and/or use video have been absorbed. There will be some displacements and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.

The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples that focus on graphic designers: Gila von Meissner, an illustrator and designer who uses an AI system alongside her own art to illustrate her children’s books in a faster, more cost-effective way, and MoeHong, a graphic designer for the state of California, who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.

So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),

Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists

An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”

Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.

If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.

As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.

In a March 10, 2022 article by Shraddha Nair for Stir World, Klingemann claims to have made over $1M from auctions of Botto’s artworks. It’s not clear to me where Botto obtains its library of images for future use (which may signal a potential problem); Stiles’ Technelegy creates poems from prompts using its library of her poems. (For the curious, I have an August 30, 2022 post “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” which explores some of the issues around patents.)

Who gets the patent and/or the copyright? Assuming you and I are employing machine learning to train our AI agents separately, could there be an argument that, if my version of the AI is different from yours and proves more popular with other content creators/artists, I should own or share the patent to the software and the rights to whatever the software produces?

Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy, and the cost is not only financial but also environmental.

Botto exhibition

Here’s more about the exhibition of work by Klingemann’s AI artist, Botto (from an October 6, 2022 announcement received via email),

Mario Klingemann is a pioneering figurehead in the field of AI art, working deep in the field of Machine Learning. Governed by a community of 5,000 people, Klingemann developed Botto around an idea of creating an autonomous entity that is able to be creative and co-creative. Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI entity that is guided by an international community and art historical trends. Botto creates 350 art pieces per week that are presented to its community. Members of the community give feedback on these art fragments by voting, expressing their individual preferences on what is aesthetically pleasing to them. Then collectively the votes are used as feedback for Botto’s generative algorithm, dictating what direction Botto should take in its next series of art pieces.

The creative capacity of its algorithm is far beyond the capacities of an individual to combine and find relationships within all the information available to the AI. Botto faces similar issues as a human artist, and it is programmed to self-reflect and ask, “I’ve created this type of work before. What can I show them that’s different this week?”

Once a week, Botto auctions the art fragment with the most votes on SuperRare. All proceeds from the auction go back to the community. The AI artist auctioned its first three pieces, Asymmetrical Liberation, Scene Precede, and Trickery Contagion for more than $900,000 dollars, the most successful AI artist premiere. Today, Botto has produced upwards of 22 artworks and current sales have generated over $2 million in total [emphasis mine].
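The announcement describes a feedback loop: Botto generates fragments, the community votes, and the votes steer the next round of generation. Here’s a purely hypothetical sketch of that kind of loop in Python; Botto’s actual pipeline isn’t public at this level of detail, so every function name and number below is invented for illustration.

```python
# Hypothetical sketch of a vote-driven generation loop, loosely modelled on
# the Botto description above. All functions and parameters are invented.
import random

def generate_fragment(style_weights):
    """Stand-in for a generative model: jitter the current style weights."""
    return {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
            for k, v in style_weights.items()}

def community_score(fragment):
    """Stand-in for community voting; this invented crowd likes 'warmth'."""
    return fragment["warmth"] + random.gauss(0, 0.02)

style_weights = {"warmth": 0.5, "abstraction": 0.5}

for week in range(4):
    fragments = [generate_fragment(style_weights) for _ in range(350)]
    winner = max(fragments, key=community_score)   # the fragment with the most votes
    # Feed the votes back: move the global style towards the winning fragment.
    for k in style_weights:
        style_weights[k] += 0.2 * (winner[k] - style_weights[k])
    print(f"week {week}: auction the winner, style is now {style_weights}")
```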

Botto went from $1M in sales in March 2022 to over $2M by October 2022; it seems Botto is a very financially successful artist.

Botto: A Whole Year of Co-Creation

This exhibition (October 26 – 30, 2022) is being held in London, England at this location:

The Department Store, Brixton 248 Ferndale Road London SW9 8FR United Kingdom

Enjoy!

One of the world’s most precise microchip sensors thanks to nanotechnology, machine learning, extended cognition, and spiderwebs

I love science stories about the inspirational qualities of spiderwebs. A November 26, 2021 news item on phys.org describes how spiderwebs have inspired advances in sensors and, potentially, quantum computing,

A team of researchers from TU Delft [Delft University of Technology; Netherlands] managed to design one of the world’s most precise microchip sensors. The device can function at room temperature—a ‘holy grail’ for quantum technologies and sensing. Combining nanotechnology and machine learning inspired by nature’s spiderwebs, they were able to make a nanomechanical sensor vibrate in extreme isolation from everyday noise. This breakthrough, published in the Advanced Materials Rising Stars Issue, has implications for the study of gravity and dark matter as well as the fields of quantum internet, navigation and sensing.

Inspired by nature’s spider webs and guided by machine learning, Richard Norte (left) and Miguel Bessa (right) demonstrate a new type of sensor in the lab. [Photography: Frank Auperlé]

A November 24, 2021 TU Delft press release (also on EurekAlert but published on November 23, 2021), which originated the news item, describes the research in more detail,

One of the biggest challenges for studying vibrating objects at the smallest scale, like those used in sensors or quantum hardware, is how to keep ambient thermal noise from interacting with their fragile states. Quantum hardware for example is usually kept at near absolute zero (−273.15°C) temperatures, with refrigerators costing half a million euros apiece. Researchers from TU Delft created a web-shaped microchip sensor which resonates extremely well in isolation from room temperature noise. Among other applications, their discovery will make building quantum devices much more affordable.

Hitchhiking on evolution
Richard Norte and Miguel Bessa, who led the research, were looking for new ways to combine nanotechnology and machine learning. How did they come up with the idea to use spiderwebs as a model? Richard Norte: “I’ve been doing this work already for a decade when during lockdown, I noticed a lot of spiderwebs on my terrace. I realised spiderwebs are really good vibration detectors, in that they want to measure vibrations inside the web to find their prey, but not outside of it, like wind through a tree. So why not hitchhike on millions of years of evolution and use a spiderweb as an initial model for an ultra-sensitive device?” 

Since the team did not know anything about spiderwebs’ complexities, they let machine learning guide the discovery process. Miguel Bessa: “We knew that the experiments and simulations were costly and time-consuming, so with my group we decided to use an algorithm called Bayesian optimization, to find a good design using few attempts.” Dongil Shin, co-first author in this work, then implemented the computer model and applied the machine learning algorithm to find the new device design. 

Microchip sensor based on spiderwebs
To the researcher’s surprise, the algorithm proposed a relatively simple spiderweb out of 150 different spiderweb designs, which consists of only six strings put together in a deceivingly simple way. Bessa: “Dongil’s computer simulations showed that this device could work at room temperature, in which atoms vibrate a lot, but still have an incredibly low amount of energy leaking in from the environment – a higher Quality factor in other words. With machine learning and optimization we managed to adapt Richard’s spider web concept towards this much better quality factor.”

Based on this new design, co-first author Andrea Cupertino built a microchip sensor with an ultra-thin, nanometre-thick film of ceramic material called Silicon Nitride. They tested the model by forcefully vibrating the microchip ‘web’ and measuring the time it takes for the vibrations to stop. The result was spectacular: a record-breaking isolated vibration at room temperature. Norte: “We found almost no energy loss outside of our microchip web: the vibrations move in a circle on the inside and don’t touch the outside. This is somewhat like giving someone a single push on a swing, and having them swing on for nearly a century without stopping.”

Implications for fundamental and applied sciences
With their spiderweb-based sensor, the researchers show how this interdisciplinary strategy opens a path to new breakthroughs in science, by combining bio-inspired designs, machine learning and nanotechnology. This novel paradigm has interesting implications for quantum internet, sensing, microchip technologies and fundamental physics: exploring ultra-small forces for example, like gravity or dark matter which are notoriously difficult to measure. According to the researchers, the discovery would not have been possible without the university’s Cohesion grant, which led to this collaboration between nanotechnology and machine learning.
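The ‘few attempts’ point is what makes Bayesian optimization attractive when every attempt is a costly simulation or fabrication run. As a rough illustration only, here’s how such a design search might be set up in Python with the scikit-optimize library; the six ‘string’ parameters and the stand-in quality-factor function are invented for this example and have nothing to do with the team’s actual simulations.

```python
# Illustrative only: a Bayesian-optimization loop over a six-parameter design,
# in the spirit of the spiderweb-sensor search described above.
# The objective function below is a made-up stand-in for an expensive simulation.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def simulated_quality_factor(string_params):
    """Invented placeholder for a costly simulation of the resonator's Q."""
    x = np.asarray(string_params)
    return 1e6 * np.exp(-np.sum((x - 0.3) ** 2))   # peaks at an arbitrary design

# Six "string" design parameters, each between 0.1 and 1.0 (hypothetical units).
search_space = [Real(0.1, 1.0, name=f"string_{i}") for i in range(6)]

# gp_minimize minimizes, so we negate Q to maximize it; n_calls keeps attempts few.
result = gp_minimize(
    lambda params: -simulated_quality_factor(params),
    search_space,
    n_calls=40,
    random_state=0,
)

print("best design found:", np.round(result.x, 3))
print("predicted quality factor:", -result.fun)
```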

Here’s a link to and a citation for the paper,

Spiderweb Nanomechanical Resonators via Bayesian Optimization: Inspired by Nature and Guided by Machine Learning by Dongil Shin, Andrea Cupertino, Matthijs H. J. de Jong, Peter G. Steeneken, Miguel A. Bessa, Richard A. Norte. Advanced Materials, Volume 34, Issue 3, January 20, 2022, 2106248 DOI: https://doi.org/10.1002/adma.202106248 First published (online): 25 October 2021

This paper is open access.

If spiderwebs can be sensors, can they also think?

It’s called ‘extended cognition’ or the ‘extended mind thesis’ (Wikipedia entry) and the theory holds that the mind is not solely in the brain or even in the body. Predictably, the theory has both its supporters and critics as noted in Joshua Sokol’s article “The Thoughts of a Spiderweb,” originally published on May 22, 2017 in Quanta Magazine (Note: Links have been removed),

Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it. Then they began building horizontal nets to use as a fishing platform. Now their modern descendants, the cobweb spiders, dangle sticky threads below, wait until insects walk by and get snagged, and reel their unlucky victims in.

In 2008, the researcher Hilton Japyassú prompted 12 species of orb spiders collected from all over Brazil to go through this transition again. He waited until the spiders wove an ordinary web. Then he snipped its threads so that the silk drooped to where crickets wandered below. When a cricket got hooked, not all the orb spiders could fully pull it up, as a cobweb spider does. But some could, and all at least began to reel it in with their two front legs.

Their ability to recapitulate the ancient spiders’ innovation got Japyassú, a biologist at the Federal University of Bahia in Brazil, thinking. When the spider was confronted with a problem to solve that it might not have seen before, how did it figure out what to do? “Where is this information?” he said. “Where is it? Is it in her head, or does this information emerge during the interaction with the altered web?”

In February [2017], Japyassú and Kevin Laland, an evolutionary biologist at the University of Saint Andrews, proposed a bold answer to the question. They argued in a review paper, published in the journal Animal Cognition, that a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.

This would make the web a model example of extended cognition, an idea first proposed by the philosophers Andy Clark and David Chalmers in 1998 to apply to human thought. In accounts of extended cognition, processes like checking a grocery list or rearranging Scrabble tiles in a tray are close enough to memory-retrieval or problem-solving tasks that happen entirely inside the brain that proponents argue they are actually part of a single, larger, “extended” mind.

Among philosophers of mind, that idea has racked up citations, including supporters and critics. And by its very design, Japyassú’s paper, which aims to export extended cognition as a testable idea to the field of animal behavior, is already stirring up antibodies among scientists. …

It seems there is no definitive answer to the question of whether there is an ‘extended mind’ but it’s an intriguing question made (in my opinion) even more so with the spiderweb-inspired sensors from TU Delft.

Art appraised by algorithm

Artificial intelligence has been introduced to art appraisals and auctions by way of an academic research project. A January 27, 2022 University of Luxembourg press release (also on EurekAlert but published February 2, 2022) announces the research, Note: Links have been removed,

Does artificial intelligence have a place in such a fickle and quirky environment as the secondary art market? Can an algorithm learn to predict the value assigned to an artwork at auction?

These questions, among others, were analysed by a group of researchers including Roman Kräussl, professor at the Department of Finance at the University of Luxembourg and co-authors Mathieu Aubry (École des Ponts ParisTech), Gustavo Manso (Haas School of Business, University of California at Berkeley), and Christophe Spaenjers (HEC Paris). The resulting paper, Biased Auctioneers, has been accepted for publication in the top-ranked Journal of Finance.

Training a neural network to appraise art 

In this study, which combines fields of finance and computer science, researchers used machine learning and artificial intelligence to create a neural network algorithm that mimics the work of human appraisers by generating price predictions for art at auction. This algorithm relies on data using both visual and non-visual characteristics of artwork. The authors of this study unleashed their algorithm on a vast set of art sales data capturing 1.2 million painting auctions from 2008 to 2014, training the neural network with both an image of the artwork, and information such as the artist, the medium and the auction house where the work was sold. Once trained to this dataset, the authors asked the neural network to predict the auction house pre-sale estimates, ‘buy-in’ price (the minimum price at which the work will be sold), as well as the final auction price for art sales in the year 2015. It became then possible to compare the algorithm’s estimate with the real-word data, and determine whether the relative level of the machine-generated price predictions predicts relative price outcomes.

The path towards a more efficient market?

Not too surprisingly, the human experts’ predications [sic] were more accurate than the algorithm, whose prediction, in turn, was more accurate than the standard linear hedonic model which researchers used to benchmark the study. Reasons for the discrepancy between human and machine include, as the authors argue, mainly access to a larger amount of information about the individual works of art including provenance, condition and historical context. Although interesting, the authors’ goal was not to pit human against machine on this specific task. On the contrary, the authors aimed at discovering the usefulness and potential applications of machine-based valuations. For example, using such an algorithm, it may be possible to determine whether an auctioneer’s pre-sale valuations are too pessimistic or too optimistic, effectively predicting the prediction errors of the auctioneers. Ultimately, this information could be used to correct for these kinds of man-made market inefficiencies.

Beyond the auction block

The implications of this methodology and the applied computational power, however, is not limited to the art world. Other markets trading in ‘real’ assets, which rely heavily on human appraisers, namely the real estate market, can benefit from the research. While AI is not likely to replace humans just yet, machine-learning technology as demonstrated by the researchers may become an important tool for investors and intermediaries, who wish to gain access to as much information, as quickly and as cheaply as possible.
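For a sense of how a machine-generated appraisal might be assembled, here’s a minimal sketch in Python (scikit-learn) that combines a stand-in for visual features with categorical sale information to predict a (log) price. Everything in it is an assumption made for illustration: the feature names, the fake data, and the choice of a gradient-boosted model in place of the authors’ neural network trained on 1.2 million real auction records.

```python
# Hypothetical sketch: predict auction prices from image features plus metadata.
# Data are random stand-ins; the real study trains a neural network on 1.2M sales.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n_sales = 5000

image_embedding = rng.normal(size=(n_sales, 16))      # stand-in for visual features
artist_id = rng.integers(0, 200, size=(n_sales, 1))   # categorical metadata
medium_id = rng.integers(0, 5, size=(n_sales, 1))
auction_house_id = rng.integers(0, 10, size=(n_sales, 1))

X = np.hstack([image_embedding, artist_id, medium_id, auction_house_id])
# Toy log-price that depends on the artist and one visual feature (illustrative only).
log_price = (8 + 0.01 * artist_id[:, 0] + 0.5 * image_embedding[:, 0]
             + rng.normal(0, 0.3, n_sales))

X_train, X_test, y_train, y_test = train_test_split(X, log_price, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

predicted = model.predict(X_test)
print("mean absolute error (log price):",
      round(mean_absolute_error(y_test, predicted), 3))
```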

Here’s a link to and a citation for the paper,

Biased Auctioneers by Mathieu Aubry, Roman Kräussl, Gustavo Manso, and Christophe Spaenjers. Journal of Finance, forthcoming [print issue]. Published online: January 6, 2022. Available at SSRN: https://ssrn.com/abstract=3347175 or http://dx.doi.org/10.2139/ssrn.3347175

This paper appears to be open access online and was last revised on January 13, 2022.

Transformational machine learning (TML)

It seems machine learning is getting a tune-up. A November 29, 2021 news item on ScienceDaily describes research into improving machine learning from an international team of researchers,

Researchers have developed a new approach to machine learning that ‘learns how to learn’ and outperforms current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.

The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves performance while it learns.

A November 29, 2021 University of Cambridge press release (also on EurekAlert), which originated the news item, describes the potential this new technique may have on drug discovery and more,

TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.

Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.
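In code, that ‘one problem at a time’ setting looks roughly like the following scikit-learn sketch (the animal features and numbers are invented purely for illustration):

```python
from sklearn.ensemble import RandomForestClassifier

# Labelled examples described by intrinsic features
# (hypothetical: [fur_softness, ear_length]).
X_train = [[0.9, 0.8], [0.2, 0.1], [0.85, 0.75]]
y_train = ["rabbit", "donkey", "rabbit"]

# The model forms a general rule relating the features to the labels ...
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ... and applies it to a new, unlabelled example.
print(model.predict([[0.88, 0.7]]))
```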

“It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works – it deals with problems one at a time.”

However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.

“To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal – say a kitten – TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”

The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
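As the press release hints, the transformation is, roughly, that each example gets re-described by the predictions of models trained on related problems, rather than by its intrinsic features alone. Here’s a minimal sketch of that idea with synthetic data and hypothetical sizes (not the authors’ code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical: ten related problems (e.g. earlier drug-discovery tasks),
# each with labelled examples described by the same intrinsic features.
related_tasks = [(rng.normal(size=(200, 30)), rng.normal(size=200))
                 for _ in range(10)]

# Standard step: train one model per related problem on intrinsic features.
base_models = [RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
               for X, y in related_tasks]

# New problem: a fresh task with its own examples and labels.
X_new, y_new = rng.normal(size=(100, 30)), rng.normal(size=100)

# TML-style step: re-describe each new example by what the base models
# predict for it, then learn the new task from those predictions.
X_transformed = np.column_stack([m.predict(X_new) for m in base_models])
tml_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_transformed, y_new)
```

The intuition matches King’s kitten example: what the ‘rabbit’ and ‘donkey’ models say about a new example is itself informative, so those outputs become the new representation.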

“I was surprised how well it works – better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are – and without the best science, we won’t get the best results.”

Here’s a link to and a citation for the paper,

Transformational machine learning: Learning how to learn from many related scientific problems by Ivan Olier, Oghenejokpeme I. Orhobor, Tirtharaj Dash, Andy M. Davis, Larisa N. Soldatova, Joaquin Vanschoren, and Ross D. King. PNAS December 7, 2021 118 (49) e2108013118; DOI: https://doi.org/10.1073/pnas.2108013118

This paper appears to be open access.