Author Archives: Maryse de la Giroday

Coral reefs, beauty, citizen science, and surveys

I received this May 23, 2023 email invitation to participate in a citizen science project,

Dear all,

We need your valuable input to advance our research on the aesthetic value of tropical coral reefs! As a part of the Marine Science Department of IPB University [Indonesia], the Lancaster Environment Centre [at Lancaster University, UK], the MARBEC laboratory [the Marine Biodiversity Exploitation and Conservation (MARBEC) research unit, one of the Unités mixtes de recherche (UMR) partially funded by the CNRS], and the National Research and Innovation Agency of Indonesia [Badan Riset dan Inovasi Nasional, BRIN], we are conducting a survey to analyze human perspectives on the beauty of coral reefs.

By participating in this survey, you will play a vital role in the development of predictive computer models that can estimate the aesthetic value of different coral reefs. Your contribution will directly support our ongoing research efforts. Estimated completion time is approximately 5 minutes.

Your participation is greatly appreciated, and together, we can make a significant impact on coral reef preservation and conservation. Please click the link below to start the survey:

https://www.biodiful.org/#/beautifulcorals

Thank you also for sharing this survey within your network (professional and personal). We are really counting on you to trigger a snowball effect and reach beyond our own community (academia and divers). You can also retweet & like on Twitter here: https://twitter.com/NicolasMouquet/status/1658020475107266563?s=20 or tweet yourself (if you do, please tag @NicolasMouquet so we can like your tweet and boost it in the threads; also add an image of your own (or copy the one used in the above-mentioned tweet), as pasting only the link to the survey brings up a generic image that is not related to the Beauty of Coral Reefs survey). Here is a simple text that could be used on other social media: « Help shape future coral reef restoration! Take our 5-minute survey and pick the most beautiful coral reef images. Your input will fuel research on these natural wonders! https://www.biodiful.org/#/beautifulcorals »

Thank you for your time and support. Let’s work together to celebrate the beauty of coral reefs!

Sincerely,

Nicolas Mouquet, CNRS [Centre national de la recherche scientifique], MARBEC, University of Montpellier. 
https://twitter.com/NicolasMouquet
http://nicolasmouquet.free.fr/ 

In late April 2023, I received a link to a paper by Mouquet as a thank you for participating in another of his projects. (I looked at two side-by-side pictures of fishes and selected the one I found most attractive.) As you can see from the image below, I was one of 13,000 respondents.

Fig 1. Evaluation and prediction of fish aesthetic values. (1) Pairs of images were presented to the public during the online survey and scored using the Elo algorithm (see Methods). Left Parma bicolor and right Abudefduf luridus. (2) Once the 345 new images were evaluated online, the values of the 157 images previously evaluated [16] were corrected using the 21 images shared between the 2 surveys. (3) The resulting 481 images with evaluated aesthetic values were used to train a ResNet50 algorithm (see Text E and Fig L in S1 File). Illustration inspired from the PlotNeuralNet [31]. (b) Left: The r2 of the linear relationship between the predicted values averaged across the 5 validation sets and the evaluated values is 0.79 ± SD 0.04 (the color of points indicates the 5 sets used to perform the cross validation). This algorithm was used to predict the aesthetic values of the 4,400 unevaluated images of our dataset. Right: Distribution of the 481 evaluated values in light blue and of the 4,400 predicted aesthetic values in dark blue. The dots at the bottom of the plot indicate the predicted aesthetic values of the images shown in panel (c). Data and code required to generate this Figure can be found in https://github.com/nmouquet/RLS_AESTHE. (c) Examples of fishes representative of the range of predicted aesthetic values. Decreasing aesthetic value from left to right and top to bottom: Holacanthus ciliaris, Aracana aurita, Amphiprion ephippium, Ctenochaetus marginatus, Scarus spinus, Amphiprion bicinctus, Epinephelides armatus, Fusigobius signipinnis, Diplodus annularis, Odontoscion dentex, Nemadactylus bergi, Mendosoma lineatum. See S1 Data for image copyright. https://doi.org/10.1371/journal.pbio.3001640.g001 [Downloaded from https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001640#pbio.3001640.s002]
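For readers curious about how those pairwise “pick the prettier fish” votes become a single aesthetic score, here is a minimal sketch (in Python) of an Elo-style update, the family of rating scheme the caption mentions. The starting rating, K-factor, and file names below are my own illustrative assumptions, not values taken from the paper; the authors’ actual code is in the GitHub repository linked in the caption.

# Minimal Elo-style rating sketch for pairwise image comparisons.
# The starting rating (1500) and K-factor (32) are assumptions for
# illustration, not the values used in the Langlois et al. study.

def expected_score(rating_a, rating_b):
    """Probability that image A 'wins' the comparison against image B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_winner, rating_loser, k=32.0):
    """Return updated (winner, loser) ratings after one comparison."""
    expected_win = expected_score(rating_winner, rating_loser)
    rating_winner += k * (1.0 - expected_win)
    rating_loser -= k * (1.0 - expected_win)
    return rating_winner, rating_loser

# Hypothetical example: both images start at 1500 and a respondent
# picks "Parma_bicolor.jpg" over "Abudefduf_luridus.jpg".
ratings = {"Parma_bicolor.jpg": 1500.0, "Abudefduf_luridus.jpg": 1500.0}
ratings["Parma_bicolor.jpg"], ratings["Abudefduf_luridus.jpg"] = update_elo(
    ratings["Parma_bicolor.jpg"], ratings["Abudefduf_luridus.jpg"]
)
print(ratings)  # the winner's rating rises and the loser's falls by the same amount

Aggregated over thousands of such votes, the ratings converge on a ranking which, per the caption, served as the training target for the ResNet50 model.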

Given how many people participated, I’m thrilled he got in touch,

Hello to all,

Finally, some news about the internet campaign to measure the aesthetic value of reef fishes in which you participated in 2020. The time of research can sometimes be long, and we were, like you, a little disturbed by the Covid episode, but here is where we are: We have published our results in an international scientific journal (PLOS Biology) 😀 : Langlois J, Guilhaumon F, Baletaud F, Casajus N, De Almeida Braga C, Fleuré V, Kulbicki M, Loiseau N, Mouillot D, Renoult JP, Stahl A, Stuart-Smith RD, Tribot AS & Mouquet N (2022) The aesthetic value of reef fishes is globally mismatched to their conservation priorities. PLoS Biol 20(6): e3001640. doi:10.1371/journal.pbio.3001640

You can download the article here: http://nicolasmouquet.free.fr/pdf/Langlois_et_al_2022_Plos_Biology.htm

Here is a summary: Reef fishes are closely connected to many human populations, yet their contributions to society are mostly considered through their economic and ecological values. Cultural and intrinsic values of reef fishes to the public can be critical drivers of conservation investment and success, but remain challenging to quantify. Aesthetic value represents one of the most immediate and direct means by which human societies engage with biodiversity, and can be evaluated from species to ecosystems. Here, we provide the aesthetic value of 2,417 ray-finned reef fish species by combining intensive evaluation of photographs of fishes by humans with predicted values from machine learning. We identified important biases in species’ aesthetic value relating to evolutionary history, ecological traits, and International Union for Conservation of Nature (IUCN) threat status. The most beautiful fishes are tightly packed into small parts of both the phylogenetic tree and the ecological trait space. In contrast, the less attractive fishes are the most ecologically and evolutionary distinct species and those recognized as threatened. Our study highlights likely important mismatches between potential public support for conservation and the species most in need of this support. It also provides a pathway for scaling-up our understanding of what are both an important nonmaterial facet of biodiversity and a key component of nature’s contribution to people, which could help better anticipate consequences of species loss and assist in developing appropriate communication strategies.

This work has received a significant echo in the scientific community as well as in the international press and we are now busy using these data to assess the aesthetic value of entire fish communities on reefs globally.

Again, a huge thank you for your help, without you we could not have done this work! And I apologize for being so late in getting back to you. 🙏

Our work on assessing the aesthetic value of biodiversity does not stop of course! And we may be calling on you soon for new adventures!

In the meantime you can also have a look at a twitter account we just opened dedicated to the presentation of beautiful or repulsive species, but always amazing and especially essential for the functioning of ecosystems ! https://twitter.com/Biodi_ful

With kind regards,

Nicolas Mouquet

—————————–

Nicolas Mouquet, CNRS

Scientific director of the Centre for the Synthesis and Analysis of Biodiversity (CESAB)
5 Rue de l’École de Médecine
34000, Montpellier

Chercheur à MARBEC
Université de Montpellier
Place Eugène Bataillon, CC093
34095 Montpellier Cedex 05

You can sign up to get updates regarding the research once you’ve finished the survey.

In the meantime, here’s a link to and a citation (in my usual style) for the paper on the aesthetics of reef fishes,

The aesthetic value of reef fishes is globally mismatched to their conservation priorities by Juliette Langlois, François Guilhaumon, Florian Baletaud, Nicolas Casajus, Cédric De Almeida Braga, Valentine Fleuré, Michel Kulbicki, Nicolas Loiseau, David Mouillot, Julien P. Renoult, Aliénor Stahl, Rick D. Stuart Smith, Anne-Sophie Tribot, Nicolas Mouquet. PLOS Biology DOI: https://doi.org/10.1371/journal.pbio.3001640 Published: June 7, 2022

This paper is open access.
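The figure caption above also mentions training a ResNet50 network to extrapolate the evaluated scores to the 4,400 unevaluated images. For a rough idea of what that looks like in practice, here is a minimal sketch using PyTorch and torchvision; the library choice and everything else below are my own assumptions for illustration, since the authors’ actual training setup (data augmentation, hyperparameters, 5-fold cross-validation) lives in the GitHub repository cited in the caption.

# Minimal sketch: adapting a pretrained ResNet50 to predict a single
# aesthetic score per image (regression instead of classification).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)  # one regression output instead of 1,000 classes

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, scores):
    """images: (N, 3, 224, 224) tensor; scores: (N, 1) Elo-derived aesthetic values."""
    optimizer.zero_grad()
    predictions = model(images)
    loss = criterion(predictions, scores)
    loss.backward()
    optimizer.step()
    return loss.item()

Once trained, the same model can be run over unscored photographs to produce predicted aesthetic values like those reported in the paper.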

You can find Nicolas Mouquet’s eponymous website here and you can start the coral reef survey here: https://www.biodiful.org/#/beautifulcorals.

Metacreation Lab’s greatest hits of Summer 2023

I received a May 31, 2023 ‘newsletter’ (via email) from Simon Fraser University’s (SFU) Metacreation Lab for Creative Artificial Intelligence and the first item celebrates some current and past work,

International Conference on New Interfaces for Musical Expressions | NIME 2023
May 31 – June 2 | Mexico City, Mexico

We’re excited to be a part of NIME 2023, launching in Mexico City this week! 

As part of the NIME Paper Sessions, some of the Metacreation Lab’s members and affiliates will be presenting a study based on case studies of musicians playing with virtual musical agents. Titled eTu{d,b}e, the paper was co-authored by Tommy Davis, Kasey LV Pocius, and Vincent Cusson, developers of the eTube instrument, along with music technology and interface researchers Marcelo Wanderley and Philippe Pasquier. Learn about the project and listen to sessions involving human and non-human musicians.

This research project involved experimenting with Spire Muse, a virtual performance agent co-developed by Metacreation Lab members. The paper introducing the system was awarded the best paper award at the 2021 International Conference on New Interfaces for Musical Expression (NIME). 

Learn more about the NIME 2023 conference and program at the link below; the conference will also present a series of online music concerts later this week.

Learn more about NIME 2023

Coming up later this summer and also from the May 31, 2023 newsletter,

Evaluating Human-AI Interaction for MMM-C: a Creative AI System for Music Composition | IJCAI [2023 International Joint Conference on Artificial Intelligence] Preview

For those following the impact of AI on music composition and production, we would like to share a sneak peek of a review of user experiences using an experimental AI-composition tool [Multi-Track Music Machine (MMM)] integrated into the Steinberg Cubase digital audio workstation. Conducted in partnership with Steinberg, this study will be presented at the 2023 International Joint Conference on Artificial Intelligence (IJCAI2023), as part of the Arts and Creativity track of the conference. This year’s IJCAI conference is taking place in Macao from August 19 to 25, 2023.

The conference is being held in Macao (or Macau), which is officially (according to its Wikipedia entry) the Macao Special Administrative Region of the People’s Republic of China (MSAR). It has a longstanding reputation as an international gambling and party mecca comparable to Las Vegas.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that this is not new. First, the ‘non-human authors’ and then the panic(s).

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles. The first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting “Racist and sexist robots have flawed AI” and in a little more detail in an August 30, 2022 posting “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues, also published on May 5, 2023: “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge; see his Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought–provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to its Wikipedia entry, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. It was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

Nanomedicine development: a South African perspective

Ms Sinovuyo Banzana, science communicator at the DSI-Mandela Nanomedicine Platform (Nelson Mandela University, South Africa), and Dr Steven Mufamadi, Research Chair in Nanomedicine at the DSI-Mandela Nanomedicine Platform (Nelson Mandela University) and founder of Nabio Consulting (Pty) Ltd., have written a January 15, 2023 Nanowerk Spotlight article. While the focus is largely on South Africa, they also provide insight into what is happening in other countries on the African continent.

From the January 15, 2023 Nanowerk Spotlight article, Note: Links have been removed,

In Africa, South Africa is considered the leading country in terms of health care services and biomedical research. In the past few years or so, the South African Agency for Science and Technology Advancement (SAASTA) and other education programs started to engage with the community and spread the word on nanomedicine so that everyone can have a better understanding of how nanomedicine works.

South Africa has established a MSc Nanoscience Postgraduate Programme – a collaborative programme between the University of Johannesburg (UJ), Nelson Mandela University (NMU), the University of the Free State (UFS) and the University of the Western Cape (UWC).

In Egypt, the Zewail City of Science, Technology, and Innovation, a non-profit, independent institution of learning, research and innovation, has established an undergraduate bachelor’s degree in nanoscience: the BSc in Nano Science.

Over the past decade, the South African government has been investing in nanotechnology-based equipment and infrastructure, human capital development, and R&D at several public universities and science centres. These research facilities are available to researchers from across the continent and beyond. Prominent among them are the

Centre for High Resolution Transmission Electron Microscopy (HRTEM)

DSI-Mandela nanomedicine platform at Nelson Mandela University (NMU)

National Centre for Nanostructured Materials – a characterisation facility and nanomaterials industrial development facility at the Council for Scientific and Industrial Research (CSIR)

Mintek Nanotechnology Innovation Centre (NIC).

The future of nanomedicine in Africa is promising. The World Health Organization (WHO) has established partnerships with the pharmaceutical industry, such as Pfizer in South Africa and Moderna in Kenya, to establish the first two African mRNA hubs. These public-private partnerships focus on technology transfer and human capacity building, which will enable African scientists and inventors to produce their own mRNA vaccines and nanomedicine products that are tailored to the specific needs of the African population. This is crucial in addressing vaccine inequality and ensuring access to medicine for all.

In the next few years, it is likely that we will see nanomedicine-based drugs or vaccines developed in Africa enter the global market.

African governments need to take advantage of nanomedicine innovation and their partnerships with international private companies in order to develop their nanomedicine capacity, create job opportunities, and/or achieve their United Nations Sustainable Development Goals (UN-SDGs) by 2030.

You can find out more about the Nanomedicine programme at Nelson Mandela University here.

Thank you to Ms Banzana and Dr. Mufamadi. It’s always good to get some insight into nanotechnology developments from a region that is not North America or Europe.

Announcing: Fully Funded PhD Positions at McGill Nanofactory in Montréal, Canada

It’s been a while since I’ve published news about funded positions. Here’s more about the Montréal-based positions from a May 22, 2023 McGill University news release on EurekAlert,

McGill Nanofactory, led by Prof. Cao [Professor Changhong Cao], has multiple fully funded Ph.D. positions (Winter and Fall 2024) in directions including nano-mechanics of 2D materials, mechano-electro-chemical studies of solid-state batteries, and additive manufacturing of advanced structures. Candidates with expertise in one or more of the following areas are strongly encouraged to contact Prof. Cao at changhong.cao@mcgill.ca: 2D materials, solid mechanics, MEMS design and fabrication, electrochemistry, AFM, and 3D printing. Full application submission on McGill’s online portal must be before July 15th, 2023 [emphasis mine] for the Winter 2024 admission round. Details here: https://www.mcgill.ca/mecheng/grad/admission

You can also find out more on the McGill Nanofactory website.

Good luck with your application!

Fluidic memristor with neuromorphic (brainlike) functions

I think this is the first time I’ve had occasion to feature a fluidic memristor. From a January 13, 2023 news item on Nanowerk, Note: Links have been removed,

Neuromorphic devices have attracted increasing attention because of their potential applications in neuromorphic [brainlike] computing, intelligence sensing, brain-machine interfaces and neuroprosthetics. However, most of the neuromorphic functions realized so far are based on mimicking electric pulses with solid-state devices. Mimicking the functions of chemical synapses, especially neurotransmitter-related functions, is still a challenge in this research area.

In a study published in Science (“Neuromorphic functions with a polyelectrolyte-confined fluidic memristor”), the research group led by Prof. YU Ping and MAO Lanqun from the Institute of Chemistry of the Chinese Academy of Sciences developed a polyelectrolyte-confined fluidic memristor (PFM), which could emulate diverse electric pulses with ultralow energy consumption. Moreover, benefitting from the fluidic nature of the PFM, chemically regulated electric pulses and chemical-electric signal transduction could also be emulated.

A January 12, 2023 Chinese Academy of Science (CAS) press release, which originated the news item, offers more technical detail,

The researchers first fabricated the polyelectrolyte-confined fluidic channel by surface-initiated atom transfer radical polymerization. By systematically studying the current-voltage relationship, they found that the fabricated fluidic channel behaved as a memristor, which they defined as the PFM. The ion memory originates from the relatively slow diffusion of anions into and out of the polyelectrolyte brushes.

The PFM could emulate short-term plasticity (STP) patterns, including paired-pulse facilitation and paired-pulse depression. These functions can be operated at voltages and energy consumption as low as those of biological systems, suggesting potential applications in bioinspired sensorimotor implementation, intelligent sensing and neuroprosthetics.

The PFM could also emulate chemically regulated STP electric pulses. Based on the interaction between the polyelectrolyte and counterions, the retention time could be tuned by changing the electrolyte.

More importantly, in a physiological electrolyte (i.e., phosphate-buffered saline, pH 7.4), the PFM could emulate the regulation of memory by adenosine triphosphate (ATP), demonstrating the possibility of regulating synaptic plasticity with a neurotransmitter. In addition, based on the interaction between polyelectrolytes and counterions, chemical-electric signal transduction was accomplished with the PFM, a key step towards the fabrication of artificial chemical synapses.

With its structural emulation of ion channels, the PFM is versatile and easily interfaces with biological systems, paving the way to neuromorphic devices with advanced functions built on rich chemical designs. This study provides a new way to interface chemistry with neuromorphic devices.
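For readers who like to see the idea in code, here is a minimal sketch (my own illustration, not the authors’ model) of how a slowly relaxing memory state can produce paired-pulse facilitation, one of the short-term plasticity behaviours mentioned above. A toy conductance grows while a voltage pulse drives ions into a brush-like layer and relaxes back afterwards; all parameter values are invented for illustration.

```python
# Minimal sketch (not the authors' model): a generic first-order memristor
# state equation used only to show paired-pulse facilitation.
import numpy as np

def simulate(pulse_times, dt=1e-3, t_end=1.0, tau=0.3, g0=1.0,
             dg=50.0, pulse_width=0.02, v_pulse=1.0):
    """Toy memristive conductance g(t) driven by short voltage pulses.

    g relaxes back to g0 with time constant tau (slow ion out-diffusion from
    the polyelectrolyte brush) and grows while a pulse is applied (ions
    driven into the brush). All numbers are illustrative assumptions.
    """
    t = np.arange(0.0, t_end, dt)
    g = np.empty_like(t)
    g[0] = g0
    for i in range(1, len(t)):
        pulsing = any(t0 <= t[i] < t0 + pulse_width for t0 in pulse_times)
        v = v_pulse if pulsing else 0.0
        g[i] = g[i - 1] + (-(g[i - 1] - g0) / tau + dg * abs(v)) * dt
    return t, g

def peak_during(t, g, t0, width):
    """Largest conductance reached while the pulse starting at t0 is on."""
    mask = (t >= t0) & (t < t0 + width)
    return g[mask].max()

for interval in (0.05, 0.5):  # short vs long inter-pulse interval, in seconds
    pulses = (0.1, 0.1 + interval)
    t, g = simulate(pulses)
    p1 = peak_during(t, g, pulses[0], 0.02)
    p2 = peak_during(t, g, pulses[1], 0.02)
    print(f"interval {interval * 1000:.0f} ms: paired-pulse ratio = {p2 / p1:.2f}")
```

With these made-up parameters, two closely spaced pulses give a noticeably larger second response than two widely spaced ones, which is the signature of paired-pulse facilitation.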

Here’s a link to and a citation for the paper,

Neuromorphic functions with a polyelectrolyte-confined fluidic memristor by Tianyi Xiong, Changwei Li, Xiulan He, Boyang Xie, Jianwei Zong, Yanan Jiang, Wenjie Ma, Fei Wu, Junjie Fei, Ping Yu, and Lanqun Mao. Science 12 Jan 2023 Vol 379, Issue 6628 pp. 156-161 DOI: 10.1126/science.adc9150

This paper is behind a paywall.

Drying and redispersing cellulose nanocrystals (CNC)

A January 11, 2023 news item on ScienceDaily announces some new research on cellulose nanocrystals (CNC),

Cellulose nanocrystals—bio-based nanomaterials derived from natural resources such as plant cellulose—are valuable for their use in water treatment, packaging, tissue engineering, electronics, antibacterial coatings and much more. Though the materials provide a sustainable alternative to non-bio-based materials, transporting them in liquid taxes industrial infrastructures and leads to environmental impacts.

A team of Penn State [Pennsylvania State University] chemical engineering researchers studied the mechanisms of drying the nanocrystals and proposed nanotechnology to render the nanocrystals highly redispersible in aqueous mediums, while retaining their full functionality, to make them easier to store and transport. They published their results in the journal Biomacromolecules.

This image illustrates what the drying process does,

This graphic representation of hairy cellulose nanocrystals, shown attached at their hairy ends when dried (right), will be featured as the Biomacromolecules journal cover in the Jan. 17 issue. Credit: Sheikhi Research Group. All Rights Reserved.

A Pennsylvania State University (Penn State) news release (also on EurekAlert) by Mariah R. Lucas, which originated the news item, provides more detail, Note: A link has been removed,

“We looked at how we could take hairy nanocrystals, dry them in ovens, and redisperse them in solutions containing different ions,” said co-first author Breanna Huntington, current chemical engineering doctoral student at the University of Delaware and former member of the Sheikhi Research Group while an undergraduate student at Penn State. “We then compared their functionality to conventional, non-hairy cellulose nanocrystals.”  

The nanocrystals have negatively charged cellulose chains at their ends, known as hairs. When rehydrated, the hairs repel each other and separate, dispersing again through a liquid, as a result of electrosteric repulsion — a term meaning charge-driven, or electrostatic, and free-volume dependent, or steric.  

“The hairy ends of the nanocrystals are nanoengineered to be negatively charged and repel each other when placed in an aqueous medium,” said corresponding author Amir Sheikhi, Penn State assistant professor of chemical engineering and of biomedical engineering. “To have maximum function, the nanocrystals must be separate, individual particles, not chained together as they are when they are dry.” 

After the hairy particles were redispersed, researchers tested them and measured their size and surface properties and found their characteristics and performance were the same as those that had never been dried. They also found the particles could perform well and maintain their stability in a variety of liquid mixtures of different salinities and pH levels.

“The hairy nanocrystals can become redispersed even at high salt concentrations, which is convenient, as they remain functional in harsh media and may be used in a broad range of applications,” said co-first author Mica Pitcher, Penn State doctoral student in chemistry, supervised by Sheikhi. “This work may pave the way for sustainable and large-scale processing of nanocelluloses without using additive or energy-intensive methods.” 

The Penn State College of Engineering Summer Research Experiences for Undergraduates program and the NASA Pennsylvania Space Grant Consortium graduate fellowship program supported this work.  

Here’s a link to and a citation for the paper,

Nanoengineering the Redispersibility of Cellulose Nanocrystals by Breanna Huntington, Mica L. Pitcher, and Amir Sheikhi. Biomacromolecules 2023, 24, 1, 43–56 DOI: https://doi.org/10.1021/acs.biomac.2c00518 Publication Date: December 5, 2022 Copyright © 2022 American Chemical Society

This paper is behind a paywall.

Ada Lovelace’s skills (embroidery, languages, and more) led to her pioneering computer work in the 19th century

This is a cleaned-up version of the Ada Lovelace story.

A pioneer in the field of computing, Lovelace had a remarkable life story, as noted in this October 13, 2014 posting and explored further in this October 13, 2015 posting (Ada Lovelace “… manipulative, aggressive, a drug addict …” and a genius but was she likable?), published to honour the 200th anniversary of her birth.

In a December 8, 2022 essay for The Conversation, Corinna Schlombs focuses on the skills other than mathematics that influenced Lovelace’s thinking about computers (Note: Links have been removed),

Growing up in a privileged aristocratic family, Lovelace was educated by home tutors, as was common for girls like her. She received lessons in French and Italian, music and in suitable handicrafts such as embroidery. Less common for a girl in her time, she also studied math. Lovelace continued to work with math tutors into her adult life, and she eventually corresponded with mathematician and logician Augustus De Morgan at London University about symbolic logic.

Lovelace drew on all of these lessons when she wrote her computer program – in reality, it was a set of instructions for a mechanical calculator that had been built only in parts.

The computer in question was the Analytical Engine designed by mathematician, philosopher and inventor Charles Babbage. Lovelace had met Babbage when she was introduced to London society. The two related to each other over their shared love for mathematics and fascination for mechanical calculation. By the early 1840s, Babbage had won and lost government funding for a mathematical calculator, fallen out with the skilled craftsman building the precision parts for his machine, and was close to giving up on his project. At this point, Lovelace stepped in as an advocate.

To make Babbage’s calculator known to a British audience, Lovelace proposed to translate into English an article that described the Analytical Engine. The article was written in French by the Italian mathematician Luigi Menabrea and published in a Swiss journal. Scholars believe that Babbage encouraged her to add notes of her own.

In her notes, which ended up twice as long as the original article, Lovelace drew on different areas of her education. Lovelace began by describing how to code instructions onto cards with punched holes, like those used for the Jacquard weaving loom, a device patented in 1804 that used punch cards to automate weaving patterns in fabric.

Having learned embroidery herself, Lovelace was familiar with the repetitive patterns used for handicrafts. Similarly repetitive steps were needed for mathematical calculations. To avoid duplicating cards for repetitive steps, Lovelace used loops, nested loops and conditional testing in her program instructions.

Finally, Lovelace recognized that the numbers manipulated by the Analytical Engine could be seen as other types of symbols, such as musical notes. An accomplished singer and pianist, Lovelace was familiar with musical notation symbols representing aspects of musical performance such as pitch and duration, and she had manipulated logical symbols in her correspondence with De Morgan. It was not a large step for her to realize that the Analytical Engine could process symbols — not just crunch numbers — and even compose music.

… Lovelace applied knowledge from what we today think of as disparate fields in the sciences, arts and the humanities. A well-rounded thinker, she created solutions that were well ahead of her time.
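Lovelace’s “program” (the famous Note G) was a table of operations for computing Bernoulli numbers on the Analytical Engine. As a rough modern analogue of the loops, nested loops and conditional testing that Schlombs describes, here is a short Python sketch that computes the first few Bernoulli numbers with the standard recurrence; it illustrates the concepts only and is not a transcription of Lovelace’s card sequence.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n (modern convention, B_1 = -1/2) via the standard recurrence."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):            # outer loop: one Bernoulli number per pass
        acc = Fraction(0)
        for k in range(m):               # nested loop: reuse every previously computed value
            acc += comb(m + 1, k) * B[k]
        B[m] = -acc / (m + 1)
    return B

for i, b in enumerate(bernoulli(8)):
    if b != 0:                           # conditional test: skip the zero odd-index values
        print(f"B_{i} = {b}")
```

Run as-is, it prints B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_4 = -1/30 and so on, skipping the odd-index zeros.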

If you have time, do check out Schlombs’ essay (h/t December 9, 2022 news item on phys.org).

For more about Jacquard looms and computing, there’s Sarah Laskow’s September 16, 2014 article for The Atlantic, which includes some interesting details (Note: Links have been removed),

…, one of the very first machines that could run something like what we now call a “program” was used to make fabric. This machine—a loom—could process so much information that the fabric it produced could display pictures detailed enough that they might be mistaken for engravings.

Like, for instance, the image above [as of March 3, 2023, the image is not there]: a woven piece of fabric that depicts Joseph-Marie Jacquard, the inventor of the weaving technology that made its creation possible. As James Essinger recounts in Jacquard’s Web, in the early 1840s Charles Babbage kept a copy at home and would ask guests to guess how it was made. They were usually wrong.

… At its simplest, weaving means taking a series of parallel strings (the warp), lifting a selection of them up, and running another string (the weft) between the two layers, creating a crosshatch. …

The Jacquard loom, though, could process information about which of those strings should be lifted up and in what order. That information was stored in punch cards—often 2,000 or more strung together. The holes in the punch cards would let through only a selection of the rods that lifted the warp strings. In other words, the machine could replace the role of a person manually selecting which strings would appear on top. Once the punch cards were created, Jacquard looms could quickly make pictures with subtle curves and details that earlier would have taken months to complete. …

… As Ada Lovelace wrote him: “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”

For anyone who’s very curious about Jacquard looms, there’s a June 25, 2019 Objects and Stories article (Programming patterns: the story of the Jacquard loom) on the UK’s Science and Industry Museum (in Manchester) website.

Artificial organic neuron mimics characteristics of biological nerve cells

There’s a possibility that in the future, artificial neurons could be used for medical treatment according to a January 12, 2023 news item on phys.org,

Researchers at Linköping University (LiU), Sweden, have created an artificial organic neuron that closely mimics the characteristics of biological nerve cells. This artificial neuron can stimulate natural nerves, making it a promising technology for various medical treatments in the future.

Work to develop increasingly functional artificial nerve cells continues at the Laboratory for Organic Electronics, LOE. In 2022, a team of scientists led by associate professor Simone Fabiano demonstrated how an artificial organic neuron could be integrated into a living carnivorous plant [emphasis mine] to control the opening and closing of its maw. This synthetic nerve cell met two of the 20 characteristics that define a biological nerve cell.

I wasn’t expecting a carnivorous plant, living or otherwise. Sadly, they don’t seem to have been able to include it in this image although the ‘green mitts’ are evocative,

Caption: Artificial neurons created by the researchers at Linköping University. Credit: Thor Balkhed

A January 13, 2023 Linköping University (LiU) press release by Mikael Sönne (also on EurekAlert but published January 12, 2023), which originated the news item, delves further into the work,

In their latest study, published in the journal Nature Materials, the same researchers at LiU have developed a new artificial nerve cell called “conductance-based organic electrochemical neuron” or c-OECN, which closely mimics 15 out of the 20 neural features that characterise biological nerve cells, making its functioning much more similar to natural nerve cells.

“One of the key challenges in creating artificial neurons that effectively mimic real biological neurons is the ability to incorporate ion modulation. Traditional artificial neurons made of silicon can emulate many neural features but cannot communicate through ions. In contrast, c-OECNs use ions to demonstrate several key features of real biological neurons”, says Simone Fabiano, principal investigator of the Organic Nanoelectronics group at LOE.

In 2018, this research group at Linköping University was one of the first to develop organic electrochemical transistors based on n-type conducting polymers, which are materials that can conduct negative charges. This made it possible to build printable complementary organic electrochemical circuits. Since then, the group has been working to optimise these transistors so that they can be printed in a printing press on a thin plastic foil. As a result, it is now possible to print thousands of transistors on a flexible substrate and use them to develop artificial nerve cells.

In the newly developed artificial neuron, ions are used to control the flow of electronic current through an n-type conducting polymer, leading to spikes in the device’s voltage. This process is similar to that which occurs in biological nerve cells. The unique material in the artificial nerve cell also allows the current to be increased and decreased in an almost perfect bell-shaped curve that resembles the activation and inactivation of sodium ion channels found in biology.

“Several other polymers show this behaviour, but only rigid polymers are resilient to disorder, enabling stable device operation”, says Simone Fabiano.

In experiments carried out in collaboration with Karolinska Institute (KI), the new c-OECN neurons were connected to the vagus nerve of mice. The results show that the artificial neuron could stimulate the mice’s nerves, causing a 4.5% change in their heart rate.

The fact that the artificial neuron can stimulate the vagus nerve itself could, in the long run, pave the way for essential applications in various forms of medical treatment. In general, organic semiconductors have the advantage of being biocompatible, soft, and malleable, while the vagus nerve plays a key role, for example, in the body’s immune system and metabolism.

The next step for the researchers will be to reduce the energy consumption of the artificial neurons, which is still much higher than that of human nerve cells. Much work remains to be done to replicate nature artificially.

“There is much we still don’t fully understand about the human brain and nerve cells. In fact, we don’t know how the nerve cell makes use of many of these 15 demonstrated features. Mimicking the nerve cells can enable us to understand the brain better and build circuits capable of performing intelligent tasks. We’ve got a long road ahead, but this study is a good start,” says Padinhare Cholakkal Harikesh, postdoc and main author of the scientific paper.
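To make the “bell-shaped curve” comparison concrete, here is a small sketch of my own (not the paper’s model, and with invented device parameters): a toy antiambipolar transfer curve, in which current first rises and then falls as the gate voltage increases, set next to the steady-state sodium conductance of the textbook Hodgkin-Huxley neuron model, which is bell-shaped in membrane voltage for the same activation-then-inactivation reason.

```python
# Illustrative comparison only: a made-up antiambipolar transfer curve vs. the
# bell-shaped steady-state sodium conductance of the classic Hodgkin-Huxley model.
import numpy as np

def antiambipolar_current(v_gate, i_peak=1.0, v_peak=0.2, width=0.15):
    """Toy bell-shaped transfer curve: current peaks at v_peak and falls off on both sides."""
    return i_peak * np.exp(-((v_gate - v_peak) / width) ** 2)

def hh_na_steady_state(v_mV):
    """Normalised steady-state sodium conductance m_inf**3 * h_inf (standard HH rate constants)."""
    v = np.asarray(v_mV, dtype=float) + 1e-9   # avoid the removable singularity at exactly -40 mV
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    g = (a_m / (a_m + b_m)) ** 3 * (a_h / (a_h + b_h))
    return g / g.max()

v_gate = np.linspace(-0.4, 0.8, 7)     # volts, invented device range
v_mem = np.linspace(-90.0, 10.0, 11)   # mV, physiological range
print("toy antiambipolar device:", np.round(antiambipolar_current(v_gate), 2))
print("HH sodium (normalised)  :", np.round(hh_na_steady_state(v_mem), 2))
```

Both printed curves rise to a peak and fall away on either side; in the c-OECN it is this non-monotonic current-voltage behaviour, implemented with ions in a conducting polymer, that supports the spiking the researchers describe.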

Here’s a link to and a citation for the paper,

Ion-tunable antiambipolarity in mixed ion–electron conducting polymers enables biorealistic organic electrochemical neurons by Padinhare Cholakkal Harikesh, Chi-Yuan Yang, Han-Yan Wu, Silan Zhang, Mary J. Donahue, April S. Caravaca, Jun-Da Huang, Peder S. Olofsson, Magnus Berggren, Deyu Tu & Simone Fabiano. Nature Materials volume 22, pages 242–248 (2023) DOI: https://doi.org/10.1038/s41563-022-01450-8 Published online: 12 January 2023 Issue Date: February 2023

This paper is open access.

Insect-inspired microphones

I was hoping that there would be some insect audio files but this research is more about their role as inspiration for a new type of microphone than the sounds they make themselves. From a May 10, 2023 Acoustical Society of America news release (also on EurekAlert),

What can an insect hear? Surprisingly, quite a lot. Though small and simple, their hearing systems are highly efficient. For example, with a membrane only 2 millimeters across, the desert locust can decompose frequencies comparable to human capability. By understanding how insects perceive sound and using 3D-printing technology to create custom materials, it is possible to develop miniature, bio-inspired microphones.

The displacement of the wax moth Achroia grisella membrane, which is one of the key sources of inspiration for designing miniature, bio-inspired microphones. Credit: Andrew Reid

Andrew Reid of the University of Strathclyde in the U.K. will present his work creating such microphones, which can autonomously collect acoustic data with little power consumption. His presentation, “Unnatural hearing — 3D printing functional polymers as a path to bio-inspired microphone design,” will take place Wednesday, May 10 [2023], at 10:05 a.m. Eastern U.S. in the Northwestern/Ohio State room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

“Insect ears are ideal templates for lowering energy and data transmission costs, reducing the size of the sensors, and removing data processing,” said Reid.

Reid’s team takes inspiration from insect ears in multiple ways. On the chemical and structural level, the researchers use 3D-printing technology to fabricate custom materials that mimic insect membranes. These synthetic membranes are highly sensitive and efficient acoustic sensors. Without 3D printing, traditional, silicon-based attempts at bio-inspired microphones lack the flexibility and customization required.

“In images, our microphone looks like any other microphone. The mechanical element is a simple diaphragm, perhaps in a slightly unusual ellipsoid or rectangular shape,” Reid said. “The interesting bits are happening on the microscale, with small variations in thickness and porosity, and on the nanoscale, with variations in material properties such as the compliance and density of the material.”

More than just the material, the entire data collection process is inspired by biological systems. Unlike traditional microphones that collect a range of information, these microphones are designed to detect a specific signal. This streamlined process is similar to how nerve endings detect and transmit signals. The specialization of the sensor enables it to quickly discern triggers without consuming a lot of energy or requiring supervision.

The bio-inspired sensors, with their small size, autonomous function, and low energy consumption, are ideal for applications that are hazardous or hard to reach, including locations embedded in a structure or within the human body.

Bio-inspired 3D-printing techniques can be applied to solve many other challenges, including working on blood-brain barrier organoids or ultrasound structural monitoring.
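The “detect a specific signal” idea has a familiar software analogue: instead of recording and analysing a full spectrum, a sensor can watch a single frequency with a tiny amount of state and computation. The sketch below uses the Goertzel algorithm as a stand-in for that principle; it is my illustration only, not the Strathclyde team’s method, and the tone frequency and threshold are invented.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power in a single DFT bin via the Goertzel recurrence: O(N) time, two state variables."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin to the target frequency
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Toy trigger: a 2 kHz tone versus silence, checked against an invented threshold.
fs, n, f0 = 16000, 400, 2000
tone = [0.5 * math.sin(2.0 * math.pi * f0 * i / fs) for i in range(n)]
silence = [0.0] * n
threshold = 1.0   # illustrative; a real detector would calibrate this against background noise
for name, block in (("tone   ", tone), ("silence", silence)):
    p = goertzel_power(block, fs, f0)
    print(f"{name}: power = {p:10.1f}  trigger = {'yes' if p > threshold else 'no'}")
```

Here the tone block easily exceeds the threshold while the silent block does not, so a detector built around this kind of single-bin check could wake a host system only when its target signal appears, keeping power and data handling to a minimum.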

Here’s a link to and a citation for the paper,

Unnatural hearing—3D printing functional polymers as a path to bio-inspired microphone design by Andrew Reid. J Acoust Soc Am 153, A195 (2023) or JASA (Journal of the Acoustical Society of America) Volume 153, Issue 3_supplement March 2023 DOI: https://doi.org/10.1121/10.0018636

You will find the abstract online, but I wish you good luck finding the full paper; I wasn’t able to and am guessing it’s available on paper only.