Monthly Archives: August 2022

Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?

A couple of Australian academics have written a comment for the journal Nature, which bears the intriguing subtitle: “The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.” (For the curious, I’ve linked to a few of my previous posts touching on intellectual property [IP], specifically the patent’s fraternal twin, copyright, at the end of this piece.)

Before linking to the comment, here’s the May 27, 2022 University of New South Wales (UNSW) press release (also on EurekAlert but published May 30, 2022), which provides an overview of their thinking on the subject. Note: Links have been removed,

It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

“Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.

For obvious reasons

Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of a patentable invention is that the product or idea is novel, not obvious, and useful.

“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.

“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”

Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic – Halicin – was identified in 2019 using deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”

Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions.

“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.

“But it’s certainly the case that without the system, you wouldn’t have come up with the inventions.”

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.

Here’s a link to and a citation for the paper,

Artificial intelligence is breaking patent law by Alexandra George & Toby Walsh. Nature Comment, published online 24 May 2022; print: Vol. 605, pp. 616–618, 26 May 2022. DOI: 10.1038/d41586-022-01391-x

This paper appears to be open access.

The Journey

DABUS has been granted a patent in one jurisdiction. From an August 8, 2021 article on brandedequity.com,

The patent application listing DABUS as the inventor was filed in patent offices around the world, including the US, Europe, Australia, and South Africa. But only South Africa granted the patent (Australia followed suit a few days later after a court judgment gave the go-ahead [and rejected it several months later]).

Natural person?

This September 27, 2021 article by Miguel Bibe for Inventa covers some of the same ground, adding some discussion of the ‘natural person’ problem,

The patent is for “a food container based on fractal geometry”, and was accepted by the CIPC [Companies and Intellectual Property Commission] on June 24, 2021. The notice of issuance was published in the July 2021 “Patent Journal”.  

South Africa does not have a substantive patent examination system and, instead, requires applicants to merely complete a filing for their inventions. This means that South African patent laws do not provide a definition for “inventor” and the office only proceeds with a formal examination in order to confirm whether the paperwork was filled in correctly.

… according to a press release issued by the University of Surrey: “While patent law in many jurisdictions is very specific in how it defines an inventor, the DABUS team is arguing that the status quo is not fit for purpose in the Fourth Industrial Revolution.”

On the other hand, this may not be considered as a victory for the DABUS team since several doubts and questions remain as to who should be considered the inventor of the patent. Current IP laws in many jurisdictions follow the traditional term of “inventor” as being a “natural person”, and there is no legal precedent in the world for inventions created by a machine.

August 2022 update

Mike Masnick in an August 15, 2022 posting on Techdirt provides the latest information on Stephen Thaler’s efforts to have patents and copyrights awarded to his AI entity, DABUS,

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents are to incentivize the creation of these things, by providing to the inventor or author a limited time monopoly, allowing them to, in theory, use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyright only are given to persons and not animals or AI.

… Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

If you have the time, the August 15, 2022 posting is an interesting read.

Consciousness and ethical AI

Just to make things more fraught, an engineer at Google has claimed that one of their AI chatbots has consciousness. From a June 16, 2022 article (in Canada’s National Post [previewed on epaper]) by Patrick McGee,

Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave over his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday [June 11, 2022] profile in the Washington Post characterized Lemoine as “the Google engineer who thinks the company’s AI has come to life.”

This is not the first time that Google has run into a problem with ethics and AI. Famously, Timnit Gebru, who co-led (with Margaret Mitchell) Google’s ethics and AI unit, departed in 2020. Gebru said (and maintains to this day) she was fired; Google never did make a definitive final statement, although after an investigation Gebru did receive an apology. You *can* read more about Gebru and the issues she brought to light in her Wikipedia entry. Coincidentally (or not), Margaret Mitchell was terminated in February 2021 after criticizing the company over Gebru’s ‘firing’. See a February 19, 2021 article by Megan Rose Dickey for TechCrunch for details about Mitchell’s termination, which the company has admitted was a firing.

Getting back to intellectual property and AI.

What about copyright?

The earliest material I have here about the ‘creative’ arts and artificial intelligence makes no mention of copyright: “Writing and AI or is a robot writing this blog?” posted July 16, 2014. More recently, there’s “Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test” posted May 20, 2022. The type of writing featured is not literary or typically considered creative writing.

On the more creative front, there’s “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” posted on December 3, 2021. The literary/creative portion of the post can be found under the ‘AI and creativity’ subhead approximately 30% of the way down and where I mention Douglas Coupland. Again, there’s no mention of copyright.

It’s with the visual arts that copyright gets mentioned. The first one I can find here is “Robot artists—should they get copyright protection” posted on July 10, 2017.

Fun fact: Andres Guadamuz, who was mentioned in my posting, took to his own blog where he gave my blog a shout out while implying that I wasn’t thoughtful. The gist of his August 8, 2017 posting was that he was misunderstood by many people, which led to the title for his post, “Should academics try to engage the public?” Thankfully, he soldiers on trying to educate us with his TechnoLlama blog.

Lastly, there’s this August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery” where you can scroll down to the ‘What about intellectual property?’ subhead about 80% of the way.

You look like a thing …

I am recommending a book for anyone who’d like to learn a little more about how artificial intelligence (AI) works: “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane (2019).

It does not require an understanding of programming/coding/algorithms/etc.; Shane makes the subject as accessible as possible and gives you insight into why the term ‘artificial stupidity’ is more applicable than you might think. You can find Shane’s website here and you can find her 10-minute TED talk here.

*’can’ added to sentence on May 12, 2023.

Incorporating human cells into computer chips

What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in Biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),

The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.

A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”

Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and neurons share a common language: electricity.

The authors explain their comment that brains and neurons share the common language of electricity (Note: Links have been removed),

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.

Ethics issues arise (Note: Links have been removed),

… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …

This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.

If you’re curious, you can find out about Cortical Labs here, more about Dishbrain in a February 22, 2022 article by Brian Patrick Green for iai (Institute for Art and Ideas) news, and more about Koniku in a May 31, 2018 posting about ‘wetware’ by Alissa Greenberg on Medium.

As for Henrietta Lacks, there’s this from my May 13, 2016 posting,

*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.”’ …

I checked; the excerpt is still on the Oprah Winfrey site.

h/t May 24, 2022 Nanowerk Spotlight article

Save energy with neuromorphic (brainlike) hardware

It seems the appetite for computing power is bottomless, which presents a problem in a world where energy resources are increasingly constrained. A May 24, 2022 news item on ScienceDaily announces research into neuromorphic computing which hints that the energy efficiency long promised by the technology may be realized in the foreseeable future,

For the first time, TU Graz’s [Graz University of Technology; Austria] Institute of Theoretical Computer Science and Intel Labs demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to those in the biological brain.

Rich Uhlig, managing director of Intel Labs, holds one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

A May 24, 2022 Graz University of Technology (TU Graz) press release (also on EurekAlert), which originated the news item, delves further into the research. Note: Links have been removed,

The research was funded by The Human Brain Project (HBP), one of the largest research projects in the world with more than 500 scientists and engineers across Europe studying the human brain. The results of the research are published in the research paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” [sic] (DOI 10.1038/s42256-022-00480-w), which is published in Nature Machine Intelligence.

Human brain as a role model

Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.

In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.

Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

“Intel’s Loihi research chips promise to bring gains in AI, especially by lowering their high energy cost,“ said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by re-thinking their implementation from the perspective of biology.”

Mimicking human short-term memory

In their neuromorphic network, the group reproduced a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral supervisor at the Institute of Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”

Direct proof is lacking because these internal variables cannot yet be measured, but it does mean that the network only needs to test which neurons are currently fatigued to reconstruct what information it has previously processed. In other words, previous information is stored in the non-activity of neurons, and non-activity consumes the least energy.

Symbiosis of recurrent and feed-forward network

The researchers link two types of deep learning networks for this purpose. Feedback neural networks are responsible for “short-term memory.” Many such so-called recurrent modules filter out possibly relevant information from the input signal and store it. A feed-forward network then determines which of the relationships found are very important for solving the task at hand. Meaningless relationships are screened out, and the neurons fire only in those modules where relevant information has been found. This process ultimately leads to energy savings.
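The neuron “fatigue” mechanism mentioned earlier can be sketched in a few lines of code: give each neuron an adaptive threshold that jumps after it spikes and then decays back, and recent inputs end up stored in which neurons are currently fatigued rather than in ongoing (energy-consuming) spiking. The following Python toy model is my own illustrative sketch; the structure and every parameter value are invented for illustration and are not taken from the paper,

```python
import numpy as np

def run_adaptive_neurons(inputs, n_neurons=4, v_thresh=1.0,
                         beta=1.5, tau_adapt=0.7):
    """Leaky integrate-and-fire neurons with adaptive ('fatiguing') thresholds.

    After a neuron spikes, its threshold rises by `beta` and then decays
    by a factor of `tau_adapt` each step, so recent activity is stored in
    the elevated thresholds (the neurons' fatigue) rather than in
    continued spiking.
    """
    v = np.zeros(n_neurons)        # membrane potentials
    fatigue = np.zeros(n_neurons)  # threshold adaptation per neuron
    spike_trains = []
    for x in inputs:               # x: input current for each neuron
        v = 0.9 * v + x            # leaky integration of the input
        fired = v > (v_thresh + fatigue)
        v[fired] = 0.0             # reset the membrane after a spike
        fatigue = tau_adapt * fatigue + beta * fired
        spike_trains.append(fired.copy())
    return np.array(spike_trains), fatigue

# Drive only neuron 0 for three steps, then present silence.
inputs = [np.array([2.0, 0.0, 0.0, 0.0])] * 3 + [np.zeros(4)] * 3
spike_trains, fatigue = run_adaptive_neurons(inputs)
# During the silent steps no neuron spikes, yet neuron 0's residual
# fatigue still records that it was the one recently driven.
```

Reading out which neurons are fatigued, as the press release describes, recovers recently processed information without any further spiking activity, which is exactly where the energy saving comes from.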

“Recurrent neural structures are expected to provide the greatest gains for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to facilitate the fast, sparse and unpredictable patterns of network activity that we observe in the brain and need for the most energy efficient AI applications.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. For this purpose, the project is creating a permanent digital research infrastructure, EBRAINS. This research work is anchored in the Fields of Expertise Human and Biotechnology and Information, Communication & Computing, two of the five Fields of Expertise of TU Graz.

Here’s a link to and a citation for the paper,

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware by Arjun Rao, Philipp Plank, Andreas Wild & Wolfgang Maass. Nature Machine Intelligence (2022) DOI: https://doi.org/10.1038/s42256-022-00480-w Published: 19 May 2022

This paper is behind a paywall.

For anyone interested in the EBRAINS project, here’s a description from their About page,

EBRAINS provides digital tools and services which can be used to address challenges in brain research and brain-inspired technology development. Its components are designed with, by, and for researchers. The tools assist scientists to collect, analyse, share, and integrate brain data, and to perform modelling and simulation of brain function.

EBRAINS’ goal is to accelerate the effort to understand human brain function and disease.

This EBRAINS research infrastructure is the entry point for researchers to discover EBRAINS services. The services are being developed and powered by the EU-funded Human Brain Project.

You can register to use the EBRAINS research infrastructure here.

One last note, the Human Brain Project is a major European Union (EU)-funded science initiative (1B Euros) announced in 2013 and to be paid out over 10 years.

Simulating neurons and synapses with memristive devices

I’ve been meaning to get to this research on ‘neuromorphic memory’ for a while. From a May 20, 2022 news item on Nanowerk,

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated.

However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge.

A May 20, 2022 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, delves further into the research,

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

Similar to commercial graphics cards, the artificial synaptic devices previously studied were often used to accelerate parallel computations, which shows clear differences from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
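As a thought experiment, the coexistence of volatile (short-term) and non-volatile (long-term) memory can be mimicked with two coupled state variables: one that decays every time step, like a threshold switch, and one that changes only under strong, repeated stimulation and then retains its value, like phase-change memory. This Python toy model is entirely my own invention for illustration and says nothing about the actual device physics in the KAIST work,

```python
def stimulate(steps, pulse, tau_short=0.5, learn_rate=0.2):
    """Toy model of coexisting short-term and long-term memory.

    `short_mem` mimics a volatile threshold switch: it decays toward
    zero on every step. `long_mem` mimics non-volatile phase-change
    memory: it changes only when the short-term state is strongly
    driven, and it retains whatever value it reached.
    """
    short_mem, long_mem = 0.0, 0.0
    for stimulated in steps:
        short_mem = tau_short * short_mem + (pulse if stimulated else 0.0)
        if short_mem > 1.0:  # strong or repeated stimulation
            long_mem += learn_rate * (short_mem - 1.0)  # consolidate
    return short_mem, long_mem

# A single pulse fades away; repeated pulses consolidate into
# long-term memory that survives after stimulation stops.
s_once, l_once = stimulate([True] + [False] * 10, pulse=0.8)
s_many, l_many = stimulate([True] * 6 + [False] * 10, pulse=0.8)
```

The interaction between the two variables is the point: repetition in the volatile state is what writes the non-volatile one, loosely echoing the “retraining effect” Professor Lee describes.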

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”

Here’s a link to and a citation for the paper,

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im & Keon Jae Lee. Nature Communications volume 13, Article number: 2811 (2022) DOI: https://doi.org/10.1038/s41467-022-30432-2 Published: 19 May 2022

This paper is open access.

Memristive control of mutual spin

It may be my imagination but it seems I’m stumbling across more research on neuromorphic (brainlike) computing than usual this year; in May 2022 alone, I stumbled across three items. Today (August 24, 2022), here’s a May 14, 2022 news item on Nanowerk describing some work from the University of Gothenburg (Sweden),

Artificial Intelligence (AI) is making it possible for machines to do things that were once considered uniquely human. With AI, computers can use logic to solve problems, make decisions, learn from experience and perform human-like tasks. However, they still cannot do this as effectively and energy efficiently as the human brain.

Research conducted with support from the EU-funded TOPSPIN and SpinAge projects has brought scientists a step closer to achieving this goal.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades,” observes Prof. Johan Åkerman of TOPSPIN project host University of Gothenburg, Sweden. “Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” continues Prof. Åkerman, who is also the founder and CEO of SpinAge project partner NanOsc, also in Sweden.

A May 13, 2022 CORDIS press release, which originated the news item, provides more detail,

The research team succeeded in combining a memory function and a calculation function in one component for the very first time. The achievement is described in their study published in the journal ‘Nature Materials’. The memory and calculation functions were combined by linking oscillator networks and memristors – the two main tools needed to carry out advanced calculations. Oscillators are described as oscillating circuits capable of performing calculations. Memristors, short for memory resistors, are electronic devices whose resistance can be programmed and remains stored. In other words, the memristor’s resistance performs a memory function by remembering what value it had when the device was powered on.
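The memristor behaviour described here, a resistance that can be programmed and then persists as a memory, can be sketched as a toy model (illustrative only; real memristive devices follow device-specific physics):

```python
# Toy memristor: resistance drifts with the voltage pulses applied to it,
# and the value persists ("is remembered") once the drive is removed.

class ToyMemristor:
    def __init__(self, r_min=100.0, r_max=10_000.0):
        self.r_min, self.r_max = r_min, r_max
        self.state = 0.0          # 0 -> high resistance, 1 -> low resistance

    @property
    def resistance(self):
        return self.r_max - self.state * (self.r_max - self.r_min)

    def apply_pulse(self, voltage, dt=1e-3, k=50.0):
        """A voltage pulse nudges the internal state; its sign sets direction."""
        self.state = min(1.0, max(0.0, self.state + k * voltage * dt))

m = ToyMemristor()
high = m.resistance            # pristine device: high resistance
for _ in range(10):
    m.apply_pulse(+1.0)        # program with positive pulses
low = m.resistance             # resistance has dropped...
remembered = m.resistance      # ...and stays put with no further drive
print(high, low, remembered)
```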

A major development

Prof. Åkerman comments on the discovery: “This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

As reported in the news item, Prof. Åkerman believes this achievement will lead to the development of technologies that are faster, easier to use and less energy-consuming. Also, the fact that hundreds of components can fit into an area the size of a single bacterium could have a significant impact on smaller applications. “More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

Prof. Åkerman concludes: “The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.” The TOPSPIN (Topotronic multi-dimensional spin Hall nano-oscillator networks) and SpinAge (Weighted Spintronic-Nano-Oscillator-based Neuromorphic Computing System Assisted by laser for Cognitive Computing) projects end in 2024.

For more information, please see:
TOPSPIN project
SpinAge project

The University of Gothenburg first announced the research in a November 29, 2021 press release on EurekAlert,

Research has long strived to develop computers to work as energy efficiently as our brains. A study, led by researchers at the University of Gothenburg, has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies, everything from mobile phones to self-driving cars.

In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.

Important breakthrough
Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.

Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory. This makes them comparable to memory cells. Integrating the two is a major advancement by the researchers.

“This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

Enables energy-efficient technologies
According to Johan Åkerman, the discovery will enable faster, easier to use and less energy consuming technologies in many areas. He feels that it is a huge advantage that the research team has successfully produced the components in an extremely small footprint: hundreds of components fit into an area equivalent to a single bacterium. This can be of particular importance in smaller applications like mobile phones.

“More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

He notes self-driving cars and drones as other examples of where more energy-efficient calculations could drive developments.

“The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.”

Here’s a link to and a citation for the paper,

Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing by Mohammad Zahedinejad, Himanshu Fulara, Roman Khymyn, Afshin Houshang, Mykola Dvornik, Shunsuke Fukami, Shun Kanai, Hideo Ohno & Johan Åkerman. Nature Materials volume 21, pages 81–87 (2022) DOI: https://doi.org/10.1038/s41563-021-01153-6 First Published: 29 November 2021 Issue Date: January 2022

This paper is behind a paywall.

Neuromorphic hardware could yield computational advantages for more than just artificial intelligence

Neuromorphic (brainlike) computing doesn’t have to be used for cognitive tasks only, according to a research team at the US Dept. of Energy’s Sandia National Laboratories, as per their March 11, 2022 news release by Neal Singer (also on EurekAlert but published March 10, 2022), Note: Links have been removed,

With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories. …

The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”

In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.

The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.

“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

Franke models photon and electron radiation to understand their effects on components.

The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”

The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.

Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor. Energy is the limiting factor — more chips can be inserted to run things in parallel, thus faster, but the same electric bill occurs whether it is one computer doing everything or 10,000 computers doing the work. Image courtesy of Sandia National Laboratories.

Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.

There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”

Severa wrote several of the experiment’s algorithms.

Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking.

Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not.

The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
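The accumulate-charge-then-spike behaviour described above is essentially the classic leaky integrate-and-fire neuron model, which can be sketched in a few lines (a generic textbook model, not Sandia's hardware):

```python
# Minimal leaky integrate-and-fire neuron: inputs add charge to a membrane
# potential; when it crosses a threshold the neuron spikes and resets.

def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current        # accumulate charge, with leakage
        if v >= threshold:
            spikes.append(t)          # emit a spike...
            v = 0.0                   # ...and reset the potential
    return spikes

# A drip of small, uneven inputs produces occasional, irregular-looking spikes
spike_times = integrate_and_fire([0.3, 0.1, 0.4, 0.5, 0.0, 0.6, 0.2, 0.7])
print(spike_times)
```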

The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.

Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.

“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”

The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
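The approach described in the preceding paragraphs, Markov steps where only the current position matters, repeated many times Monte Carlo style, can be sketched for a barrier-crossing walk (a plain-CPU illustration of the algorithm, not the neuromorphic implementation; the lattice and barrier values are made up for the example):

```python
import random

# Monte Carlo random walks on a 1-D lattice: each walker starts at 0 and takes
# Markov steps (the next position depends only on the current position, not on
# the path so far) until it either crosses a barrier at +10 or exits at -10.

def walk(rng, barrier=10, floor=-10):
    pos = 0
    while floor < pos < barrier:
        pos += rng.choice((-1, 1))   # one Markov step: +1 or -1
    return pos >= barrier            # True if the walker crossed the barrier

rng = random.Random(42)              # seeded for reproducibility
n_walkers = 2_000
crossings = sum(walk(rng) for _ in range(n_walkers))
fraction = crossings / n_walkers
print(f"{fraction:.2f} of walkers crossed the barrier")
```

By symmetry, roughly half the walkers should cross; on neuromorphic hardware each step would instead be driven by stochastic neuron spikes, with summary statistics read out at the end.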

The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.

“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”

Here’s a link to and a citation for the paper,

Neuromorphic scaling advantages for energy-efficient random walk computations by J. Darby Smith, Aaron J. Hill, Leah E. Reeder, Brian C. Franke, Richard B. Lehoucq, Ojas Parekh, William Severa & James B. Aimone. Nature Electronics volume 5, pages 102–112 (2022) DOI: https://doi.org/10.1038/s41928-021-00705-7 Issue Date February 2022 Published 14 February 2022

This paper is open access.

Say goodbye to crunchy (ice crystal-laden) ice cream thanks to cellulose nanocrystals (CNC)

The American Chemical Society (ACS) held its 2022 Spring Meeting from March 20 to 24, 2022, and it seems like a good excuse to feature ice cream.

Adding cellulose nanocrystals prevents the growth of small ice crystals (bottom left) into the large ones (top left) that can make ice cream (right) unpleasantly crunchy. Scale bar = 100 μm. Credit: Tao Wu

A March 20, 2022 news item on phys.org introduces an ice cream presentation given at the meeting on Monday, March 20, 2022,

Ice cream can be a culinary delight, except when it gets unpleasantly crunchy because ice crystals have grown in it. Today, scientists report that a form of cellulose obtained from plants can be added to the tasty treat to stop crystals cold—and the additive works better than currently used ice growth inhibitors in the face of temperature fluctuations. The findings could be extended to the preservation of other frozen foods and perhaps donated organs and tissues.

A March 20, 2022 ACS press release, which originated the news item, provides more details about crunchy ice cream and how it might be avoided,

Freshly made ice cream contains tiny ice crystals. But during storage and transport, the ice melts and regrows. During this recrystallization process, smaller crystals melt, and the water diffuses to join larger ones, causing them to grow, says Tao Wu, Ph.D., the project’s principal investigator. If the ice crystals become bigger than 50 micrometers — or roughly the diameter of a hair — the dessert takes on a grainy, icy texture that reduces consumer appeal, Wu says. “Controlling the formation and growth of ice crystals is thus the key to obtaining high-quality frozen foods.”
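The recrystallization process Wu describes, in which water leaves smaller crystals and joins larger ones, can be illustrated with a toy simulation (a deliberately simplified model; the growth rule and numbers are illustrative, not the study's measurements):

```python
# Toy model of ice recrystallization (Ostwald ripening): water leaves crystals
# smaller than the current mean size and joins those larger, so over time the
# big crystals grow at the expense of the small ones.

def recrystallize(sizes, steps=200, rate=0.02):
    sizes = list(sizes)
    for _ in range(steps):
        mean = sum(sizes) / len(sizes)
        # Smaller-than-average crystals shrink; larger ones grow.
        sizes = [max(0.0, s + rate * (s - mean)) for s in sizes]
    return sizes

fresh = [10, 20, 30, 40, 50]          # crystal diameters in micrometres
stored = recrystallize(fresh)
print(max(fresh), round(max(stored))) # the largest crystal has grown
```

A stabilizer that adsorbs onto crystal surfaces, as the CNCs are reported to do, would in effect set `rate` close to zero.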

One fix would be to copy nature’s solution: “Some fish, insects and plants can survive in sub-zero temperatures because they produce antifreeze proteins that fight the growth of ice crystals,” Wu says. But antifreeze proteins are costlier than gold and limited in supply, so they’re not practical to add to ice cream. Polysaccharides such as guar gum or locust bean gum are used instead. “But these stabilizers are not very effective,” Wu notes. “Their performance is influenced by many factors, including storage temperature and time, and the composition and concentration of other ingredients. This means they sometimes work in one product but not in another.” In addition, their mechanism of action is uncertain. Wu wanted to clarify how they work and develop better alternatives.

Although Wu didn’t use antifreeze proteins in the study, he drew inspiration from them. These proteins are amphiphilic, meaning they have a hydrophilic surface with an affinity for water, as well as a hydrophobic surface that repels water. Wu knew that nano-sized crystals of cellulose are also amphiphilic, so he figured it was worth checking if they could stop ice crystal growth in ice cream. These cellulose nanocrystals (CNCs) are extracted from the plant cell walls of agricultural and forestry byproducts, so they are inexpensive, abundant and renewable.

In a model ice cream — a 25% sucrose solution — the CNCs initially had no effect, says Min Li, a graduate student in Wu’s lab at the University of Tennessee. Though still small, ice crystals were the same size whether CNCs were present or not. But after the model ice cream was stored for a few hours, the researchers found that the CNCs completely shut down the growth of ice crystals, while the crystals continued to enlarge in the untreated model ice cream.

The team’s tests also revealed that the cellulose inhibits ice recrystallization through surface adsorption. CNCs, like antifreeze proteins, appear to stick to the surfaces of ice crystals, preventing them from drawing together and fusing. “This completely contradicted the existing belief that stabilizers inhibit ice recrystallization by increasing viscosity, which was thought to slow diffusion of water molecules,” adds Li, who will present the work at the meeting.

In their latest study, the scientists found that CNCs are more protective than current stabilizers when ice cream is exposed to fluctuating temperatures, such as when the treat is stored in the supermarket and then taken home. The team also discovered the additive can slow the melting of ice crystals, so it could be used to produce slow-melting ice cream. Other labs have shown the stabilizer is nontoxic at the levels needed in food, Wu notes, but the additive would require review by the U.S. Food and Drug Administration.

With further research, CNCs could be used to protect the quality of other foods — such as frozen dough and fish — or perhaps to preserve cells, tissues and organs in biomedicine, Wu says. “At present, a heart must be transplanted within a few hours after being removed from a donor,” he explains. “But this time limit could be eliminated if we could inhibit the growth of ice crystals when the heart is kept at low temperatures.”

Interesting to see that this research into ice cream crystals could lead to new techniques for organ transplants.

Maybe spray-on technology can be used for heart repair?

Courtesy: University of Ottawa

That is a pretty stunning image and this March 15, 2022 news item on phys.org provides an explanation of what you see (Note: A link has been removed),

Could a spritz of super-tiny particles of gold and peptides on a damaged heart potentially provide minimally invasive, on-the-spot repair?

Cutting-edge research led by University of Ottawa Faculty of Medicine Associate Professors Dr. Emilio Alarcon and Dr. Erik Suuronen suggests a spray-on technology using customized nanoparticles of one of the world’s most precious metals offers tremendous therapeutic potential and could eventually help save many lives. Cardiovascular diseases are the leading cause of death globally, claiming roughly 18 million lives each year.

In a paper recently published online in ACS Nano, a peer-reviewed journal that highlighted the new research on its supplementary cover, Dr. Alarcon and his team of fellow investigators suggest that this approach might one day be used in conjunction with coronary artery bypass surgeries. That’s the most common type of heart surgery.

A March 15, 2022 University of Ottawa news release (also on EurekAlert) by David McFadden, which originated the news item, describes the research in more detail (Note: A link has been removed),

The therapy tested by the researchers – which was sprayed on the hearts of lab mice – used very low concentrations of peptide-modified particles of gold created in the laboratory. From the nozzle of a miniaturized spraying apparatus, the material can be evenly painted on the surface of a heart within a few seconds.

Gold nanoparticles have been shown to have some unusual properties and are highly chemically reactive. For years, researchers have been employing gold nanoparticles – so tiny they are undetectable by the human eye – in such a wide range of technologies that it’s become an area of intense research interest.

In this case, the custom-made nanogold modified with peptides—a short chain of amino acids —was sprayed on the hearts of lab mice. The research found that the spray-on therapy not only resulted in an increase in cardiac function and heart electrical conductivity but that there was no off-target organ infiltration by the tiny gold particles.

“That’s the beauty of this approach. You spray, then you wait a couple of weeks, and the animals are doing just fine compared to the controls,” says Dr. Alarcon, who is part of the Faculty of Medicine’s Department of Biochemistry, Microbiology and Immunology and also Director of the Bio-nanomaterials Chemistry and Engineering Laboratory at the University of the Ottawa Heart Institute.

Dr. Alarcon says that not only does the data suggest that the therapeutic action of the spray-on nanotherapeutic is highly effective, but its application is far simpler than other regenerative approaches for treating an infarcted heart.

At first, the observed improvement of cardiac function and electrical signal propagation in the hearts of tested mice was hard for the team to believe. But repeated experiments delivered the same positive results, according to Dr. Alarcon.

To validate the exciting findings in mice, the team is now seeking to adapt this technology to minimally invasive procedures that will expedite testing in large animal models, such as rabbits and pigs.

Dr. Alarcon praised the research culture at uOttawa and the Heart Institute, saying that the freedom to explore is paramount. “When you have an environment where you are allowed to make mistakes and criticize, that really drives discoveries,” he says.

The team involved in the paper includes researchers from uOttawa and the University of Talca in Chile. Part of the work was funded by the Canadian government’s New Frontiers in Research Fund, which was launched in 2018 and supports transformative high risk/high reward research led by Canadian researchers working with local and international partners.

Here’s a link to and a citation for the paper,

Nanoengineered Sprayable Therapy for Treating Myocardial Infarction by Marcelo Muñoz, Cagla Eren Cimenci, Keshav Goel, Maxime Comtois-Bona, Mahir Hossain, Christopher McTiernan, Matias Zuñiga-Bustos, Alex Ross, Brenda Truong, Darryl R. Davis, Wenbin Liang, Benjamin Rotstein, Marc Ruel, Horacio Poblete, Erik J. Suuronen, and Emilio I. Alarcon. ACS Nano 2022, 16, 3, 3522–3537 DOI: https://doi.org/10.1021/acsnano.1c08890 Publication Date: February 14, 2022 Copyright © 2022 The Authors. Published by American Chemical Society

This paper appears to be open access.

Harvest fresh water from dry air with hydrogels

Turning Air Into Drinking Water from University of Texas at Austin on Vimeo. Video by Thomas Swafford. Written by Sara Robberson Lentz.

Seems almost magical but it takes years to do this research. That video was posted in September 2019 and the latest research is being announced in a February 28, 2022 news item on phys.org,

Hydrogels have an astonishing ability to swell and take on water. In daily life, they are used in dressings, nappies, and more to lock moisture away. A team of researchers has now found another use: quickly extracting large amounts of freshwater from air using a specially developed hydrogel containing a hygroscopic salt. The study, published in the journal Angewandte Chemie, shows that the salt enhances the moisture uptake of the gel, making it suitable for water harvesting in dry regions.

A February 28, 2022 Wiley Publishing news release on EurekAlert delves further into hydrogels and the research into how they might be used to harvest water from the air,

Hydrogels can absorb and store many times their weight in water. In so doing, the underlying polymer swells considerably by incorporating water. However, to date, use of this property to produce freshwater from atmospheric water has not been feasible, since collecting moisture from the air is still too slow and inefficient.

On the other hand, moisture absorption could be enhanced by adding hygroscopic salts that can rapidly remove large amounts of moisture from the air. However, hygroscopic salts and hydrogels are usually not compatible, as a large amount of salt influences the swelling capability of the hydrogel and thus degrades its properties. In addition, the salt ions are not tightly coordinated within the gel and are easily washed away.

The materials scientist Guihua Yu and his team at the University of Texas at Austin, USA, have now overcome these issues by developing a particularly “salt-friendly” hydrogel. As their study shows, this gel gains the ability to absorb and retain water when combined with a hygroscopic salt. Using their hydrogel, the team were able to extract almost six liters of pure water per kilo of material in 24 hours, from air with 30% relative humidity.

The basis for the new hydrogel was a polymer constructed from zwitterionic molecules. Polyzwitterions carry both positive and negative charged functional groups, which helped the polymer to become more responsive to the salt in this case. Initially, the molecular strands in the polymer were tightly intermingled, but when the researchers added the lithium chloride salt, the strands relaxed and a porous, spongy hydrogel was formed. This hydrogel loaded with the hygroscopic salt was able to incorporate water molecules quickly and easily.

In fact, water incorporation was so quick and easy that the team were able to set up a cyclical system for continuous water separation. They left the hydrogel for an hour each time to absorb atmospheric moisture, then dried the gel in a condenser to collect the condensed water. They repeated this procedure multiple times without it resulting in any substantial loss of the amount of water absorbed, condensed, or collected.
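The headline figure of roughly six litres per kilogram per day implies a modest per-cycle yield if, as a simplifying assumption, the hourly absorb-and-condense cycles run around the clock:

```python
# Back-of-envelope yield estimate from the reported figures. The 24
# cycles/day is an assumption (the release describes hourly absorption).

daily_yield_l_per_kg = 6.0      # ~6 L of water per kg of hydrogel per day
cycles_per_day = 24             # assumed: one absorb/condense cycle per hour

per_cycle_l = daily_yield_l_per_kg / cycles_per_day
print(f"about {per_cycle_l * 1000:.0f} mL per kg per cycle")

# Scaling up: hydrogel mass needed for a person's ~3 L/day drinking water
mass_needed_kg = 3.0 / daily_yield_l_per_kg
print(f"about {mass_needed_kg:.1f} kg of hydrogel per person per day")
```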

Yu and the team say that the as-prepared hydrogel “should be optimal for efficient moisture harvesting for the potential daily water yield”. They add that polyzwitterionic hydrogels could play a fundamental role in the future for recovering atmospheric water in arid, drought-stricken regions.

Here’s a link to and a citation for the paper,

Polyzwitterionic Hydrogels for Efficient Atmospheric Water Harvesting by Chuxin Lei, Youhong Guo, Weixin Guan, Hengyi Lu, Wen Shi, Guihua Yu. Angewandte Chemie International Edition Volume 61, Issue 13, March 21, 2022, e202200271 DOI: https://doi.org/10.1002/anie.202200271 First published: 28 January 2022

This paper is behind a paywall.