An August 31, 2023 news item on ScienceDaily highlights the power of an introspective AI,
An artificial intelligence with the ability to look inward and fine tune its own neural network performs better when it chooses diversity over lack of diversity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.
“We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”
An August 31, 2023 North Carolina State University (NCSU) news release (also on EurekAlert) describes how an AI can become ‘introspective’ and employ neural ‘diversity’ (Note: A link has been removed),
Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
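The release's dog-photo description boils down to a guess-and-adjust loop on weights and biases. As a minimal sketch of that loop, assuming a toy two-class point problem in place of photos (the dataset, learning rate, and single-neuron setup are all illustrative, not from the study):

```python
import numpy as np

# Toy stand-in for the dog-photo example: classify 2-D points
# above vs. below the line y = x. Label 1 plays the role of "dog".
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

# One artificial neuron: numerical weights and a bias,
# adjusted a little after every round of guesses.
w = np.zeros(2)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    guess = sigmoid(X @ w + b)      # make a guess for every example
    error = guess - y               # see how far off the guesses are
    w -= lr * X.T @ error / len(X)  # nudge the weights toward reality
    b -= lr * error.mean()          # nudge the bias too

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

After a few hundred adjustments the weights settle near the true boundary, which is the "closer to reality" convergence the release describes.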
Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network.
Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.
“Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.
“Our AI could also decide between diverse or homogenous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
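Neither the release nor the post includes code, and the team's actual system learns its neuron types during training; purely as a loose illustration of "looking inward" at one's own neuron mixture, here is a toy sketch that compares a homogeneous network against randomly mixed neuron types on a small fitting task and keeps whichever mixture fits best. The task, the activation menu, and the random-feature training are all assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: fit y = sin(3x) with a small one-hidden-layer network.
x = np.linspace(-1, 1, 100)[:, None]
y = np.sin(3 * x)

# A menu of neuron types the network may mix.
ACTIVATIONS = {"tanh": np.tanh,
               "relu": lambda z: np.maximum(z, 0.0),
               "sin": np.sin}

def loss_for(mixture, trials=3):
    """Fit error for a hidden layer whose i-th neuron uses mixture[i].
    Training is random-feature style: random hidden weights, then a
    least-squares readout; best of a few random draws."""
    hidden = len(mixture)
    best = np.inf
    for _ in range(trials):
        W = rng.normal(scale=3.0, size=(1, hidden))
        b = rng.normal(scale=1.0, size=hidden)
        pre = x @ W + b
        H = np.column_stack([ACTIVATIONS[m](pre[:, i])
                             for i, m in enumerate(mixture)])
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)
        best = min(best, float(np.mean((H @ coef - y) ** 2)))
    return best

hidden = 12
# "Look inward": score one homogeneous network and 20 random mixtures,
# then keep the neuron composition that performed best.
candidates = [["tanh"] * hidden] + \
             [list(rng.choice(list(ACTIVATIONS), hidden)) for _ in range(20)]
losses = [loss_for(m) for m in candidates]
best = candidates[int(np.argmin(losses))]
print("chosen mixture of neuron types:", sorted(set(best)))
```

This crude search over fixed mixtures stands in for the release's "control knob": the system evaluates its own results and changes the type and mixture of its artificial neurons, rather than only the connection weights.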
The team tested the AI’s accuracy by asking it to perform a standard numerical classifying exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogenous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy.
According to Ditto, the diversity-based AI is up to 10 times more accurate than conventional AI in solving more complicated problems, such as predicting a pendulum’s swing or the motion of galaxies.
“We have shown that if you give an AI the ability to look inward and learn how it learns it will change its internal structure – the structure of its artificial neurons – to embrace diversity and improve its ability to learn and solve problems efficiently and more accurately,” Ditto says. “Indeed, we also observed that as the problems become more complex and chaotic the performance improves even more dramatically over an AI that does not embrace diversity.”
The research appears in Scientific Reports, and was supported by the Office of Naval Research (under grant N00014-16-1-3066) and by United Therapeutics. Former post-doctoral researcher Anshul Choudhary is first author. John Lindner, visiting professor and emeritus professor of physics at the College of Wooster, NC State graduate student Anil Radhakrishnan and Sudeshna Sinha, professor of physics at the Indian Institute of Science Education and Research Mohali, also contributed to the work.
Here’s a link to and a citation for the paper,
Neuronal diversity can improve machine learning for physics and beyond by Anshul Choudhary, Anil Radhakrishnan, John F. Lindner, Sudeshna Sinha & William L. Ditto. Scientific Reports volume 13, Article number: 13962 (2023) DOI: https://doi.org/10.1038/s41598-023-40766-6 Published: 26 August 2023
This paper is open access.