
Taxonomies (classification schemes) rouse passions

There seems to have been some lively debate among biologists about matters most of us treat as invisible: naming, establishing, and classifying categories. These activities can become quite visible when learning a new language, e.g., French, which divides nouns into two genders, or German, which assigns nouns any of three genders.

A July 26, 2020 essay by Stephen Garnett (Professor of Conservation and Sustainable Livelihoods, Charles Darwin University, Australia), Les Christidis (Professor, Southern Cross University, Australia), Richard L. Pyle (Associate lecturer, University of Hawaii, US), and Scott Thomson (Research associate, Universidade de São Paulo, Brazil) for The Conversation (also on phys.org but published July 27, 2020) describes a very heated debate over taxonomy,

Taxonomy, or the naming of species, is the foundation of modern biology. It might sound like a fairly straightforward exercise, but in fact it’s complicated and often controversial.

Why? Because there’s no one agreed list of all the world’s species. Competing lists exist for organisms such as mammals and birds, while other less well-known groups have none. And there are more than 30 definitions of what constitutes a species [emphasis mine]. This can make life difficult for biodiversity researchers and those working in areas such as conservation, biosecurity and regulation of the wildlife trade.

In the past few years, a public debate erupted among global taxonomists, including those who authored and contributed to this article, about whether the rules of taxonomy should be changed. Strongly worded ripostes were exchanged. A comparison to Stalin [emphasis mine] was floated.

Here’s how it started,

In May 2017 two of the authors, Stephen Garnett and Les Christidis, published an article in Nature. They argued taxonomy needed rules around what should be called a species, because currently there are none. They wrote:

“… for a discipline aiming to impose order on the natural world, taxonomy (the classification of complex organisms) is remarkably anarchic […] There is reasonable agreement among taxonomists that a species should represent a distinct evolutionary lineage. But there is none about how a lineage should be defined.

‘Species’ are often created or dismissed arbitrarily, according to the individual taxonomist’s adherence to one of at least 30 definitions. Crucially, there is no global oversight of taxonomic decisions — researchers can ‘split or lump’ species with no consideration of the consequences.”

Garnett and Christidis proposed that any changes to the taxonomy of complex organisms be overseen by the highest body in the global governance of biology, the International Union of Biological Sciences (IUBS), which would “restrict […] freedom of taxonomic action.”

… critics rejected the description of taxonomy as “anarchic”. In fact, they argued there are detailed rules around the naming of species administered by groups such as the International Commission on Zoological Nomenclature and the International Code of Nomenclature for algae, fungi, and plants. For 125 years, the codes have been almost universally adopted by scientists.

So in March 2018, 183 researchers – led by Scott Thomson and Richard Pyle – wrote an animated response to the Nature article, published in PLoS Biology [PLoS is Public Library of Science; it is an open access journal].

They wrote that Garnett and Christidis’ IUBS proposal was “flawed in terms of scientific integrity […] but is also untenable in practice”. They argued:

“Through taxonomic research, our understanding of biodiversity and classifications of living organisms will continue to progress. Any system that restricts such progress runs counter to basic scientific principles, which rely on peer review and subsequent acceptance or rejection by the community, rather than third-party regulation.”

In a separate paper, another group of taxonomists accused Garnett and Christidis of trying to suppress freedom of scientific thought, likening them to Stalin’s science advisor Trofim Lysenko.

The various parties did come together,

We hope by 2030, a scientific debate that began with claims of anarchy might lead to a clear governance system – and finally, the world’s first endorsed global list of species.

As for how they got to a “clear governance system”, there’s the rest of the July 26, 2020 essay on The Conversation or there’s the copy on phys.org (published July 27, 2020).

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
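The discharge/recharge training described above amounts to a write-verify loop: pulse the device, read its state, and stop once it is within 1 percent of the target. Here is a minimal sketch of that idea with an entirely made-up toy device model (the step size and all constants are illustrative assumptions, not figures from the paper):

```python
def program_state(target, read_state, apply_pulse, tol=0.01, max_pulses=1000):
    """Pulse the device until its state is within `tol` (1%) of `target`.
    The state is non-volatile, so nothing needs refreshing afterwards."""
    for _ in range(max_pulses):
        state = read_state()
        error = (state - target) / target
        if abs(error) <= tol:
            return state
        # Discharge if above the target, recharge if below.
        apply_pulse(-1 if error > 0 else +1)
    return read_state()

class ToyDevice:
    """Hypothetical stand-in: each pulse nudges the state by a fixed step."""
    def __init__(self, state=0.0, step=0.002):
        self.state, self.step = state, step
    def read(self):
        return self.state
    def pulse(self, direction):
        self.state += direction * self.step

dev = ToyDevice()
final = program_state(0.5, dev.read, dev.pulse)
```

The real device physics are electrochemical rather than a fixed step per pulse; the sketch only shows why repeated program-and-verify cycles converge on a target state.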

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of between 93 and 97 percent.
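A simulation like the one described stores network weights as discrete device conductance states rather than continuous numbers. The quantization step alone can be sketched as follows, snapping continuous weights to the nearest of 500 evenly spaced levels (the weight range and array size here are illustrative assumptions; the Sandia simulation used measured device data):

```python
import numpy as np

def quantize(weights, n_states=500, w_min=-1.0, w_max=1.0):
    """Snap continuous weights to the nearest of n_states evenly
    spaced conductance levels, as a device array would store them."""
    levels = np.linspace(w_min, w_max, n_states)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(8, 8))   # toy weight matrix
wq = quantize(w)

# Worst-case rounding error is half the spacing between levels.
spacing = 2.0 / (500 - 1)
max_err = np.abs(w - wq).max()
```

With 500 states the worst-case rounding error per weight is tiny (about 0.2 percent of the full range), which is one reason the simulated array could stay in the 93–97 percent accuracy band despite using discrete device states.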

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
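The 10,000-fold figure follows directly from the numbers in the paper’s abstract: the device switches at under 10 pJ per event, while a biological synapse needs roughly 1–100 fJ. A quick check of that arithmetic at the low (most demanding) end of the biological range:

```python
# Energy-per-event comparison, using figures from the paper's abstract.
device_energy = 10e-12       # < 10 pJ per switching event
bio_synapse_low = 1e-15      # biological synapse, ~1 fJ (low end of 1-100 fJ)

ratio = device_energy / bio_synapse_low
print(f"device uses about {ratio:,.0f}x the energy of a biological synapse")
```

Against the high end of the biological range (100 fJ), the gap shrinks to about 100x, which is why the researchers expect smaller devices to close much of the distance.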

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).