In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].
Now, in a paper published June 15, 2020 in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology, IIT) in Italy and at Eindhoven University of Technology in the Netherlands.
“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”
While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.
The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.
“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”
This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.
To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.
“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”
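For readers who like to tinker, here is a toy numerical sketch in Python of the behaviour described above: each neurotransmitter release nudges the second electrode's conductance, and a fraction of that change is retained, which is the 'learning' the device exhibits. The coupling and retention numbers are my own illustrative assumptions, not the researchers' model.

```python
# Toy model of the biohybrid synapse's behaviour as described above; the
# coupling and retention parameters are made up for illustration only.

def release_event(conductance, dopamine_amount, coupling=0.05, retention=0.3):
    """Return the electrode conductance after one neurotransmitter release."""
    transient_shift = coupling * dopamine_amount   # ions crossing the trench
    retained_shift = retention * transient_shift   # the irreversible part
    return conductance + retained_shift

state = 1.0                                # arbitrary starting conductance
for dose in [0.8, 0.5, 1.2]:               # three hypothetical dopamine releases
    state = release_event(state, dose)
    print(f"conductance after release: {state:.3f}")
```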
A first step
This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.
“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”
Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.
Here’s a link to and a citation for the paper,
A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020
For the first time, people with arm amputations can experience sensations of touch in a mind-controlled arm prosthesis that they use in everyday life. A study in the New England Journal of Medicine reports on three Swedish patients who have lived, for several years, with this new technology – one of the world’s most integrated interfaces between human and machine.
The advance is unique: the patients have used a mind-controlled prosthesis in their everyday life for up to seven years. For the last few years, they have also lived with a new function – sensations of touch in the prosthetic hand. This is a new concept for artificial limbs, which are called neuromusculoskeletal prostheses – as they are connected to the user’s nerves, muscles, and skeleton.
The research was led by Max Ortiz Catalan, Associate Professor at Chalmers University of Technology, in collaboration with Sahlgrenska University Hospital, University of Gothenburg, and Integrum AB, all in Gothenburg, Sweden. Researchers at Medical University of Vienna in Austria and the Massachusetts Institute of Technology in the USA were also involved.
“Our study shows that a prosthetic hand, attached to the bone and controlled by electrodes implanted in nerves and muscles, can operate much more precisely than conventional prosthetic hands. We further improved the use of the prosthesis by integrating tactile sensory feedback that the patients use to mediate how hard to grab or squeeze an object. Over time, the ability of the patients to discern smaller changes in the intensity of sensations has improved,” says Max Ortiz Catalan.
“The most important contribution of this study was to demonstrate that this new type of prosthesis is a clinically viable replacement for a lost arm. No matter how sophisticated a neural interface becomes, it can only deliver real benefit to patients if the connection between the patient and the prosthesis is safe and reliable in the long term. Our results are the product of many years of work, and now we can finally present the first bionic arm prosthesis that can be reliably controlled using implanted electrodes, while also conveying sensations to the user in everyday life”, continues Max Ortiz Catalan.
Since receiving their prostheses, the patients have used them daily in all their professional and personal activities.
The new concept of a neuromusculoskeletal prosthesis is unique in that it delivers several different features which have not been presented together in any other prosthetic technology in the world:
 It has a direct connection to a person’s nerves, muscles, and skeleton.
 It is mind-controlled and delivers sensations that are perceived by the user as arising from the missing hand.
 It is self-contained; all electronics needed are contained within the prosthesis, so patients do not need to carry additional equipment or batteries.
 It is safe and stable in the long term; the technology has been used without interruption by patients during their everyday activities, without supervision from the researchers, and it is not restricted to confined or controlled environments.
The newest part of the technology, the sensation of touch, is possible through stimulation of the nerves that used to be connected to the biological hand before the amputation. Force sensors located in the thumb of the prosthesis measure contact and pressure applied to an object while grasping. This information is transmitted to the patients’ nerves leading to their brains. Patients can thus feel when they are touching an object, its characteristics, and how hard they are pressing it, which is crucial for imitating a biological hand.
“Currently, the sensors are not the obstacle for restoring sensation,” says Max Ortiz Catalan. “The challenge is creating neural interfaces that can seamlessly transmit large amounts of artificially collected information to the nervous system, in a way that the user can experience sensations naturally and effortlessly.” The implantation of this new technology took place at Sahlgrenska University Hospital, led by Professor Rickard Brånemark and Doctor Paolo Sassu. Over a million people worldwide suffer from limb loss, and the end goal for the research team, in collaboration with Integrum AB, is to develop a widely available product suitable for as many of these people as possible.
“Right now, patients in Sweden are participating in the clinical validation of this new prosthetic technology for arm amputation,” says Max Ortiz Catalan. “We expect this system to become available outside Sweden within a couple of years, and we are also making considerable progress with a similar technology for leg prostheses, which we plan to implant in a first patient later this year.”
More about: How the technology works:
The implant system for the arm prosthesis is called e-OPRA and is based on the OPRA implant system created by Integrum AB. The implant system anchors the prosthesis to the skeleton in the stump of the amputated limb, through a process called osseointegration (osseo = bone). Electrodes are implanted in muscles and nerves inside the amputation stump, and the e-OPRA system sends signals in both directions between the prosthesis and the brain, just like in a biological arm.
The prosthesis is mind-controlled, via the electrical muscle and nerve signals sent through the arm stump and captured by the electrodes. The signals are passed into the implant, which goes through the skin and connects to the prosthesis. The signals are then interpreted by an embedded control system developed by the researchers. The control system is small enough to fit inside the prosthesis and it processes the signals using sophisticated artificial intelligence algorithms, resulting in control signals for the prosthetic hand’s movements.
The touch sensations arise from force sensors in the prosthetic thumb. The signals from the sensors are converted by the control system in the prosthesis into electrical signals which are sent to stimulate a nerve in the arm stump. The nerve leads to the brain, which then perceives the pressure levels against the hand.
The neuromusculoskeletal implant can connect to any commercially available arm prosthesis, allowing it to operate more effectively.
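As a rough illustration of the sensory-feedback path described above, here is a hedged Python sketch of how a thumb force reading might be mapped to a stimulation intensity for the implanted nerve electrode. The names, ranges and linear mapping are my own assumptions, not the e-OPRA implementation.

```python
# Hypothetical mapping from prosthetic-thumb force to nerve-stimulation current.
# Ranges and the linear relationship are illustrative assumptions only.

def force_to_stimulation(force_newtons, max_force=20.0,
                         min_current_ua=10.0, max_current_ua=100.0):
    """Map a thumb force reading to a stimulation current in microamps."""
    level = max(0.0, min(force_newtons / max_force, 1.0))   # normalize to 0..1
    return min_current_ua + level * (max_current_ua - min_current_ua)

for force in [0.5, 5.0, 15.0]:   # light touch, firm grasp, hard squeeze
    print(f"{force} N -> {force_to_stimulation(force):.1f} uA")
```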
More about: How the artificial sensation is experienced:
People who lose an arm or leg often experience phantom sensations, as if the missing body part remains although not physically present. When the force sensors in the prosthetic thumb react, the patients in the study feel that the sensation comes from their phantom hand. Precisely where on the phantom hand varies between patients, depending on which nerves in the stump receive the signals. The lowest level of pressure can be compared to touching the skin with the tip of a pencil. As the pressure increases, the feeling becomes stronger and increasingly ‘electric’.
I have read elsewhere that one of the most difficult aspects of dealing with a prosthetic is the loss of touch. This has to be exciting news for a lot of people. Here’s a link to and a citation for the paper,
What better way to say ‘Happy Canada Day’ than to highlight a data sonification project from HotPopRobot. Here is a partial list of the awards won by the HotPopRobot team (based in Toronto, Canada), from the hotpoprobot.com homepage,
Micro:bit Challenge North America Runners Up 2020.
NASA SpaceApps 2019, 2018, 2017, 2014.
Imagining the Skies 2019.
Jesse Ketchum Astronomy Award 2018.
Hon. Mention at 2019 NASA Planetary Defense Conference.
Emerald Code Grand Prize 2018.
Canadian Space Apps 2017.
Here’s more about this intriguing team from the site’s About Us page,
HotPopRobot is a maker-family enterprise co-founded in 2014 by Artash […], Arushi […], Rati, and Vikas to bring discussions on Science, Space Exploration, Astronomy, and Technology in our everyday conversation. It encourages families, kids, and youths to become creators (and not consumers), scientists, artists, or whatever they want to be by undertaking projects on space, robotics, coding, and science.
We started this enterprise after winning the NASA Space Apps Toronto 2014 Award for our Mars Rover: CuriousBot. We ended up among the top 5 NASA Space Apps Winners (people’s choice) globally! We won the NASA SpaceApps Challenge Toronto again in 2019, 2018, and 2017 as well as the Canadian Space Agency’s Space Apps Challenge 2017 for our project – “Yes I Can” which used RadarSat-2 satellite data to recreate the #Canada150 logo. We ended up getting invited to the Canadian Space Agency to present our project and meet the new Canadian Astronauts.
The latest project is a musical based on data sonification of data on COVID-19 impacts in Toronto, Canada. Here’s a video of the ‘Toronto COVID-19 Lockdown Musical’ or more formally the ‘Musical Scales Project’,
As of June 2020, Artash and Arushi are in grade eight and grade five, respectively, which means they are likely 13 and 10 years old now and were seven and four years old, respectively, when they and their parents started the HotPopRobot enterprise in 2014.
Definitely visit their website if you’re interested in artificial intelligence, robots, machine learning, as well as their other topics.
Regarding their latest project, here’s more about the Musical Scales Project from a June 19 (?), 2020 posting on their website,
The beauty of the human mind is that once you set it free, it soars high. Our minds too were teeming with big questions that we wanted to find answers to. Would the COVID19 lockdown have increased the bird density in the city skies? Would the closure of all economic activities have affected the rotation of the Earth? Would an alien civilization be able to figure out that something drastic must have happened on Earth?
All questions are good questions. But from our previous experiences of making projects, we knew we had to limit our imagination for the time being and focus on practicality to come up with a workable project design. Once we have made something and it works, we can always keep improving it or make newer versions of it.
So between the two of us [Artash and Arushi], we limited our questions to:
Have the noise levels on our streets gone down?
Has the air we breathe become cleaner?
Have the traffic levels on our streets gone down?
Has the lockdown affected the vibration of the Earth due to the stopping of businesses, economic, and construction activities?
We often have to dismantle some of our older projects to get the components for our newer projects. It is not a good feeling as we often use our older projects to give demonstrations at various public events. So where possible we try to make our projects modular so that we can use the same components for more than one project.
We ended up collecting the following sensors and cameras for this project.
Light Sensor: It measures the light around us. It has a photo-resistor whose value decreases when light falls on it. It is the base sensor that will help us visualize separate daily data readings as well as changes in data collected during day and night.
Sound Sensor: To listen to street noise around us. It is similar to a microphone but gives analog values of sound levels. This raw data then has to be calibrated to understand how it changes with the change in sound levels.
PM 2.5 Dust Sensor: It is a sensor to measure particulate matter of 2.5 microns in the air. There is a small heater in the sensor which directs the flow of air in the sensor in an upward direction (convection current). The flow of air passes through infrared light which bounces around. The more the light bounces, the more particulate matter there is and the more polluted the air.
Temperature Sensor: We wanted to see how much the temperature was changing around us. The sensor is just like a digital thermometer but it prints out the readings.
Humidity Sensor: It measures how damp the surrounding air is. We measure humidity and temperature as they both affect the pollution levels.
Intel RealSense Camera: To get a wide overview of the traffic on King Street. Its high resolution allows us to apply machine learning for object identification and tracking.
In addition to getting data from our sensors, we had to rely on external databases to get some other information.
Covid19 Infection Rates in Toronto: from City of Toronto Public Health website
The intensity of Night Lights Over Toronto: Using NASA Night Light Data to understand changes in night lights over Toronto during different weeks.
Seismic Vibrations in Toronto: We got the displacement data of Earth along the vertical direction from the Leslie Spit Seismic Station in Toronto.
We used the free Musical Algorithm software (www.musicalgorithms.org) to bring all the data together and create the COVID19 Lockdown Musical.
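If you want a feel for how data sonification works before diving into their instructions, here is a minimal Python sketch of the general idea: scale a daily data series onto the notes of a musical scale. It is my own illustration, not the Musical Algorithms pipeline the team actually used.

```python
# Minimal data-sonification sketch: map data values onto a C-major scale.
C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes C4..C5

def sonify(series):
    """Map each data point to a MIDI note number in the C-major scale."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1                          # avoid dividing by zero
    return [C_MAJOR_MIDI[round((v - lo) / span * (len(C_MAJOR_MIDI) - 1))]
            for v in series]

traffic = [120, 95, 60, 30, 25, 40, 80]            # hypothetical daily readings
print(sonify(traffic))                             # [72, 69, 65, 60, 60, 62, 67]
```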
The descriptions and instructions are comprehensive, which is very helpful if you’re planning your own project.
An Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and it has been broadly applied in domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. A Spiking Neural Network (SNN) is a type of biologically inspired ANN that performs information processing based on discrete-time spikes. It is more biologically realistic than classic ANNs and can potentially achieve a much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated with standard CMOS technology.
With the rapid development of the Internet of Things and intelligent hardware systems, a variety of intelligent devices have become pervasive in today’s society, providing many services and conveniences to people’s lives, but they also raise the challenge of running complex intelligent algorithms on small devices. Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, targeting resource-constrained, low-power small embedded devices. It has been fabricated with a 180 nm standard CMOS process and supports a maximum of 2,048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via a USB port.
The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, so it can be used to implement different functionalities as configured by the user. Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and for building brain-computer interface systems by interfacing with animal or human brains. As a prototype application in brain-computer interfaces, Figure 2 [not included here] describes an example of recognizing the user’s motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in a virtual environment. Unlike conventional EEG signal analysis algorithms, the input and output of Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
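To make the "highest firing rate wins" decoding concrete, here is a toy Python sketch of the principle. The leaky integrate-and-fire model, weights and spike probabilities are placeholders of my own, not the Darwin NPU's implementation.

```python
import numpy as np

# Toy spiking classifier: two output neurons integrate the same input spike
# train with different weights; the one that fires most often is the answer.

def count_output_spikes(input_spikes, weight, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron; returns how many times it fired."""
    potential, fired = 0.0, 0
    for spike in input_spikes:
        potential = leak * potential + weight * spike
        if potential >= threshold:
            fired += 1
            potential = 0.0        # reset after firing
    return fired

rng = np.random.default_rng(0)
eeg_spikes = (rng.random(200) < 0.3).astype(int)   # stand-in for encoded EEG

rates = {"left": count_output_spikes(eeg_spikes, weight=0.6),
         "right": count_output_spikes(eeg_spikes, weight=0.3)}
print(max(rates, key=rates.get), rates)            # highest firing rate wins
```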
The second generation of the Darwin Neural Processing Unit (Darwin NPU 2), as well as its corresponding toolchain and micro-operating system, was released in Hangzhou recently. The research was led by Zhejiang University, with Hangzhou Dianzi University and Huawei Central Research Institute participating in the development of the chip and its algorithms. The Darwin NPU 2 is aimed primarily at smart Internet of Things (IoT) applications. It can support up to 150,000 neurons, the largest neuron count achieved by any chip developed in China to date.
The Darwin NPU 2 is fabricated by standard 55nm CMOS technology. Every “neuromorphic” chip is made up of 576 kernels, each of which can support 256 neurons. It contains over 10 million synapses which can construct a powerful brain-inspired computing system.
“A brain-inspired chip can work like the neurons inside a human brain and it is remarkably unique in image recognition, visual and audio comprehension and naturalistic language processing,” said MA De, an associate professor at the College of Computer Science and Technology on the research team.
“In comparison with traditional chips, brain-inspired chips are more adept at processing ambiguous data, say, perception tasks. Another prominent advantage is their low energy consumption. In the process of information transmission, only those neurons that receive and process spikes will be activated while other neurons will stay dormant. In this case, energy consumption can be extremely low,” said Dr. ZHU Xiaolei at the School of Microelectronics.
To cater to the demands for voice business, Huawei Central Research Institute designed an efficient spiking neural network algorithm in accordance with the defining feature of the Darwin NPU 2 architecture, thereby increasing computing speeds and improving recognition accuracy tremendously.
Scientists have developed a host of applications, including gesture recognition, image recognition, voice recognition and decoding of electroencephalogram (EEG) signals, on the Darwin NPU 2 and reduced energy consumption by at least two orders of magnitude.
In comparison with the first generation of the Darwin NPU, which was developed in 2015, the Darwin NPU 2 has increased the number of neurons by roughly two orders of magnitude from the original 2,048 and has augmented the flexibility and plasticity of the chip configuration, expanding its potential applications appreciably. The improved brain-inspired chip is expected to help bring about a revolution in computer technology and artificial intelligence. At present, the brain-inspired chip adopts a relatively simplified neuron model, but neurons in a real brain are far more sophisticated, and many biological mechanisms have yet to be explored by neuroscientists and biologists. It is expected that in the not-too-distant future a fascinating improvement on the Darwin NPU 2 will come over the horizon.
I haven’t been able to find a recent (i.e., post 2017) research paper featuring Darwin but there is another chip and research on that one was published in July 2019. First, the news.
The Tianjic chip
A July 31, 2019 article in the New York Times by Cade Metz describes the research and offers what seems to be a jaundiced perspective about the field of neuromorphic computing (Note: A link has been removed),
As corporate giants like Ford, G.M. and Waymo struggle to get their self-driving cars on the road, a team of researchers in China is rethinking autonomous transportation using a souped-up bicycle.
This bike can roll over a bump on its own, staying perfectly upright. When the man walking just behind it says “left,” it turns left, angling back in the direction it came.
It also has eyes: It can follow someone jogging several yards ahead, turning each time the person turns. And if it encounters an obstacle, it can swerve to the side, keeping its balance and continuing its pursuit.
… Chinese researchers who built the bike believe it demonstrates the future of computer hardware. It navigates the world with help from what is called a neuromorphic chip, modeled after the human brain.
Here’s a video, released by the researchers, demonstrating the chip’s abilities,
The short video did not show the limitations of the bicycle (which presumably tips over occasionally), and even the researchers who built the bike admitted in an email to The Times that the skills on display could be duplicated with existing computer hardware. But in handling all these skills with a neuromorphic processor, the project highlighted the wider effort to achieve new levels of artificial intelligence with novel kinds of chips.
This effort spans myriad start-up companies and academic labs, as well as big-name tech companies like Google, Intel and IBM. And as the Nature paper demonstrates, the movement is gaining significant momentum in China, a country with little experience designing its own computer processors, but which has invested heavily in the idea of an “A.I. chip.”
If you can get past what seems to be a patronizing attitude, there are some good explanations and cogent criticisms in the piece (Metz’s July 31, 2019 article, Note: Links have been removed),
… it faces significant limitations.
A neural network doesn’t really learn on the fly. Engineers train a neural network for a particular task before sending it out into the real world, and it can’t learn without enormous numbers of examples. OpenAI, a San Francisco artificial intelligence lab, recently built a system that could beat the world’s best players at a complex video game called Dota 2. But the system first spent months playing the game against itself, burning through millions of dollars in computing power.
Researchers aim to build systems that can learn skills in a manner similar to the way people do. And that could require new kinds of computer hardware. Dozens of companies and academic labs are now developing chips specifically for training and operating A.I. systems. The most ambitious projects are the neuromorphic processors, including the Tianjic chip under development at Tsinghua University in China.
Such chips are designed to imitate the network of neurons in the brain, not unlike a neural network but with even greater fidelity, at least in theory.
Neuromorphic chips typically include hundreds of thousands of faux neurons, and rather than just processing 1s and 0s, these neurons operate by trading tiny bursts of electrical signals, “firing” or “spiking” only when input signals reach critical thresholds, as biological neurons do.
Tiernan Ray’s August 3, 2019 article about the chip for ZDNet.com offers some thoughtful criticism with a side dish of snark (Note: Links have been removed),
Nature magazine’s cover story [July 31, 2019] is about a Chinese chip [the Tianjic chip] that can run traditional deep learning code and also perform “neuromorphic” operations in the same circuitry. The work’s value seems obscured by a lot of hype about “artificial general intelligence” that has no real justification.
The term “artificial general intelligence,” or AGI, doesn’t actually refer to anything, at this point, it is merely a placeholder, a kind of Rorschach Test for people to fill the void with whatever notions they have of what it would mean for a machine to “think” like a person.
Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point, a research paper featured on the cover of this week’s Nature magazine about a new kind of computer chip developed by researchers at China’s Tsinghua University that could “accelerate the development of AGI,” they claim.
The chip is a strange hybrid of approaches, and is intriguing, but the work leaves unanswered many questions about how it’s made, and how it achieves what researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.
“This paper is an example of the good work that China is doing in AI,” says Linley Gwennap, longtime chip-industry observer and principal analyst with chip analysis firm The Linley Group. “But this particular idea isn’t going to take over the world.”
The premise of the paper, “Towards artificial general intelligence with hybrid Tianjic chip architecture,” is that to achieve AGI, computer chips need to change. That’s an idea supported by fervent activity these days in the land of computer chips, with lots of new chip designs being proposed specifically for machine learning.
The Tsinghua authors specifically propose that the mainstream machine learning of today needs to be merged in the same chip with what’s called “neuromorphic computing.” Neuromorphic computing, first conceived by Caltech professor Carver Mead in the early ’80s, has been an obsession for firms including IBM for years, with little practical result.
[Missing details about the chip] … For example, the part is said to have “reconfigurable” circuits, but how the circuits are to be reconfigured is never specified. It could be so-called “field programmable gate array,” or FPGA, technology or something else. Code for the project is not provided by the authors as it often is for such research; the authors offer to provide the code “on reasonable request.”
More important is the fact the chip may have a hard time stacking up to a lot of competing chips out there, says analyst Gwennap. …
“What the paper calls ANN and SNN are two very different means of solving similar problems, kind of like rotating (helicopter) and fixed wing (airplane) are for aviation,” says Gwennap. “Ultimately, I expect ANN [?] and SNN [spiking neural network] to serve different end applications, but I don’t see a need to combine them in a single chip; you just end up with a chip that is OK for two things but not great for anything.”
But you also end up generating a lot of buzz, and given the tension between the U.S. and China over all things tech, and especially A.I., the notion China is stealing a march on the U.S. in artificial general intelligence — whatever that may be — is a summer sizzler of a headline.
ANN could be either artificial neural network or something mentioned earlier in Ray’s article, a shortened version of CANN [continuous attractor neural network].
Shelly Fan’s August 7, 2019 article for the SingularityHub is almost as enthusiastic about the work as the podcasters for Nature magazine were (a little more about that later),
The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.
The country’s ambition is reflected in the team’s parting words.
“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.
Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF [University of California at San Francisco] to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, “Will AI Replace Us?” (Thames & Hudson) will be out April 2019.
On to Nature. Here’s a link to and a citation for the paper,
Towards artificial general intelligence with hybrid Tianjic chip architecture by Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, Feng Chen, Ning Deng, Si Wu, Yu Wang, Yujie Wu, Zheyu Yang, Cheng Ma, Guoqi Li, Wentao Han, Huanglong Li, Huaqiang Wu, Rong Zhao, Yuan Xie & Luping Shi. Nature volume 572, pages 106–111 (2019) DOI: https://doi.org/10.1038/s41586-019-1424-8 Published: 31 July 2019 Issue Date: 01 August 2019
This paper is behind a paywall.
The July 31, 2019 Nature podcast includes a segment about the Tianjic chip research from China at the 9 min. 13 sec. mark (AI hardware), or you can scroll down about 55% of the way through the page to the transcript of the interview with Luke Fleet, the Nature editor who dealt with the paper.
The pundits put me in mind of my own reaction when I heard about phones that could take pictures. I didn’t see the point but, as it turned out, there was a perfectly good reason for combining what had been two separate activities into one device. It was no longer just a telephone and I had completely missed the point.
This too may be the case with the Tianjic chip. I think it’s too early to say whether or not it represents a new type of chip or if it’s a dead end.
I’ve been meaning to get to this news item from late 2019 as it features work from a team that I’ve been following for a number of years now. First mentioned here in an October 17, 2011 posting, James Gimzewski has been working with researchers at the University of California at Los Angeles (UCLA) and researchers at Japan’s National Institute for Materials Science (NIMS) on neuromorphic computing.
This particular research had a protracted rollout with the paper being published in October 2019 and the last news item about it being published in mid-December 2019.
UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.
Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain — learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports (“Emergent dynamics of neuromorphic nanowire networks”), describes a network in a state of continuous flux.
“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”
The research is one early step along a path that could eventually lead to computers that physically and functionally resemble the brain — machines that may be capable of solving problems that contemporary computers struggle with, and that may require much less power than today’s computers do.
The device the researchers studied is made of a tangle of silver nanowires — with an average diameter of just 360 nanometers. (A nanometer is one-billionth of a meter.) The nanowires were coated in an insulating polymer about 1 nanometer thick. Overall, the device itself measured about 10 square millimeters — so small that it would take 25 of them to cover a dime.
Allowed to randomly self-assemble on a silicon wafer, the nanowires formed highly interconnected structures that are remarkably similar to those that form the neocortex, the part of the brain involved with higher functions such as language, perception and cognition.
One trait that differentiates the nanowire network from conventional electronic circuits is that electrons flowing through them cause the physical configuration of the network to change. In the study, electrical current caused silver atoms to migrate from within the polymer coating and form connections where two nanowires overlap. The system had about 10 million of these junctions, which are analogous to the synapses where brain cells connect and communicate.
The researchers attached two electrodes to the brain-like mesh to profile how the network performed. They observed “emergent behavior,” meaning that the network displayed characteristics as a whole that could not be attributed to the individual parts that make it up. This is another trait that makes the network resemble the brain and sets it apart from conventional computers.
After current flowed through the network, the connections between nanowires persisted for as much as one minute in some cases, which resembled the process of learning and memorization in the brain. Other times, the connections shut down abruptly after the charge ended, mimicking the brain’s process of forgetting.
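A crude way to picture that persistence and forgetting is an exponential relaxation of a junction's strength after the current stops. The sketch below is purely my own illustration, with made-up time constants, and is not the team's model.

```python
import numpy as np

# Illustrative decay of a nanowire junction after the driving current ends:
# a slow time constant stands in for "remembering", a fast one for "forgetting".

def junction_strength(t_seconds, boost=1.0, decay_time=60.0):
    """Exponential relaxation of a junction's strength after stimulation."""
    return boost * np.exp(-t_seconds / decay_time)

for label, tau in [("remembering", 60.0), ("forgetting", 2.0)]:
    print(label, [round(junction_strength(t, decay_time=tau), 2)
                  for t in (0, 10, 30, 60)])
```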
In other experiments, the research team found that with less power flowing in, the device exhibited behavior that corresponds to what neuroscientists see when they use functional MRI scanning to take images of the brain of a sleeping person. With more power, the nanowire network’s behavior corresponded to that of the wakeful brain.
The paper is the latest in a series of publications examining nanowire networks as a brain-inspired system, an area of research that Gimzewski helped pioneer along with Stieg, a UCLA research scientist and an associate director of CNSI.
“Our approach may be useful for generating new types of hardware that are both energy-efficient and capable of processing complex datasets that challenge the limits of modern computers,” said Stieg, a co-author of the study.
The borderline-chaotic activity of the nanowire network resembles not only signaling within the brain but also other natural systems such as weather patterns. That could mean that, with further development, future versions of the device could help model such complex systems.
In other experiments, Gimzewski and Stieg already have coaxed a silver nanowire device to successfully predict statistical trends in Los Angeles traffic patterns based on previous years’ traffic data.
Because of their similarities to the inner workings of the brain, future devices based on nanowire technology could also demonstrate energy efficiency like the brain’s own processing. The human brain operates on power roughly equivalent to what’s used by a 20-watt incandescent bulb. By contrast, computer servers where work-intensive tasks take place — from training for machine learning to executing internet searches — can use the equivalent of many households’ worth of energy, with the attendant carbon footprint.
“In our studies, we have a broader mission than just reprogramming existing computers,” Gimzewski said. “Our vision is a system that will eventually be able to handle tasks that are closer to the way the human being operates.”
The study’s first author, Adrian Diaz-Alvarez, is from the International Center for Material Nanoarchitectonics at Japan’s National Institute for Materials Science. Co-authors include Tomonobu Nakayama and Rintaro Higuchi, also of NIMS; and Zdenka Kuncic at the University of Sydney in Australia.
An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.
The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research, in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated, may be effective in identifying new applications of brain-like information processing and advancing brain science.
The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emergent dynamic processes.
The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.
This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).
Here at last is a link to and a citation for the paper,
Emergent dynamics of neuromorphic nanowire networks by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic & Tomonobu Nakayama. Scientific Reports volume 9, Article number: 14920 (2019) DOI: https://doi.org/10.1038/s41598-019-51330-6 Published: 17 October 2019
It’s hard to believe that a brain-on-a-chip might need sleep but that seems to be the case as far as the US Dept. of Energy’s Los Alamos National Laboratory is concerned. Before pursuing that line of thought, here’s some work from the Massachusetts Institute of Technology (MIT) involving memristors and a brain-on-a-chip. From a June 8, 2020 news item on ScienceDaily,
MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.
The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.
Their results, published today in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.
This ‘metallurgical’ approach differs somewhat from the protein nanowire approach used by the University of Massachusetts at Amherst team mentioned in my June 15, 2020 posting. Scientists are pursuing multiple pathways and we may find that we arrive not with a single artificial brain but with many types of artificial brains.
“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”
Memristors, or memory transistors [Note: Memristors are usually described as memory resistors; this is the first time I’ve seen ‘memory transistor’], are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.
A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.
Like a brain synapse, a memristor would also be able to “remember” the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable — a feat that normally involves multiple transistors and capacitors.
Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.
Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a “switching medium,” or space between the electrodes. When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a “conduction channel” to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.
Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.
The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it’s difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.
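To see why the analog behaviour matters, here is a toy memristor model in Python that captures the two properties described above: a conductance that moves along a gradient with the applied voltage and is retained between pulses. The update rule and numbers are my own illustrative assumptions, not the MIT device's physics.

```python
# Toy memristor: conductance is nudged by each voltage pulse and remembered.

class ToyMemristor:
    def __init__(self, conductance=0.1, rate=0.05, g_min=0.0, g_max=1.0):
        self.g = conductance
        self.rate = rate
        self.g_min, self.g_max = g_min, g_max

    def apply_pulse(self, voltage):
        """Nudge the conductance in proportion to the pulse amplitude."""
        self.g = min(max(self.g + self.rate * voltage, self.g_min), self.g_max)
        return self.g

    def read_current(self, read_voltage=0.1):
        """Ohmic read-out: the current reflects the stored conductance."""
        return self.g * read_voltage

m = ToyMemristor()
for v in [1.0, 1.0, -0.5]:                 # two potentiating pulses, one depressing
    print("conductance:", round(m.apply_pulse(v), 3))
print("read current:", round(m.read_current(), 4))
```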
Borrowing from metallurgy
Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.
“Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium,” Kim says.
Engineers typically use silver as the material for a memristor’s positive electrode. Kim’s team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.
The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.
“It acts as a sort of bridge, and stabilizes the silver-silicon interface,” Kim says.
To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.
As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip. They then modulated the conductance of each memristor in proportion to the strength of the color in the corresponding pixel.
The chip produced the same crisp image of the shield, and was able to “remember” the image and reproduce it many times, compared with chips made of other materials.
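Here is a small sketch of how that pixel-to-memristor mapping might look in code. The array size, scaling and exact read/write scheme are my own assumptions for illustration, not the MIT team's procedure.

```python
import numpy as np

# Write a grayscale image into an array of conductances and read it back.

def write_image(pixels, g_max=1.0):
    """Map 8-bit grayscale pixels (0-255) onto memristor conductances."""
    return pixels.astype(float) / 255.0 * g_max

def read_image(conductances, g_max=1.0):
    """Read the stored conductances back out as an 8-bit image."""
    return np.round(conductances / g_max * 255.0).astype(np.uint8)

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
stored = write_image(image)
recovered = read_image(stored)
print("pixels recovered exactly:", np.array_equal(image, recovered))
```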
The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT’s Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.
“We’re using artificial synapses to do real inference tests,” Kim says. “We would like to develop this technology further to have larger-scale arrays to do image recognition tasks. And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”
Here’s a link to and a citation for the paper,
Alloying conducting channels for reliable neuromorphic computing by Hanwool Yeon, Peng Lin, Chanyeol Choi, Scott H. Tan, Yongmo Park, Doyoon Lee, Jaeyong Lee, Feng Xu, Bin Gao, Huaqiang Wu, He Qian, Yifan Nie, Seyoung Kim & Jeehwan Kim. Nature Nanotechnology (2020) DOI: https://doi.org/10.1038/s41565-020-0694-5 Published: 08 June 2020
This paper is behind a paywall.
Electric sheep and sleeping androids
I find it impossible to mention that androids might need sleep without reference to Philip K. Dick’s 1968 novel, “Do Androids Dream of Electric Sheep?”; its Wikipedia entry is here.
As it happens, I’m not the only one who felt the need to reference the novel, from a June 8, 2020 news item on ScienceDaily,
No one can say whether androids will dream of electric sheep, but they will almost certainly need periods of rest that offer benefits similar to those that sleep provides to living brains, according to new research from Los Alamos National Laboratory.
“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
The discovery came about as the research team worked to develop neural networks that closely approximate how humans and other biological systems learn to see. The group initially struggled with stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.
“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
The researchers characterize the decision to expose the networks to an artificial analog of sleep as nearly a last-ditch effort to stabilize them. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio. The best results came when they used waves of so-called Gaussian noise, which includes a wide range of frequencies and amplitudes. They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
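For the curious, here is a very rough Python sketch of the idea: alternate stretches of unsupervised learning with a 'sleep' phase driven by Gaussian noise. The toy network, the update rules and the stabilizing effect shown here are placeholders of my own, not the Los Alamos algorithm.

```python
import numpy as np

# Alternate "awake" unsupervised updates with Gaussian-noise "sleep" phases.
# Everything here is a placeholder meant only to show the overall structure.

rng = np.random.default_rng(1)

def awake_step(weights, inputs, lr=0.1):
    """Toy Hebbian-style update; left unchecked, the weights tend to grow."""
    return weights + lr * np.outer(inputs, inputs)

def sleep_phase(weights, steps=50, noise_scale=0.5, decay=0.02):
    """Drive the network with Gaussian noise while gently relaxing the weights."""
    for _ in range(steps):
        noise_input = rng.normal(0.0, noise_scale, size=weights.shape[0])
        weights = weights - decay * (weights - np.outer(noise_input, noise_input))
    return weights

weights = rng.normal(0, 0.1, size=(4, 4))
for epoch in range(5):
    for _ in range(100):                          # unsupervised "awake" learning
        weights = awake_step(weights, rng.random(4))
    weights = sleep_phase(weights)                # periodic "sleep"
    print(f"epoch {epoch}: mean |w| = {np.abs(weights).mean():.3f}")
```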
The group’s next goal is to implement their algorithm on Intel’s Loihi neuromorphic chip. They hope allowing Loihi to sleep from time to time will enable it to stably process information from a silicon retina camera in real time. If the findings confirm the need for sleep in artificial brains, we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.
Watkins will be presenting the research at the Women in Computer Vision Workshop on June 14, 2020 in Seattle.
The 2020 Women in Computer Vision Workshop (WICV) website is here. As is becoming standard practice for these times, the workshop was held in a virtual environment. Here’s a link to and a citation for the poster presentation paper,
Robot comedian is not my first thought on seeing that image; ventriloquist’s dummy is what came to mind. However, it’s not the first time I’ve been wrong about something. A May 19, 2020 news item on ScienceDaily reveals the truth about Jon, a comedian in robot form,
Standup comedian Jon the Robot likes to tell his audiences that he does lots of auditions but has a hard time getting bookings.
“They always think I’m too robotic,” he deadpans.
If raucous laughter follows, he comes back with, “Please tell the booking agents how funny that joke was.”
If it doesn’t, he follows up with, “Sorry about that. I think I got caught in a loop. Please tell the booking agents that you like me … that you like me … that you like me … that you like me.”
Jon the Robot, with assistance from Oregon State University researcher Naomi Fitter, recently wrapped up a 32-show tour of comedy clubs in greater Los Angeles and in Oregon, generating guffaws and, more importantly, data that scientists and engineers can use to help robots and people relate more effectively with one another via humor.
“Social robots and autonomous social agents are becoming more and more ingrained in our everyday lives,” said Fitter, assistant professor of robotics in the OSU College of Engineering. “Lots of them tell jokes to engage users – most people understand that humor, especially nuanced humor, is essential to relationship building. But it’s challenging to develop entertaining jokes for robots that are funny beyond the novelty level.”
Live comedy performances are a way for robots to learn “in the wild” which jokes and which deliveries work and which ones don’t, Fitter said, just like human comedians do.
The comedy tour comprised two studies, with assistance from a team of Southern California comedians who helped come up with material true to, and appropriate for, a robot comedian.
The first study, consisting of 22 performances in the Los Angeles area, demonstrated that audiences found a robot comic with good timing – giving the audience the right amount of time to react, etc. – significantly funnier than one without good timing.
The second study, based on 10 routines in Oregon, determined that an “adaptive performance” – delivering post-joke “tags” that acknowledge an audience’s reaction to the joke – wasn’t necessarily funnier overall, but the adaptations almost always improved the audience’s perception of individual jokes. In the second study, all performances featured appropriate timing.
“In bad-timing mode, the robot always waited a full five seconds after each joke, regardless of audience response,” Fitter said. “In appropriate-timing mode, the robot used timing strategies to pause for laughter and continue when it subsided, just like an effective human comedian would. Overall, joke response ratings were higher when the jokes were delivered with appropriate timing.”
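As a rough illustration of the difference between the two modes, here is a short Python sketch of a joke-delivery loop. The microphone stand-in, the laughter threshold, and the tag lines are assumptions for illustration only; this is not Fitter's software.

```python
# Illustrative sketch of "bad-timing" versus "appropriate-timing" joke delivery.
# The audio stand-in, threshold, and tag lines are assumptions, not the
# Oregon State code.
import random
import time

LAUGH_THRESHOLD = 0.4     # assumed normalized audio level that counts as laughter

def audio_level() -> float:
    """Stand-in for a microphone reading between 0.0 (silence) and 1.0 (loud)."""
    return random.random()

def deliver(line: str) -> None:
    print(line)           # the real robot would use text-to-speech here

def tell_joke(joke: str, adaptive_timing: bool = True) -> None:
    deliver(joke)
    if not adaptive_timing:
        time.sleep(5.0)   # bad-timing mode: always wait a full five seconds
        return
    time.sleep(1.0)       # appropriate-timing mode: brief beat, then listen
    peak = audio_level()
    while audio_level() > LAUGH_THRESHOLD:
        time.sleep(0.2)   # hold while laughter continues; resume once it subsides
        peak = max(peak, audio_level())
    # Post-joke "tag" acknowledging how the joke landed.
    if peak > LAUGH_THRESHOLD:
        deliver("Please tell the booking agents how funny that joke was.")
    else:
        deliver("Sorry about that. I think I got caught in a loop.")

tell_joke("They always think I'm too robotic.", adaptive_timing=True)
```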
The performances, given to audiences of 10 to 20, provided enough data to identify significant differences between distinct modes of robot comedy performance, and the research helped answer key questions about comedic social interaction, Fitter said.
“Audience size, social context, cultural context, the microphone-holding human presence and the novelty of a robot comedian may have influenced crowd responses,” Fitter said. “The current software does not account for differences in laughter profiles, but future work can account for these differences using a baseline response measurement. The only sensing we used to evaluate joke success was audio readings. Future work might benefit from incorporating additional types of sensing.”
Still, the studies have key implications for artificial intelligence efforts to understand group responses to dynamic, entertaining social robots in real-world environments, she said.
“Also, possible advances in comedy from this work could include improved techniques for isolating and studying the effects of comedic techniques and better strategies to help comedians assess the success of a joke or routine,” she said. “The findings will guide our next steps toward giving autonomous social agents improved humor capabilities.”
The studies were published by the Association for Computing Machinery [ACM]/Institute of Electrical and Electronics Engineers’ [IEEE] International Conference on Human-Robot Interaction [HRI].
Here’s another link to the two studies, published in a single paper first presented at the 2020 International Conference on Human-Robot Interaction [HRI], along with a citation for the published presentation,
Just to solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers, consuming roughly as much energy as three nuclear plants produce in one hour.
A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.
“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.
AI hardware development is still in early research stages. Researchers have demonstrated AI in pieces of potential hardware, but haven’t yet addressed AI’s large energy demand.
As AI penetrates more of daily life, a heavy reliance on software with massive energy needs is not sustainable, Ramanathan said. If hardware and software could share intelligence features, an area of silicon might be able to achieve more with a given input of energy.
Ramanathan’s team is the first to demonstrate artificial “tree-like” memory in a piece of potential hardware at room temperature. Researchers in the past have only been able to observe this kind of memory in hardware at temperatures that are too low for electronic devices.
The results of this study are published in the journal Nature Communications.
The hardware that Ramanathan’s team developed is made of a so-called quantum material. These materials are known for having properties that cannot be explained by classical physics. Ramanathan’s lab has been working to better understand these materials and how they might be used to solve problems in electronics.
Software uses tree-like memory to organize information into various “branches,” making that information easier to retrieve when learning new skills or tasks.
The strategy is inspired by how the human brain categorizes information and makes decisions.
“Humans memorize things in a tree structure of categories. We memorize ‘apple’ under the category of ‘fruit’ and ‘elephant’ under the category of ‘animal,’ for example,” said Hai-Tian Zhang, a Lillian Gilbreth postdoctoral fellow in Purdue’s College of Engineering. “Mimicking these features in hardware is potentially interesting for brain-inspired computing.”
The team introduced a proton to a quantum material called neodymium nickel oxide. They discovered that applying an electric pulse to the material moves the proton around. Each new position of the proton creates a different resistance state, which creates an information storage site called a memory state. Multiple electric pulses create a branch made up of memory states.
“We can build up many thousands of memory states in the material by taking advantage of quantum mechanical effects. The material stays the same. We are simply shuffling around protons,” Ramanathan said.
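A toy data-structure sketch may help picture what a "branch made up of memory states" means in software terms. In this sketch, each simulated pulse is assumed to shift the proton and create a new resistance state hanging off the current one; the numbers and the structure are purely illustrative and do not reflect the measured behaviour of the Purdue device.

```python
# Toy model of pulse-built, tree-like memory states. Every value and rule here
# is an illustrative assumption, not data from the neodymium nickel oxide device.
from dataclasses import dataclass, field

@dataclass
class MemoryState:
    resistance: float                         # resistance at this proton position
    children: list = field(default_factory=list)

def apply_pulse(state: MemoryState, delta: float) -> MemoryState:
    """Simulate one electric pulse: branch to a new resistance (memory) state."""
    new_state = MemoryState(resistance=state.resistance + delta)
    state.children.append(new_state)
    return new_state

root = MemoryState(resistance=100.0)          # arbitrary starting resistance

# One pulse train builds one branch of memory states ...
node = root
for delta in (5.0, -2.5, 7.0):
    node = apply_pulse(node, delta)

# ... and a different pulse applied from the root starts a second branch.
apply_pulse(root, -10.0)

print([child.resistance for child in root.children])   # -> [105.0, 90.0]
```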
Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence.
The demonstration of these trees at room temperature in a material is a step toward showing that hardware could offload tasks from software.
“This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn’t exist,” Ramanathan said.
The material might also help create a way for humans to more naturally communicate with AI.
“Protons also are natural information transporters in human beings. A device enabled by proton transport may be a key component for eventually achieving direct communication with organisms, such as through a brain implant,” Zhang said.
Here’s a link to and a citation for the published study,
Perovskite neural trees by Hai-Tian Zhang, Tae Joon Park & Shriram Ramanathan. Nature Communications 11, Article number: 2245 (2020). DOI: https://doi.org/10.1038/s41467-020-16105-y Published: 07 May 2020
Those are fabulous toes. Geckos and the fine hairs on their toes have long interested researchers looking to improve adhesion for all kinds of purposes, including climbing robots. The latest foray into the research suggests that it’s not just the fine hairs found on gecko toes that are important.
Robots with toes? Experiments suggest that climbing robots could benefit from having flexible, hairy toes, like those of geckos, that can adjust quickly to accommodate shifting weight and slippery surfaces.
Biologists from the University of California, Berkeley, and Nanjing University of Aeronautics and Astronautics observed geckos running horizontally along walls to learn how they use their five toes to compensate for different types of surfaces without slowing down.
“The research helped answer a fundamental question: Why have many toes?” said Robert Full, UC Berkeley professor of integrative biology.
As his previous research showed, geckos’ toes can stick to the smoothest surfaces through the use of intermolecular forces, and uncurl and peel in milliseconds. Their toes have up to 15,000 hairs per foot, and each hair has “an awful case of split ends, with as many as a thousand nano-sized tips that allow close surface contact,” he said.
These discoveries have spawned research on new types of adhesives that use intermolecular forces, or van der Waals forces, to stick almost anywhere, even underwater.
One puzzle, he said, is that gecko toes only stick in one direction. They grab when pulled in one direction, but release when peeled in the opposite direction. Yet, geckos move agilely in any orientation.
To determine how geckos have learned to deal with shifting forces as they move on different surfaces, Yi Song, a UC Berkeley visiting student from Nanjing, China, ran geckos sideways along a vertical wall while making high-speed video recordings to show the orientation of their toes. The sideways movement allowed him to distinguish downward gravity from forward running forces to best test the idea of toe compensation.
Using a technique called frustrated total internal reflection, Song also measured the area of contact of each toe. The technique made the toes light up when they touched a surface.
To the researchers’ surprise, geckos ran sideways just as fast as they climbed upward, easily and quickly realigning their toes against gravity. During sideways wall-running, the toes of the top front and hind feet shifted upward and acted just like the toes of the front feet during climbing.
To further explore the value of adjustable toes, researchers added slippery patches and strips, as well as irregular surfaces. To deal with these hazards, geckos took advantage of having multiple, soft toes. The redundancy allowed toes that still had contact with the surface to reorient and distribute the load, while the softness let them conform to rough surfaces.
“Toes allowed agile locomotion by distributing control among multiple, compliant, redundant structures that mitigate the risks of moving on challenging terrain,” Full said. “Distributed control shows how biological adhesion can be deployed more effectively and offers design ideas for new robot feet, novel grippers and unique manipulators.”
The team, which also includes Zhendong Dai and Zhouyi Wang of the College of Mechanical and Electrical Engineering at Nanjing University of Aeronautics and Astronautics, published its findings this week in the journal Proceedings of the Royal Society B.
Mark Wilson announces a timely new online programme from the Massachusetts Institute of Technology (MIT) in his April 9, 2020 article for Fast Company (Note: Links have been removed).
Not every child will grow up to attend MIT, but that doesn’t mean they can’t get a jump start on its curriculum. In response to the COVID-19 pandemic, which has forced millions of students to learn from home, MIT Media Lab associate professor Cynthia Breazeal has released [April 7, 2020] a website for K-12 students to learn about one of the most important topics in STEM [science, technology, engineering, and mathematics]: artificial intelligence.
The site provides 60 activities, lesson plans, and links to interactive AI experiments that MIT and companies like Google have developed in the past. Projects include coding robots to doodle, developing an image classifier (a tool that can identify images), writing speculative fiction to tackle the murky ethics of AI, and developing a chatbot (your grade schooler cannot possibly be worse at that task than I was). Everything is free, but schools are supposed to license lesson plans from MIT before adopting them.
Various associated MIT groups are covering a wide range of topics including the already mentioned AI ethics, as well as cyber security and privacy issues, creativity, and more. Here’s a little something from a programme for the Girl Scouts of America, which focused on data privacy and tech policy,
You can find MIT’s AI education website here. While the focus is largely on children, it seems they are inviting adults to participate as well. At least, that’s what I infer from what the Lifelong Kindergarten group, one of the groups associated with this AI education website, states on its webpage,
The Lifelong Kindergarten group develops new technologies and activities that, in the spirit of the blocks and finger paint of kindergarten, engage people in creative learning experiences. Our ultimate goal is to foster a world full of playfully creative people, who are constantly inventing new possibilities for themselves and their communities.
The website is a little challenging with regard to navigation but perhaps these links to the Research Projects page will help you get started quickly or, for those who like to investigate a little further before jumping in, this News page (which is a blog) might prove helpful.
That’s it for today. I wish everyone a peaceful long weekend while we all observe as joyfully and carefully as possible our various religious and seasonal traditions. From my tradition to yours, Joyeuses Pâques!