Category Archives: robots

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches, to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 
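For readers who want to see the idea in code: the core operation an RRAM array accelerates is an analog matrix-vector multiply, where stored conductances act as weights, applied voltages act as inputs, and Kirchhoff's current law sums the products along each column. Here's a toy NumPy sketch of that principle; the conductance and voltage values are illustrative only, not taken from the paper:

```python
import numpy as np

# Weights stored as conductances (siemens); rows are inputs, columns outputs.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6],
              [2.0e-6, 0.5e-6]])

# Input activations encoded as voltages applied to the rows (volts).
V = np.array([0.2, 0.4, 0.1])

# Ohm's law gives each cell's current (G * V); Kirchhoff's current law
# sums the currents down each column, so the whole matrix-vector
# product happens in a single analog step.
I = V @ G  # column currents (amperes): roughly 6.0e-7 and 1.05e-6
print(I)
```

The digital `V @ G` here stands in for what the physics of the array does for free, which is exactly why moving the computation into memory saves so much energy.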

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency by a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
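In other words, EDP is simply energy per operation multiplied by the time the operation takes, so halving either factor halves the product. A toy calculation (the numbers below are invented for illustration, not taken from the paper):

```python
def energy_delay_product(energy_joules, delay_seconds):
    """EDP combines energy per operation and the time per operation;
    lower is better."""
    return energy_joules * delay_seconds

# Hypothetical numbers for illustration only:
baseline = energy_delay_product(2.0e-12, 10e-9)  # 2 pJ per op, 10 ns per op
improved = energy_delay_product(1.0e-12, 10e-9)  # half the energy, same delay

print(baseline / improved)  # -> 2.0, i.e. 2x lower EDP
```

Because the two factors multiply, a chip that cuts energy and delay each by a modest amount can still post a large EDP improvement.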

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works on compute-in-memory chips, AI benchmark results were often obtained partially by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
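The two mapping strategies can be sketched in a few lines of Python. The layer names, batch contents, and mapping functions below are invented stand-ins for NeuRRAM's actual toolchain; only the idea of duplicating a layer (data-parallel) versus pipelining layers (model-parallel) comes from the description above:

```python
def data_parallel_mapping(layer, cores, batch):
    """Duplicate one layer across several cores; each core runs
    inference on a different slice of the input batch."""
    shards = [batch[i::len(cores)] for i in range(len(cores))]
    return {core: (layer, shard) for core, shard in zip(cores, shards)}

def model_parallel_mapping(layers, cores):
    """Place successive layers on successive cores so samples flow
    through the cores in a pipeline."""
    return {core: layer for core, layer in zip(cores, layers)}

# Hypothetical usage: two cores share one layer's workload...
print(data_parallel_mapping("conv1", [0, 1], ["img0", "img1", "img2", "img3"]))
# ...while three cores each host a different layer of one model.
print(model_parallel_mapping(["conv1", "conv2", "fc"], [0, 1, 2]))
```

A real mapper would also have to respect the chip's 48-core budget and per-core array sizes; this sketch ignores those constraints.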

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

‘Necrobotic’ spiders as mechanical grippers

A July 25, 2022 news item on ScienceDaily describes research utilizing dead spiders,

Spiders are amazing. They’re useful even when they’re dead.

Rice University mechanical engineers are showing how to repurpose deceased spiders as mechanical grippers that can blend into natural environments while picking up objects, like other insects, that outweigh them.

Caption: An illustration shows the process by which Rice University mechanical engineers turn deceased spiders into necrobotic grippers, able to grasp items when triggered by hydraulic pressure. Credit: Preston Innovation Laboratory/Rice University

A July 25, 2022 Rice University news release (also on EurekAlert but published August 4, 2022), which originated the news item, explains the reasoning, Note: Links have been removed,

“It happens to be the case that the spider, after it’s deceased, is the perfect architecture for small scale, naturally derived grippers,” said Daniel Preston of Rice’s George R. Brown School of Engineering. 

An open-access study in Advanced Science outlines the process by which Preston and lead author Faye Yap harnessed a spider’s physiology in a first step toward a novel area of research they call “necrobotics.”

Preston’s lab specializes in soft robotic systems that often use nontraditional materials, as opposed to hard plastics, metals and electronics. “We use all kinds of interesting new materials like hydrogels and elastomers that can be actuated by things like chemical reactions, pneumatics and light,” he said. “We even have some recent work on textiles and wearables. 

“This area of soft robotics is a lot of fun because we get to use previously untapped types of actuation and materials,” Preston said. “The spider falls into this line of inquiry. It’s something that hasn’t been used before but has a lot of potential.”

Unlike people and other mammals that move their limbs by synchronizing opposing muscles, spiders use hydraulics. A chamber near their heads contracts to send blood to limbs, forcing them to extend. When the pressure is relieved, the legs contract. 

The cadavers Preston’s lab pressed into service were wolf spiders, and testing showed they were reliably able to lift more than 130% of their own body weight, and sometimes much more. They had the grippers manipulate a circuit board, move objects and even lift another spider.  

The researchers noted smaller spiders can carry heavier loads in comparison to their size. Conversely, the larger the spider, the smaller the load it can carry in comparison to its own body weight. Future research will likely involve testing this concept with spiders smaller than the wolf spider, Preston said.

Yap said the project began shortly after Preston established his lab in Rice’s Department of Mechanical Engineering in 2019.

“We were moving stuff around in the lab and we noticed a curled up spider at the edge of the hallway,” she said. “We were really curious as to why spiders curl up after they die.”

A quick search found the answer: “Spiders do not have antagonistic muscle pairs, like biceps and triceps in humans,” Yap said. “They only have flexor muscles, which allow their legs to curl in, and they extend them outward by hydraulic pressure. When they die, they lose the ability to actively pressurize their bodies. That’s why they curl up. 

“At the time, we were thinking, ‘Oh, this is super interesting.’ We wanted to find a way to leverage this mechanism,” she said.

Internal valves in the spiders’ hydraulic chamber, or prosoma, allow them to control each leg individually, and that will also be the subject of future research, Preston said. “The dead spider isn’t controlling these valves,” he said. “They’re all open. That worked out in our favor in this study, because it allowed us to control all the legs at the same time.”

Setting up a spider gripper was fairly simple. Yap tapped into the prosoma chamber with a needle, attaching it with a dab of superglue. The other end of the needle was connected to one of the lab’s test rigs or a handheld syringe, which delivered a minute amount of air to activate the legs almost instantly. 

The lab ran one ex-spider through 1,000 open-close cycles to see how well its limbs held up, and found it to be fairly robust. “It starts to experience some wear and tear as we get close to 1,000 cycles,” Preston said. “We think that’s related to issues with dehydration of the joints. We think we can overcome that by applying polymeric coatings.”

What turns the lab’s work from a cool stunt into a useful technology?

Preston said a few necrobotic applications have occurred to him. “There are a lot of pick-and-place tasks we could look into, repetitive tasks like sorting or moving objects around at these small scales, and maybe even things like assembly of microelectronics,” he said. 

“Another application could be deploying it to capture smaller insects in nature, because it’s inherently camouflaged,” Yap added. 

“Also, the spiders themselves are biodegradable,” Preston said. “So we’re not introducing a big waste stream, which can be a problem with more traditional components.”

Preston and Yap are aware the experiments may sound to some people like the stuff of nightmares, but they said what they’re doing doesn’t qualify as reanimation. 

“Despite looking like it might have come back to life, we’re certain that it’s inanimate, and we’re using it in this case strictly as a material derived from a once-living spider,” Preston said. “It’s providing us with something really useful.”

Co-authors of the paper are graduate students Zhen Liu and Trevor Shimokusu and postdoctoral fellow Anoop Rajappan. Preston is an assistant professor of mechanical engineering.

Here’s a link to and a citation for the paper,

Necrobotics: Biotic Materials as Ready-to-Use Actuators by Te Faye Yap, Zhen Liu, Anoop Rajappan, Trevor J. Shimokusu, Daniel J. Preston. Advanced Science
DOI: https://doi.org/10.1002/advs.202201174 First published: 25 July 2022

As noted in the news release, this paper is open access.

A robot with body image and self awareness

This research is a rather interesting direction for robotics to take (from a July 13, 2022 news item on ScienceDaily),

As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.

We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published by Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.

Courtesy Columbia University School of Engineering and Applied Science

A July 13, 2022 Columbia University news release by Holly Evarts (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

Robot watches itself like an infant exploring itself in a hall of mirrors

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. 
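The loop described above — issue motor commands, observe the volume the body occupies, fit a model that predicts one from the other — can be caricatured in a few lines. This toy version swaps the paper's deep neural network and camera rig for a least-squares fit on invented data, so it is a sketch of the idea only, not the team's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "babbling" data: 200 random motor commands for 4 joints,
# each paired with a noisy observation of the occupied volume.
commands = rng.uniform(-1, 1, size=(200, 4))
true_map = np.array([[0.5], [-0.3], [0.2], [0.1]])  # the unknown kinematics
volumes = commands @ true_map + 0.01 * rng.normal(size=(200, 1))

# A least-squares fit plays the role of training the deep network.
learned, *_ = np.linalg.lstsq(commands, volumes, rcond=None)

# The learned self-model can now predict what volume a new, untried
# command will occupy -- the capability that enables planning and
# damage detection in the real system.
new_command = np.array([[0.2, -0.1, 0.4, 0.0]])
print(new_command @ learned)
```

The real problem is far harder (the mapping is nonlinear and the observations are images, not scalars), which is why the team needed hours of data and a deep network rather than one matrix solve.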

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.

Self-modeling robots will lead to more self-reliant autonomous systems

The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.

“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

Self-awareness in robots

The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.  “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.” 

The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”  

Here’s a link to and a citation for the paper,

Full-body visual self-modeling of robot morphologies by Boyuan Chen, Robert Kwiatkowski, Carl Vondrick and Hod Lipson. Science Robotics 13 Jul 2022 Vol 7, Issue 68 DOI: 10.1126/scirobotics.abn1944

This paper is behind a paywall.

If you follow the link to the July 13, 2022 Columbia University news release, you’ll find an approximately 25 min. video of Hod Lipson showing you how they did it. As Lipson notes, discussion of self-awareness and sentience is rarely found in robotics programmes. Plus, there are more details and links if you follow the EurekAlert link.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, making such stackable designs not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. Photodetectors constitute an image sensor for receiving data, and LEDs transmit data to the next layer. As a signal (for instance an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
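The read-out rule in the parenthetical amounts to picking the letter whose trained array responds with the largest current. As a toy sketch (the current values below are invented, not measurements from the chip):

```python
# Hypothetical measured currents (microamperes) from the three
# letter-recognition blocks for one input image.
currents = {"M": 0.8, "I": 2.4, "T": 1.1}

# The predicted letter is simply the one whose array produced
# the largest response.
predicted = max(currents, key=currents.get)
print(predicted)  # prints "I"
```

The blurry-image failures the team saw correspond to cases where two of these currents come out nearly equal, so the argmax is no longer reliable.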

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) 05 May 2022 Issue Date: June 2022 Published: 13 June 2022 DOI: https://doi.org/10.1038/s41928-022-00778-y

This paper is behind a paywall.

Speaking in Color, an AI-powered paint tool

This June 16, 2022 article by Jeff Beer for Fast Company took me in an unexpected direction but first, there’s this from Beer’s story,

If an architect wanted to create a building that matched the color of a New York City summer sunset, they’d have to pore over potentially hundreds of color cards designed for industry to get anything close, and still it’d be a tall order to find that exact match. But a new AI-powered, voice-controlled tool from Sherwin-Williams aims to change that.

The paint brand recently launched Speaking in Color, a tool that allows users to tell it about certain places, objects, or shades in order to arrive at that perfect color. You start with a broad description like, say, “New York City summer sunset,” and then fine tune from there once it responds with photos and other options with more in-depth preferences like “darker red,” “make it moodier,” or “add a sliver of sun,” until it’s done.

Developed with agency Wunderman Thompson, it’s a React web app that uses natural language to find your preferred color using both third-party and proprietary code. The tool’s custom algorithm allows you to tweak colors in a way that translates statements like “make it dimmer,” “add warmth,” or “more like the 1980s” into mathematical adjustments.
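How might phrases become "mathematical adjustments"? The actual Sherwin-Williams/Wunderman Thompson algorithm is proprietary, but a toy version might nudge RGB channels, something like this (the phrase list and step sizes are entirely invented):

```python
def adjust(rgb, phrase):
    """Map a refinement phrase onto a simple RGB adjustment.
    Phrases and magnitudes here are illustrative guesses only."""
    r, g, b = rgb
    if phrase == "make it dimmer":
        r, g, b = (int(c * 0.85) for c in (r, g, b))  # scale brightness down
    elif phrase == "add warmth":
        r, b = min(r + 15, 255), max(b - 15, 0)       # shift toward red/orange
    elif phrase == "darker red":
        r = max(r - 30, 0)                            # pull back the red channel
    return (r, g, b)

sunset = (250, 120, 90)                 # a starting "summer sunset" guess
print(adjust(sunset, "add warmth"))     # -> (255, 120, 75)
```

A production tool would presumably work in a perceptual color space rather than raw RGB, and use a language model to map free-form phrasing onto these operations, but the core loop (describe, adjust, repeat) is the same.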

It seems to me Wunderman Thompson needs to rethink its Sherwin-Williams Speaking in Color promotional video (it’s embedded with Beer’s June 16, 2022 article or you can find it here; scroll down about 50% of the way). You’ll note the color prompts are not spoken; they’re in text, e.g., ‘crystal-clear Caribbean ocean’. So much for ‘speaking in color’, but the article aroused my curiosity, which is how I found this May 19, 2017 article by Annalee Newitz for Ars Technica highlighting another color/AI project (Note: A link has been removed),

At some point, we’ve all wondered about the incredibly strange names for paint colors. Research scientist and neural network goofball Janelle Shane took the wondering a step further. Shane decided to train a neural network to generate new paint colors, complete with appropriate names. The results are possibly the greatest work of artificial intelligence I’ve seen to date.

Writes Shane on her Tumblr, “For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values. (RGB = red, green, and blue color values.) Could the neural network learn to invent new paint colors and give them attractive names?”

Shane told Ars that she chose a neural network algorithm called char-rnn, which predicts the next character in a sequence. So basically the algorithm was working on two tasks: coming up with sequences of letters to form color names, and coming up with sequences of numbers that map to an RGB value. As she checked in on the algorithm’s progress, she found that it was able to create colors long before it could actually name them reliably.

The longer it processed the dataset, the closer the algorithm got to making legit color names, though they were still mostly surreal: “Soreer Gray” is a kind of greenish color, and “Sane Green” is a purplish blue. When Shane cranked up “creativity” on the algorithm’s output, it gave her a violet color called “Dondarf” and a Kelly green called “Bylfgoam Glosd.” After churning through several more iterations of this process, Shane was able to get the algorithm to recognize some basic colors like red and gray, “though not reliably,” because she also gets a sky blue called “Gray Pubic” and a dark green called “Stoomy Brown.”
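Shane used char-rnn, a recurrent neural network, but the generate-one-character-at-a-time loop it relies on can be illustrated with a much simpler stand-in: a character bigram model. The tiny training list below is made up for the demonstration, and the `temperature` parameter plays the role of Shane's "creativity" knob (higher values flatten the distribution and produce stranger names):

```python
import random
from collections import defaultdict

# Toy stand-in for char-rnn: a character bigram model trained on a few
# made-up paint-color names. Shane's actual model was a recurrent neural
# network, but the character-by-character sampling loop is the same idea.
NAMES = ["sane green", "stormy brown", "dust gray", "sea blue"]

counts = defaultdict(lambda: defaultdict(int))
for name in NAMES:
    padded = "^" + name + "$"          # ^ = start marker, $ = end marker
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def sample_name(temperature=1.0, rng=random.Random(0)):
    """Generate a name one character at a time; higher temperature
    flattens the distribution (the 'creativity' setting)."""
    out, ch = [], "^"
    while True:
        nxt = counts[ch]
        chars = list(nxt)
        weights = [n ** (1.0 / temperature) for n in nxt.values()]
        ch = rng.choices(chars, weights=weights)[0]
        if ch == "$" or len(out) > 20:
            break
        out.append(ch)
    return "".join(out)

name = sample_name()
```

With so little training data the output is mostly recombinations of the inputs, which is roughly what Shane saw early in training: plausible letter sequences long before reliable words.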

Shane has since written a book about artificial intelligence (You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place [2019]) and continues her investigations of AI. You can find her website and blog here and her Wikipedia entry here.

FrogHeart’s 2022 comes to an end as 2023 comes into view

I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:

Sounds of science

It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new but this year seemed to signal a surge of interest or maybe I just happened to stumble onto more of the stories than usual.

This is not an exhaustive list, you can check out my ‘Music’ category for more here. I have tried to include audio files with the postings but it all depends on how accessible the researchers have made them.

Aliens on earth: machinic biology and/or biological machinery?

When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.

However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):

This was the story that shook me,

Are the aliens going to come from outer space or are we becoming the aliens?

Brains (biological and otherwise), AI, & our latest age of anxiety

As we integrate machines into our bodies, including our brains, there are new issues to consider:

  • Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
  • US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting

I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” for 2023.

Meanwhile the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’ or you can use these search terms ‘brainlike computing’ and ‘memristors’.

On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these ones feature public engagement and policy (btw, it’s a Canadian Science Policy Centre event), respectively,

  • “The “We are AI” series gives citizens a primer on AI” March 23, 2022 posting
  • “Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT” September 16, 2022 posting

These stories feature problems, which aren’t new but seem to be getting more attention,

While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,

  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
  • “AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
  • Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms? August 30, 2022 posting

Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)

There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,

  • Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
  • Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news covers a more generalized overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chat bot’s developer, OpenAI, has been mentioned here many times including the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.

Opposite world (quantum physics in Canada)

Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:

  • “Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canadian Academies’ report on Quantum Technologies.
  • “Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
  • “Canada, quantum technology, and a public relations campaign?” December 29, 2022 posting

This one was a bit of a puzzle with regard to placement in this end-of-year review; it’s quantum but it’s also about brainlike computing,

It’s getting hot in here

Fusion energy made some news this year.

There’s a Vancouver area company, General Fusion, highlighted in both postings, and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes called Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].

BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.

Ukraine, science, war, and unintended consequences

Here’s what you might expect,

These are the unintended consequences (from Rachel Kyte, Dean of the Fletcher School, Tufts University, in her December 26, 2022 essay on The Conversation [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,

Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]

Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.

In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.

First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.

The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.

Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.

Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022

This one was a surprise for me,

Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,

“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)

Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.

The project is led by Indigenous scholars and activists …

Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.

There are many other nanotechnology posts here but this appeals to my need for something lighter at this point,

  • “Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting

The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,

Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.

… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:

You can read more about it here:

In the rearview mirror

A few things that didn’t fit under the previous heads but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.

Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.

If you want a nice tidy list, Hamish Johnston offers one in a December 29, 2022 posting on the Physics World Materials blog, “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,

“Inherited nanobionics” makes its debut

The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than bacteria without nanotubes do. As a result, the technique could be used to grow living solar cells, which, as well as generating clean energy, have a negative carbon footprint when it comes to manufacturing.

Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events, which combine online and live outreach.

After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.

With that …

Making resolutions in the dark

Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports such as Leaps and Boundaries; a report on artificial intelligence applied to science inquiry and, perhaps, Powering Discovery; a report on research funding and the Natural Sciences and Engineering Research Council of Canada.

Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality; somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.

For anyone who wants to see the CCA reports for themselves, they can be found here (both in progress and completed).

I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?

My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy since his self-description on the Huffington Post website states this “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”

He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.

Que sera, sera (Whatever will be, will be)

I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1B Euros each in funding for the Graphene Flagship and Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013. Unless the powers that be extend the funding past 2023.

I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.

I’ve already featured these 2023 science events but just in case you missed them,

  • 2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
  • September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting

Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.

Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and on that note: I wish you all the best for 2023.

Robots with living human skin tissue?

So far, it looks like they’ve managed a single robotic finger. I expect it will take a great deal more work before an entire robotic hand is covered in living skin. BTW, I have a few comments at the end of this post.

Caption: Illustration showing the cutting and healing process of the robotic finger (A), its anchoring structure (B) and fabrication process (C). Credit: ©2022 Takeuchi et al.

I have two news releases highlighting the work. This a June 9, 2022 Cell Press news release,

From action heroes to villainous assassins, biohybrid robots made of both living and artificial materials have been at the center of many sci-fi fantasies, inspiring today’s robotic innovations. It’s still a long way until human-like robots walk among us in our daily lives, but scientists from Japan are bringing us one step closer by crafting living human skin on robots. The method developed, presented June 9 in the journal Matter, not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.

“The finger looks slightly ‘sweaty’ straight out of the culture medium,” says first author Shoji Takeuchi, a professor at the University of Tokyo, Japan. “Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”

Looking “real” like a human is one of the top priorities for humanoid robots that are often tasked to interact with humans in healthcare and service industries. A human-like appearance can improve communication efficiency and evoke likability. While current silicone skin made for robots can mimic human appearance, it falls short when it comes to delicate textures like wrinkles and lacks skin-specific functions. Attempts at fabricating living skin sheets to cover robots have also had limited success, since it’s challenging to conform them to dynamic objects with uneven surfaces.

“With that method, you have to have the hands of a skilled artisan who can cut and tailor the skin sheets,” says Takeuchi. “To efficiently cover surfaces with skin cells, we established a tissue molding method to directly mold skin tissue around the robot, which resulted in a seamless skin coverage on a robotic finger.”

To craft the skin, the team first submerged the robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components that make up the skin’s connective tissues. Takeuchi says the study’s success lies within the natural shrinking tendency of this collagen and fibroblast mixture, which shrank and tightly conformed to the finger. Like paint primers, this layer provided a uniform foundation for the next coat of cells—human epidermal keratinocytes—to stick to. These cells make up 90% of the outermost layer of skin, giving the robot a skin-like texture and moisture-retaining barrier properties.

The crafted skin had enough strength and elasticity to bear the dynamic movements as the robotic finger curled and stretched. The outermost layer was thick enough to be lifted with tweezers and repelled water, which provides various advantages in performing specific tasks like handling electrostatically charged tiny polystyrene foam, a material often used in packaging. When wounded, the crafted skin could even self-heal like humans’ with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.

“We are surprised by how well the skin tissue conforms to the robot’s surface,” says Takeuchi. “But this work is just the first step toward creating robots covered with living skin.” The developed skin is much weaker than natural skin and can’t survive long without constant nutrient supply and waste removal. Next, Takeuchi and his team plan to address those issues and incorporate more sophisticated functional structures within the skin, such as sensory neurons, hair follicles, nails, and sweat glands.

“I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies,” says Takeuchi.

A June 10, 2022 University of Tokyo news release (also on EurekAlert but published June 9, 2022) covers some of the same ground while providing more technical details,

Researchers from the University of Tokyo pool knowledge of robotics and tissue culturing to create a controllable robotic finger covered with living skin tissue. The robotic digit had living cells and supporting organic material grown on top of it for ideal shaping and strength. As the skin is soft and can even heal itself, it could be useful in applications that require a gentle touch but also robustness. The team aims to add other kinds of cells into future iterations, giving devices the ability to sense as we do.

Professor Shoji Takeuchi is a pioneer in the field of biohybrid robots, the intersection of robotics and bioengineering. Together with researchers from around the University of Tokyo, he explores things such as artificial muscles, synthetic odor receptors, lab-grown meat, and more. His most recent creation is both inspired by and aims to aid medical research on skin damage such as deep wounds and burns, as well as help advance manufacturing.

“We have created a working robotic finger that articulates just as ours does, and is covered by a kind of artificial skin that can heal itself,” said Takeuchi. “Our skin model is a complex three-dimensional matrix that is grown in situ on the finger itself. It is not grown separately then cut to size and adhered to the device; our method provides a more complete covering and is more strongly anchored too.”

Three-dimensional skin models have been used for some time for cosmetic and drug research and testing, but this is the first time such materials have been used on a working robot. In this case, the synthetic skin is made from a lightweight collagen matrix known as a hydrogel, within which several kinds of living skin cells called fibroblasts and keratinocytes are grown. The skin is grown directly on the robotic component, which proved to be one of the more challenging aspects of this research, requiring specially engineered structures that can anchor the collagen matrix to it, but it was worth it for the aforementioned benefits.

“Our creation is not only soft like real skin but can repair itself if cut or damaged in some way. So we imagine it could be useful in industries where in situ repairability is important as are humanlike qualities, such as dexterity and a light touch,” said Takeuchi. “In the future, we will develop more advanced versions by reproducing some of the organs found in skin, such as sensory cells, hair follicles and sweat glands. Also, we would like to try to coat larger structures.”

The main long-term aim for this research is to open up new possibilities in advanced manufacturing industries. Having humanlike manipulators could allow for the automation of things currently only achievable by highly skilled professionals. Other areas such as cosmetics, pharmaceuticals and regenerative medicine could also benefit. This could potentially reduce cost, time and complexity of research in these areas and could even reduce the need for animal testing.

Here’s a link to and a citation for the paper,

Living skin on a robot by Michio Kawai, Minghao Nie, Haruka Oda, Yuya Morimoto, Shoji Takeuchi. Matter, published June 9, 2022. DOI: https://doi.org/10.1016/j.matt.2022.05.019

This paper appears to be open access.

There are more images and at least one video, all of which can be found by clicking on the links to one or both of the news releases and to the paper. Personally, I found the images fascinating and …

Frankenstein, cyborgs, and more

The word is creepy. I find the robot finger images fascinating and creepy. The work brings to mind Frankenstein (by Mary Shelley) and The Island of Dr. Moreau (by H. G. Wells), both of which feature cautionary tales. Dr. Frankenstein tries to bring to life a dead ‘person’ assembled from parts of various corpses and Dr. Moreau attempts to create hybrids composed of humans and animals. It’s fascinating how 19th century nightmares prefigure some of the research being performed now.

The work also brings to mind the ‘uncanny valley’, a term coined by Masahiro Mori, where people experience discomfort when something that’s not human seems too human. I have an excerpt from an essay that Mori wrote about the uncanny valley in my March 10, 2011 posting (scroll down about 50% of the way). The diagram which accompanies it illustrates the gap between the least uncanny or the familiar (a healthy person, a puppet, etc.) and the most uncanny or the unfamiliar (a corpse, a zombie, a prosthetic hand).

Mori notes that the uncanny valley is not immovable; things change and the unfamiliar becomes familiar. Presumably, one day, I will no longer find robots with living skin to be creepy.

All of this changes the meaning (for me) of a term I coined for this site, ‘machine/flesh’. At the time, I was thinking of prosthetics and implants and how deeply they are being integrated into the body. But this research reverses the process. Now, the body (skin in this case) is being added to the machine (robot).

Art and 5G at museums in Turin (Italy)

Caption: In the framework of EU-funded project 5GTours, R1 humanoid robot tested at GAM (Turin) its ability to navigate and interact with visitors at the 20th-century collections, accompanying them to explore a selection of the museum’s most representative works, such as Andy Warhol’s “Orange car crash”. The robot has been designed and developed by IIT, while the 5G connection was set up by TIM using Ericsson technology. Credit: IIT-Istituto Italiano di Tecnologia/GAM

This May 27, 2022 Istituto Italiano di Tecnologia (IIT) press release on EurekAlert offers an intriguing view into the potential for robots in art galleries,

Robotics, 5G and art: during the month of May visitors to Turin’s art museums, the Turin Civic Gallery of Modern and Contemporary Art (GAM) and the Turin City Museum of Ancient Art (Palazzo Madama), had the opportunity to be part of various experiments based on 5G-network technology. Interactive technologies and robots were the focus of an innovative enjoyment of the art collections, with great appreciation from the public.

Visitors to the GAM and to Palazzo Madama were provided with a number of engaging interactive experiences made possible through a significant collaboration between public and private organisations, which have been working together for more than three years to experiment with the potential of new 5G technology in the framework of the EU-funded project 5GTours (https://5gtours.eu/).

The demonstrations set up in Turin led to the creation of innovative applications in the tourism and culture sectors that can easily be replicated in any artistic or museum context.

In both venues, visitors had the opportunity to meet R1, the humanoid robot designed by the IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) in Genova and created to operate in domestic and professional environments, whose autonomous and remote navigation system is well integrated with the bandwidth and latency offered by a 5G connection. R1, the robot – 1 metre 25 cm in height, weighing 50 kg, made 50% from plastic and 50% from carbon fibre and metal – is able to describe the works and answer questions regarding the artist or the period in history to which the work belongs. 5G connectivity is required in order to transmit the considerable quantity of data generated by the robot’s sensors and the algorithms that handle environmental perception, autonomous navigation and dialogue to external processing systems with extremely rapid response times.

At Palazzo Madama R1 humanoid robot led a guided tour of the Ceramics Room, while at GAM it was available to visitors of the twentieth-century collections, accompanying them to explore a selection of the museum’s most representative works. R1 robot explained and responded to questions about six relevant paintings: Felice Casorati’s “Daphne a Pavarolo”, Osvaldo Lucini’s “Uccello 2”, Marc Chagall’s “Dans mon pays”, Alberto Burri’s “Sacco”, Andy Warhol’s “Orange car crash” and Mario Merz’s “Che Fare?”.

Moreover, visitors – with the use of Meta Quest visors also connected to the 5G network – were required to solve a puzzle, putting the paintings in the Guards’ Room back into their frames. With these devices, the works in the hall, which in reality cannot be touched, can be handled and moved virtually. Lastly, the visitors involved had the opportunity to visit the underground spaces of Palazzo Madama with the mini-robot Double 3, which uses the 5G network to move reactively and precisely within the narrow spaces.

At GAM a class of students from a local school were able to remotely connect and manoeuvre the mini-robot Double 3 located in the rooms of the twentieth-century collections at the GAM directly from their classroom. A treasure hunt was held in the museum, with the participants never leaving the school.

In the Educational Area, a group of youngsters had the opportunity of collaborating in the painting of a virtual work of art on a large technological wall, drawing inspiration from works by Nicola De Maria.

The 5G network solutions created at the GAM and at Palazzo Madama by TIM [Telecom Italia] with Ericsson technology in collaboration with the City of Turin and the Turin Museum Foundation, guarantee constant high-speed transmission and extremely low latency. These solutions, which comply with 3GPP standard, are extremely flexible in terms of setting up and use. In the case of Palazzo Madama, a UNESCO World Heritage Site, tailor-made installations were designed, using apparatus and solutions that perfectly integrate with the museum spaces, while at the same time guaranteeing extremely high performance. At the GAM, the Radio Dot System has been implemented, a new 5G solution from Ericsson that is small enough to be held in the palm of a hand, and that provides network coverage and performance required for busy indoor areas. Thanks to these activities, Turin is ever increasingly playing a role as an open-air laboratory for urban innovation; since 2021 it has been the location of the “House of Emerging Technology – CTE NEXT”, a veritable centre for technology transfer via 5G and for emerging technologies coordinated by the Municipality of Turin and financed by the Ministry for Economic Development.

Through these solutions, Palazzo Madama and the GAM are now unique examples of technology in Italy and a rare example on a European level of museum buildings with full 5G coverage.

The experience was the result of the European Union-financed project "5G-TOURS: 5G smarT mObility, media and e-health for toURists and citizenS", carried out by the city of Turin (Department and Directorate of Innovation, in collaboration with the Department of Culture), Ericsson, TIM [Telecom Italia], the Turin Museum Foundation and the IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) of Genova, with the contribution of the international partners Atos and Samsung. The 5G coverage within the two museums was set up by TIM using Ericsson technology, with solutions that integrate seamlessly with the areas within the two museum structures.

Just in case you missed the link in the press release, you can find more information about this European Union Horizon 2020-funded 5G project here at 5G TOURS (SmarT mObility, media and e-health for toURists and citizenS). You can find out more about the grant (e.g., the project sunset in July 2022) here.

Swiss researchers, memristors, perovskite crystals, and neuromorphic (brainlike) computing

A May 18, 2022 news item on Nanowerk highlights research into making memristors more 'flexible' (Note: There's an almost identical May 18, 2022 news item on ScienceDaily but the issuing agency is listed as ETH Zurich rather than Empa as listed on Nanowerk),

Compared with computers, the human brain is incredibly energy-efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy-efficient than conventional ones, as well as better at performing machine-learning tasks.

Much like neurons, which are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine-learning applications.

Researchers at ETH Zurich, Empa and the University of Zurich have now developed an innovative concept for a memristor that can be used in a far wider range of applications than existing memristors.

“There are different operation modes for memristors, and it is advantageous to be able to use all these modes depending on an artificial neural network’s architecture,” explains ETH Zurich postdoc Rohit John. “But previous conventional memristors had to be configured for one of these modes in advance.”

The new memristors can now easily switch between two operation modes while in use: a mode in which the signal grows weaker over time and dies (volatile mode), and one in which the signal remains constant (non-volatile mode).

Once you get past the first two paragraphs in the Nanowerk news item, you find that the May 18, 2022 ETH Zurich and Empa press releases, both by Fabio Begamin, are identical (ETH is listed as the authoring agency on EurekAlert), (Note: A link has been removed in the following),

Just like in the brain

“These two operation modes are also found in the human brain,” John says. On the one hand, stimuli at the synapses are transmitted from neuron to neuron with biochemical neurotransmitters. These stimuli start out strong and then gradually become weaker. On the other hand, new synaptic connections to other neurons form in the brain while we learn. These connections are longer-lasting.

John, who is a postdoc in the group headed by ETH Professor Maksym Kovalenko, was awarded an ETH fellowship for outstanding postdoctoral researchers in 2020. John conducted this research together with Yiğit Demirağ, a doctoral student in Professor Giacomo Indiveri’s group at the Institute for Neuroinformatics of the University of Zurich and ETH Zurich.

Semiconductor known from solar cells

The memristors the researchers have developed are made of halide perovskite nanocrystals, a semiconductor material known primarily for its use in photovoltaic cells. “The ‘nerve conduction’ in these new memristors is mediated by temporarily or permanently stringing together silver ions from an electrode to form a nanofilament penetrating the perovskite structure through which current can flow,” explains Kovalenko.

This process can be regulated to make the silver-ion filament either thin, so that it gradually breaks back down into individual silver ions (volatile mode), or thick and permanent (non-volatile mode). This is controlled by the intensity of the current conducted on the memristor: applying a weak current activates the volatile mode, while a strong current activates the non-volatile mode.
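To make that mechanism concrete, here is a toy model of current-controlled mode switching: a weak programming pulse leaves a "thin" filament whose conductance decays away, while a strong pulse forms a persistent one. The threshold and decay constant are invented illustrative numbers, not values from the paper, and this is a sketch of the idea rather than the researchers' actual device physics.

```python
import math

class ToyMemristor:
    """Toy mode-switchable memristor: pulse amplitude selects volatile vs non-volatile."""

    def __init__(self, mode_threshold_uA=5.0, decay_tau=3.0):
        self.mode_threshold_uA = mode_threshold_uA  # current above this -> non-volatile
        self.decay_tau = decay_tau                  # decay time constant (volatile mode)
        self.conductance = 0.0
        self.volatile = True

    def program(self, current_uA):
        """Apply a programming pulse; the current amplitude selects the mode."""
        self.conductance = 1.0
        self.volatile = current_uA < self.mode_threshold_uA

    def read(self, elapsed_time):
        """Read conductance after `elapsed_time`; a volatile state decays away."""
        if self.volatile:
            return self.conductance * math.exp(-elapsed_time / self.decay_tau)
        return self.conductance

m = ToyMemristor()
m.program(current_uA=1.0)                 # weak pulse -> volatile (thin filament)
weak_later = m.read(elapsed_time=10.0)    # signal has largely died away
m.program(current_uA=20.0)                # strong pulse -> non-volatile (thick filament)
strong_later = m.read(elapsed_time=10.0)  # signal persists unchanged
```

The point of the sketch is simply that one physical knob (pulse current) selects between the two behaviours the press release describes, which is what lets a single device serve both roles on one chip.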

New toolkit for neuroinformaticians

“To our knowledge, this is the first memristor that can be reliably switched between volatile and non-volatile modes on demand,” Demirağ says. This means that in the future, computer chips can be manufactured with memristors that enable both modes. This is a significant advance because it is usually not possible to combine several different types of memristors on one chip.

Within the scope of the study, which they published in the journal Nature Communications, the researchers tested 25 of these new memristors and carried out 20,000 measurements with them. In this way, they were able to simulate a computational problem on a complex network. The problem involved classifying a number of different neuron spikes as one of four predefined patterns.

Before these memristors can be used in computer technology, they will need to undergo further optimisation. However, such components are also important for research in neuroinformatics, as Indiveri points out: “These components come closer to real neurons than previous ones. As a result, they help researchers to better test hypotheses in neuroinformatics and hopefully gain a better understanding of the computing principles of real neuronal circuits in humans and animals.”

Here’s a link to and a citation for the paper,

Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing by Rohit Abraham John, Yiğit Demirağ, Yevhen Shynkarenko, Yuliia Berezovska, Natacha Ohannessian, Melika Payvand, Peng Zeng, Maryna I. Bodnarchuk, Frank Krumeich, Gökhan Kara, Ivan Shorubalko, Manu V. Nair, Graham A. Cooke, Thomas Lippert, Giacomo Indiveri & Maksym V. Kovalenko. Nature Communications volume 13, Article number: 2074 (2022) DOI: https://doi.org/10.1038/s41467-022-29727-1 Published: 19 April 2022

This paper is open access.

How AI-designed fiction reading lists and self-publishing help nurture far-right and neo-Nazi novelists

Literary theorists Helen Young and Geoff M Boucher, both at Deakin University (Australia), have co-written a fascinating May 29, 2022 essay on The Conversation (and republished on phys.org) analyzing some of the reasons (e.g., novels) for the resurgence in neo-Nazi activity and far-right extremism, Note: Links have been removed,

Far-right extremists pose an increasing risk in Australia and around the world. In 2020, ASIO [Australian Security Intelligence Organisation] revealed that about 40% of its counter-terrorism work involved the far right.

The recent mass murder in Buffalo, U.S., and the attack in Christchurch, New Zealand, in 2019 are just two examples of many far-right extremist acts of terror.

Far-right extremists have complex and diverse methods for spreading their messages of hate. These can include through social media, video games, wellness culture, interest in medieval European history, and fiction [emphasis mine]. Novels by both extremist and non-extremist authors feature on far-right “reading lists” designed to draw people into their beliefs and normalize hate.

Here’s more about how the books get published and distributed, from the May 29, 2022 essay, Note: Links have been removed,

Publishing houses once refused to print such books, but changes in technology have made traditional publishers less important. With self-publishing and e-books, it is easy for extremists to produce and distribute their fiction.

In this article, we have only given the titles and authors of those books that are already notorious, to avoid publicizing other dangerous hate-filled fictions.

Why would far-right extremists write novels?

Reading fiction is different to reading non-fiction. Fiction offers readers imaginative scenarios that can seem to be truthful, even though they are not fact-based. It can encourage readers to empathize with the emotions, thoughts and ethics of characters, particularly when they recognize those characters as being “like” them.

A novel featuring characters who become radicalized to far-right extremism, or who undertake violent terrorist acts, can help make those things seem justified and normal.

Novels that promote political violence, such as The Turner Diaries, are also ways for extremists to share plans and give readers who hold extreme views ideas about how to commit terrorist acts. …

In the late 20th century, far-right extremists without Pierce’s notoriety [American neo-Nazi William L. Pierce published The Turner Diaries (1978)] found it impossible to get their books published. One complained about this on his blog in 1999, blaming feminists and Jewish people. Just a few years later, print-on-demand and digital self-publishing made it possible to circumvent this difficulty.

The same neo-Nazi self-published what he termed “a lifetime of writing” in the space of a few years in the early 2000s. The company he paid to produce his books—iUniverse.com—helped get them onto the sales lists of major booksellers Barnes and Noble and Amazon in the early 2000s, making a huge difference to how easily they circulated outside extremist circles.

It still produces print-on-demand hard copies, even though the author has died. The same author’s books also circulate in digital versions, including on Google Play and Kindle, making them easily accessible.

Distributing extremist novels digitally

Far-right extremists use social media to spread their beliefs, but other digital platforms are also useful for them.

Seemingly innocent sites that host a wide range of mainstream material, such as Google Books, Project Gutenberg, and the Internet Archive, are open to exploitation. Extremists use them to share, for example, material denying the Holocaust alongside historical Nazi newspapers.

Amazon’s Kindle self-publishing service has been called “a haven for white supremacists” because of how easy it is for them to circulate political tracts there. The far-right extremist who committed the Oslo terrorist attacks in 2011 recommended in his manifesto that his followers use Kindle to spread his message.

Our research has shown that novels by known far-right extremists have been published and circulated through Kindle as well as other digital self-publishing services.

AI and its algorithms also play a role, from the May 29, 2022 essay,

Radicalising recommendations

As we researched how novels by known violent extremists circulate, we noticed that the sales algorithms of mainstream platforms were suggesting others that we might also be interested in. Sales algorithms work by recommending items that customers who purchased one book have also viewed or bought.
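The "customers who bought X also bought Y" mechanism the authors describe is, in essence, item-to-item collaborative filtering. A minimal sketch, using invented basket data, shows why it links items purely by co-purchase patterns rather than by their content:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each inner list is one customer's basket.
baskets = [
    ["book_a", "book_b"],
    ["book_a", "book_b", "book_c"],
    ["book_b", "book_c"],
    ["book_a", "book_d"],
]

# Count how often each ordered pair of items appears in the same basket.
co_purchase = Counter()
for basket in baskets:
    for x, y in combinations(sorted(set(basket)), 2):
        co_purchase[(x, y)] += 1
        co_purchase[(y, x)] += 1

def recommend(item, top_n=2):
    """'Customers who bought `item` also bought...': rank by co-purchase count."""
    scores = {other: n for (i, other), n in co_purchase.items() if i == item}
    return [other for other, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_n]

print(recommend("book_a", top_n=1))  # ['book_b'] — it co-occurs most often with book_a
```

Because nothing in this logic inspects what the books actually say, a few purchases spanning extremist and mainstream titles are enough to create the two-click trails the authors describe.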

Those recommendations directed us to an array of novels that, when we investigated them, proved to resonate with far-right ideologies.

A significant number of them were by authors with far-right political views. Some had ties to US militia movements and the gun-obsessed “prepper” subculture. Almost all of the books were self-published as e-books and print-on-demand editions.

Without the marketing and distribution channels of established publishing houses, these books rely on digital circulation for sales, including sale recommendation algorithms.

The trail of sales recommendations led us, with just two clicks, to the novels of mainstream authors. They also led us back again, from mainstream authors’ books to extremist novels. This is deeply troubling. It risks unsuspecting readers being introduced to the ideologies, world-views and sometimes powerful emotional narratives of far-right extremist novels designed to radicalise.

It’s not always easy to tell right away if you’re reading fiction promoting far-right ideologies, from the May 29, 2022 essay,

Recognising far-right messages

Some extremist novels follow the lead of The Turner Diaries and represent the start of a racist, openly genocidal war alongside a call to bring one about. Others are less obvious about their violent messages.

Some are not easily distinguished from mainstream novels – for example, from political thrillers and dystopian adventure stories like those of Tom Clancy or Matthew Reilly – so what is different about them? Openly neo-Nazi authors, like Pierce, often use racist, homophobic and misogynist slurs, but many do not. This may be to help make their books more palatable to general readers, or to avoid digital moderation based on specific words.

Knowing more about far-right extremism can help. Researchers generally say that there are three main things that connect the spectrum of far-right extremist politics: acceptance of social inequality, authoritarianism, and embracing violence as a tool for political change. Willingness to commit or endorse violence is a key factor separating extremism from other radical politics.

It is very unlikely that anyone would become radicalised to violent extremism just by reading novels. Novels can, however, reinforce political messages heard elsewhere (such as on social media) and help make those messages and acts of hate feel justified.

With the growing threat of far-right extremism and deliberate recruitment strategies of extremists targeting unexpected places, it is well worth being informed enough to recognise the hate-filled stories they tell.

I recommend reading the essay as my excerpts don’t do justice to the ideas being presented. As Young and Boucher note, it’s “… unlikely that anyone would become radicalised to violent extremism …” by reading novels but far-right extremists and neo-Nazis write fiction because the tactic works at some level.