Category Archives: robots

Single chip mimics human vision and memory abilities

A June 15, 2023 RMIT University (Australia) press release (also on EurekAlert but published June 14, 2023) announces a neuromorphic (brainlike) computer chip, which mimics human vision and ‘creates’ memories,

Researchers have created a small device that ‘sees’ and creates memories in a similar way to humans, in a promising step towards one day having applications that can make rapid, complex decisions such as in self-driving cars.

The neuromorphic invention is a single chip enabled by a sensing element, doped indium oxide, that’s thousands of times thinner than a human hair and requires no external parts to operate.

RMIT University engineers in Australia led the work, with contributions from researchers at Deakin University and the University of Melbourne.

The team’s research demonstrates a working device that captures, processes and stores visual information. With precise engineering of the doped indium oxide, the device mimics a human eye’s ability to capture light, pre-packages and transmits information like an optic nerve, and stores and classifies it in a memory system the way our brains can.

Collectively, these functions could enable ultra-fast decision making, the team says.

Team leader Professor Sumeet Walia said the new device can perform all necessary functions – sensing, creating and processing information, and retaining memories – rather than relying on external energy-intensive computation, which prevents real-time decision making.

“Performing all of these functions on one small device had proven to be a big challenge until now,” said Walia from RMIT’s School of Engineering.

“We’ve made real-time decision making a possibility with our invention, because it doesn’t need to process large amounts of irrelevant data and it’s not being slowed down by data transfer to separate processors.”

What did the team achieve and how does the technology work?

The new device demonstrated an ability to retain information for longer periods of time than previously reported devices, without needing frequent electrical signals to refresh the memory. This ability significantly reduces energy consumption and enhances the device’s performance.
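
The paper’s title points to the mechanism: a long-lived ‘persistent photocurrent’ in the 3 nm doped indium oxide layer, meaning a flash of light leaves behind a conductance change that can be read out long after the exposure ends. As a rough intuition only, persistent photocurrents are often modelled as a slow, stretched-exponential decay; here is a toy sketch of such a memory (the time constant and stretch exponent are invented for illustration and are not taken from the paper):

```python
import numpy as np

# Toy model of a persistent-photocurrent memory. The parameter values
# below are illustrative assumptions, not measurements from the paper.
TAU = 500.0   # characteristic decay time in seconds (assumed)
BETA = 0.5    # stretch exponent for stretched-exponential relaxation (assumed)

def stored_signal(t_after_pulse, i0=1.0):
    """Photocurrent remaining t seconds after a light pulse sets it to i0."""
    return i0 * np.exp(-(t_after_pulse / TAU) ** BETA)

# Unlike a DRAM-style cell, which loses its state within milliseconds
# unless it is refreshed, the stored signal here stays readable for minutes.
for t in [0.1, 1, 10, 100]:
    print(f"t = {t:6.1f} s  stored signal = {stored_signal(t):.2f}")
```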

Their findings and analysis are published in Advanced Functional Materials.

First author and RMIT PhD researcher Aishani Mazumder said the human brain used analog processing, which allowed it to process information quickly and efficiently using minimal energy.

“By contrast, digital processing is energy and carbon intensive, and inhibits rapid information gathering and processing,” she said.

“Neuromorphic vision systems are designed to use similar analog processing to the human brain, which can greatly reduce the amount of energy needed to perform complex visual tasks compared with today’s technologies.”

What are the potential applications?

The team used ultraviolet light as part of its experiments, and is working to expand this technology even further to visible and infrared light – with many possible applications such as bionic vision, autonomous operations in dangerous environments, shelf-life assessments of food and advanced forensics.

“Imagine a self-driving car that can see and recognise objects on the road in the same way that a human driver can, or being able to rapidly detect and track space junk. This would be possible with neuromorphic vision technology.”

Walia said neuromorphic systems could adapt to new situations over time, becoming more efficient with more experience.

“Traditional computer vision systems – which cannot be miniaturised like neuromorphic technology – are typically programmed with specific rules and can’t adapt as easily,” he said.

“Neuromorphic robots have the potential to run autonomously for long periods, in dangerous situations where workers are exposed to possible cave-ins, explosions and toxic air.”

The human eye has a single retina that captures an entire image, which is then processed by the brain to identify objects, colours and other visual features.

The team’s device mimicked the retina’s capabilities by using single-element image sensors that capture, store and process visual information on one platform, Walia said.

“The human eye is exceptionally adept at responding to changes in the surrounding environment in a faster and much more efficient way than cameras and computers currently can,” he said.

“Taking inspiration from the eye, we have been working for several years on creating a camera that possesses similar abilities, through the process of neuromorphic engineering.” 

Here’s a link to and a citation for the paper,

Long Duration Persistent Photocurrent in 3 nm Thin Doped Indium Oxide for Integrated Light Sensing and In-Sensor Neuromorphic Computation by Aishani Mazumder, Chung Kim Nguyen, Thiha Aung, Mei Xian Low, Md. Ataur Rahman, Salvy P. Russo, Sherif Abdulkader Tawfik, Shifan Wang, James Bullock, Vaishnavi Krishnamurthi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202303641 First published: 14 June 2023

This paper is open access.

Dealing with mosquitos: a robot story and an engineered human tissue story

I have two ‘mosquito and disease’ stories, the first concerning dengue fever and the second, malaria.

Dengue fever in Taiwan

A June 8, 2023 news item on phys.org features robotic vehicles, dengue fever, and mosquitoes,

Unmanned ground vehicles can be used to identify and eliminate the breeding sources of mosquitos that carry dengue fever in urban areas, according to a new study published in PLOS Neglected Tropical Diseases by Wei-Liang Liu of the Taiwan National Mosquito-Borne Diseases Control Research Center, and colleagues.

It turns out sewers are a problem, according to this June 8, 2023 PLOS (Public Library of Science) news release on EurekAlert, which provides more context and detail,

Dengue fever is an infectious disease caused by the dengue virus and spread by several mosquito species in the genus Aedes, which also spread chikungunya, yellow fever and zika. Through the process of urbanization, sewers have become easy breeding grounds for Aedes mosquitos and most current mosquito monitoring programs struggle to monitor and analyze the density of mosquitos in these hidden areas.

In the new control effort, researchers combined a crawling robot, wire-controlled cable car and real-time monitoring system into an unmanned ground vehicle system (UGV) that can take high-resolution, real-time images of areas within sewers. From May to August 2018, the system was deployed in five administrative districts in Kaohsiung city, Taiwan, with covered roadside sewer ditches suspected to be hotspots for mosquitos. Mosquito gravitraps were placed above the sewers to monitor the effects of the UGV intervention on adult mosquitos in the area.

In 20.7% of inspected sewers, the system found traces of Aedes mosquitos in stages from larvae to adult. In positive sewers, additional prevention control measures were carried out, using either insecticides or high-temperature water jets. Immediately after these interventions, the gravitrap index (GI) – a measure of the adult mosquito density nearby – dropped significantly from 0.62 to 0.19.

“The widespread use of UGVs can potentially eliminate some of the breeding sources of vector mosquitoes, thereby reducing the annual prevalence of dengue fever in Kaohsiung city,” the authors say.

Here’s a link to and a citation for the paper,

Use of unmanned ground vehicle systems in urbanized zones: A study of vector Mosquito surveillance in Kaohsiung by Yu-Xuan Chen, Chao-Ying Pan, Bo-Yu Chen, Shu-Wen Jeng, Chun-Hong Chen, Joh-Jong Huang, Chaur-Dong Chen, Wei-Liang Liu. PLOS Neglected Tropical Diseases DOI: https://doi.org/10.1371/journal.pntd.0011346 Published: June 8, 2023

This paper is open access.

Dengue on the rise

Like many diseases, dengue is one where you may not have symptoms (asymptomatic), or they’re relatively mild and can be handled at home, or you may need care in a hospital and, in some cases, it can be fatal.

The World Health Organization (WHO) notes that dengue fever cases have increased exponentially since 2000 (from the March 17, 2023 version of the WHO’s “Dengue and severe dengue” fact sheet),

Global burden

The incidence of dengue has grown dramatically around the world in recent decades, with cases reported to WHO increasing from 505 430 in 2000 to 5.2 million in 2019. A vast majority of cases are asymptomatic or mild and self-managed, and hence the actual numbers of dengue cases are under-reported. Many cases are also misdiagnosed as other febrile illnesses (1).

One modelling estimate indicates 390 million dengue virus infections per year of which 96 million manifest clinically (2). Another study on the prevalence of dengue estimates that 3.9 billion people are at risk of infection with dengue viruses.

The disease is now endemic in more than 100 countries in the WHO Regions of Africa, the Americas, the Eastern Mediterranean, South-East Asia and the Western Pacific. The Americas, South-East Asia and Western Pacific regions are the most seriously affected, with Asia representing around 70% of the global disease burden.

Dengue is spreading to new areas including Europe, [emphasis mine] and explosive outbreaks are occurring. Local transmission was reported for the first time in France and Croatia in 2010 [emphasis mine] and imported cases were detected in 3 other European countries.

The largest number of dengue cases ever reported globally was in 2019. All regions were affected, and dengue transmission was recorded in Afghanistan for the first time. The American Region reported 3.1 million cases, with more than 25 000 classified as severe. A high number of cases were reported in Bangladesh (101 000), Malaysia (131 000), the Philippines (420 000) and Vietnam (320 000) in Asia.

Dengue continues to affect Brazil, Colombia, the Cook Islands, Fiji, India, Kenya, Paraguay, Peru, the Philippines, the Reunion Islands and Vietnam as of 2021. 

There’s information from an earlier version of the fact sheet, in my July 2, 2013 posting, highlighting different aspects of the disease, e.g., “About 2.5% of those affected die.”

A July 21, 2023 United Nations press release warns that the danger from mosquitoes spreading dengue fever could increase along with the temperature,

Global warming marked by higher average temperatures, precipitation and longer periods of drought, could prompt a record number of dengue infections worldwide, the World Health Organization (WHO) warned on Friday [July 21, 2023].

Despite the absence of mosquitoes infected with the dengue virus in Canada, the government has a Dengue fever information page. At this point, the concern is likely focused on travelers who’ve contracted the disease from elsewhere. However, I am guessing that researchers are keeping a close eye on Canadian mosquitoes as these situations can change.

Malaria in Florida (US)

The researchers from the University of Central Florida (UCF) couldn’t have known when they began their project to study mosquito bites and disease that Florida would register its first malaria cases in 20 years this summer. From a July 26, 2023 article by Stephanie Colombini for NPR ([US] National Public Radio), Note: Links have been removed,

First local transmission in U.S. in 20 years

Heath [Hannah Heath] is one of eight known people in recent months who have contracted malaria in the U.S., after being bitten by a local mosquito, rather than while traveling abroad. The cases comprise the nation’s first locally transmitted outbreak in 20 years. The last time this occurred was in 2003, when eight people tested positive for malaria in Palm Beach, Fla.

One of the eight cases is in Texas; the rest occurred in the northern part of Sarasota County.

The Florida Department of Health recorded the most recent case in its weekly arbovirus report for July 9-15 [2023].

For the past month, health officials have issued a mosquito-borne illness alert for residents in Sarasota and neighboring Manatee County. Mosquito management teams are working to suppress the population of the type of mosquito that carries malaria, Anopheles.

Sarasota Memorial Hospital has treated five of the county’s seven malaria patients, according to Dr. Manuel Gordillo, director of infection control.

“The cases that are coming in are classic malaria, you know they come in with fever, body aches, headaches, nausea, vomiting, diarrhea,” Gordillo said, explaining that his hospital usually treats just one or two patients a year who acquire malaria while traveling abroad in Central or South America, or Africa.

All the locally acquired cases were of Plasmodium vivax malaria, a strain that typically produces milder symptoms or can even be asymptomatic, according to the Centers for Disease Control and Prevention. But the strain can still cause death, and pregnant people and children are particularly vulnerable.

Malaria does not spread from human-to-human contact; a mosquito carrying the disease has to bite someone to transmit the parasites.

Workers with Sarasota County Mosquito Management Services have been especially busy since May 26 [2023], when the first local case was confirmed.

Like similar departments across Florida, the team is experienced in responding to small outbreaks of mosquito-borne illnesses such as West Nile virus or dengue. They have protocols for addressing travel-related cases of malaria as well, but have ramped up their efforts now that they have confirmation that transmission is occurring locally between mosquitoes and humans.

While organizations like the World Health Organization have cautioned climate change could lead to more global cases and deaths from malaria and other mosquito-borne diseases, experts say it’s too soon to tell if the local transmission seen these past two months has any connection to extreme heat or flooding.

“We don’t have any reason to think that climate change has contributed to these particular cases,” said Ben Beard, deputy director of the CDC’s [US Centers for Disease Control and Prevention] division of vector-borne diseases and deputy incident manager for this year’s local malaria response.

“In a more general sense though, milder winters, earlier springs, warmer, longer summers – all of those things sort of translate into mosquitoes coming out earlier, getting their replication cycles sooner, going through those cycles faster and being out longer,” he said. “And so we are concerned about the impact of climate change and environmental change in general on what we call vector-borne diseases.”

Beard co-authored a 2019 report that highlights a significant increase in diseases spread by ticks and mosquitoes in recent decades. Lyme disease and West Nile virus were among the top five most prevalent.

“In the big picture it’s a very significant concern that we have,” he said.

Engineered tissue and bloodthirsty mosquitoes

A June 8, 2023 University of Central Florida (UCF) news release (also on EurekAlert) by Eric Eraso describes the research into engineered human tissue and features a ‘bloodthirsty’ video. First, the video,

Note: A link has been removed,

A UCF research team has engineered tissue with human cells that mosquitoes love to bite and feed upon — with the goal of helping fight deadly diseases transmitted by the biting insects.

A multidisciplinary team led by College of Medicine biomedical researcher Bradley Jay Willenberg with Mollie Jewett (UCF Burnett School of Biomedical Sciences) and Andrew Dickerson (University of Tennessee) lined 3D capillary gel biomaterials with human cells to create engineered tissue and then infused it with blood. Testing showed mosquitoes readily bite and blood feed on the constructs. Scientists hope to use this new platform to study how pathogens that mosquitoes carry impact and infect human cells and tissues. Presently, researchers rely largely upon animal models and cells cultured on flat dishes for such investigations.

Further, the new system holds great promise for blood feeding mosquito species that have proven difficult to rear and maintain as colonies in the laboratory, an important practical application. The Willenberg team’s work was published Friday in the journal Insects.

Mosquitos have often been called the world’s deadliest animal, as vector-borne illnesses, including those from mosquitos, cause more than 700,000 deaths worldwide each year. Malaria, dengue, Zika virus and West Nile virus are all transmitted by mosquitos. Even for those who survive these illnesses, many are left suffering from organ failure, seizures and serious neurological impacts.

“Many people get sick with mosquito-borne illnesses every year, including in the United States. The toll of such diseases can be especially devastating for many countries around the world,” Willenberg says.

This worldwide impact of mosquito-borne disease is what drives Willenberg, whose lab employs a unique blend of biomedical engineering, biomaterials, tissue engineering, nanotechnology and vector biology to develop innovative mosquito surveillance, control and research tools. He said he hopes to adapt his new platform for application to other vectors such as ticks, which spread Lyme disease.

“We have demonstrated the initial proof-of-concept with this prototype,” he says. “I think there are many potential ways to use this technology.”

Captured on video, Willenberg observed mosquitoes enthusiastically blood feeding from the engineered tissue, much as they would from a human host. This demonstration represents the achievement of a critical milestone for the technology: ensuring the tissue constructs were appetizing to the mosquitoes.

“As one of my mentors shared with me long ago, the goal of physicians and biomedical researchers is to help reduce human suffering,” he says. “So, if we can provide something that helps us learn about mosquitoes, intervene with diseases and, in some way, keep mosquitoes away from people, I think that is a positive.”

Willenberg came up with the engineered tissue idea when he learned the National Institutes of Health (NIH) was looking for new in vitro 3D models that could help study pathogens that mosquitoes and other biting arthropods carry.

“When I read about the NIH seeking these models, it got me thinking that maybe there is a way to get the mosquitoes to bite and blood feed [on the 3D models] directly,” he says. “Then I can bring in the mosquito to do the natural delivery and create a complete vector-host-pathogen interface model to study it all together.”

As this platform is still in its early stages, Willenberg wants to incorporate additional types of cells to move the system closer to human skin. He is also developing collaborations with experts who study pathogens and work with infected vectors, and is working with mosquito control organizations to see how they can use the technology.

“I have a particular vision for this platform, and am going after it. My experience too is that other good ideas and research directions will flourish when it gets into the hands of others,” he says. “At the end of the day, the collective ideas and efforts of the various research communities propel a system like ours to its full potential. So, if we can provide them tools to enable their work, while also moving ours forward at the same time, that is really exciting.”

Willenberg received his Ph.D. in biomedical engineering from the University of Florida and continued there for his postdoctoral training and then in scientist, adjunct scientist and lecturer positions. He joined the UCF College of Medicine in 2014, where he is currently an assistant professor of medicine.

Willenberg is also a co-founder, co-owner and manager of Saisijin Biotech, LLC and has a minor ownership stake in Sustained Release Technologies, Inc. Neither entity was involved in any way with the work presented in this story. Team members may also be listed as inventors on patent/patent applications that could result in royalty payments. This technology is available for licensing. To learn more, please visit ucf.flintbox.com/technologies/44c06966-2748-4c14-87d7-fc40cbb4f2c6.

Here’s a link to and a citation for the paper,

Engineered Human Tissue as A New Platform for Mosquito Bite-Site Biology Investigations by Corey E. Seavey, Mona Doshi, Andrew P. Panarello, Michael A. Felice, Andrew K. Dickerson, Mollie W. Jewett and Bradley J. Willenberg. Insects 2023, 14(6), 514; https://doi.org/10.3390/insects14060514 Published: 2 June 2023

This paper is open access.

That final paragraph in the news release is new to me. I’ve seen them list companies where the researchers have financial interests but this is the first time I’ve seen a news release that offers a statement attempting to cover all the bases, including some future possibilities such as: “Team members may also be listed as inventors on patent/patent applications that could result in royalty payments.”

It seems pretty clear that there’s increasing concern about mosquito-borne diseases no matter where you live.

Should robots have rights? Confucianism offers some ideas

Fascinating, although I’m not sure I entirely understand his argument.

This May 24, 2023 Carnegie Mellon University (CMU) news release (also on EurekAlert but published May 25, 2023) has Professor Tae Wan Kim’s clarification, Note: Links have been removed,

Philosophers and legal scholars have explored significant aspects of the moral and legal status of robots, with some advocating for giving robots rights. As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.

The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery.

“People are worried about the risks of granting rights to robots,” notes Tae Wan Kim, Associate Professor of Business Ethics at CMU’s Tepper School of Business, who conducted the analysis. “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers – not rights bearers – could work better.”

Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self. This, in turn, requires a unique perspective on rites, with people enhancing themselves morally by participating in proper rituals.

When considering robots, Kim suggests that the Confucian alternative of assigning rites—or what he calls role obligations—to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning.

“Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously,” explains Kim. “Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans’ capacity to recognize and execute team activities—and a machine can learn that ability in various ways.”

Kim acknowledges that some will question why robots should be treated respectfully in the first place. “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves,” he suggests.

Various non-natural entities—such as corporations—are considered people and even assume some Constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments.

Here’s a link to and a citation for the paper,

Should Robots Have Rights or Rites? by Tae Wan Kim, Alan Strudler. Communications of the ACM, June 2023, Vol. 66 No. 6, Pages 78-85 DOI: 10.1145/3571721

This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). In other words, this paper is open access.

The paper is quite readable, as academic papers go (Note: Links have been removed),

Boston Dynamics recently released a video introducing Atlas, a six-foot bipedal humanoid robot capable of search and rescue missions. Part of the video contained employees apparently abusing Atlas (for example, kicking, hitting it with a hockey stick, pushing it with a heavy ball). The video quickly raised a public and academic debate regarding how humans should treat robots. A robot, in some sense, is nothing more than software embedded in hardware, much like a laptop computer. If it is your property and kicking it harms no one nor infringes on anyone’s rights, it’s okay to kick it, although that would be a stupid thing to do. Likewise, there seems to be no significant reason that kicking a robot should be deemed as a moral or legal wrong. However, the question—”What do we owe to robots?”—is not that simple. Philosophers and legal scholars have seriously explored and defended some significant aspects of the moral and legal status of robots—and their rights.3,6,15,16,24,29,36 In fact, various non-natural entities—for example, corporations—are treated as persons and even enjoy some constitutional rights.a In addition, humans are not the only species that get moral and legal status. In most developed societies, for example, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments. The fact that corporations are treated as persons and animals are recognized as having some rights does not entail that robots should be treated analogously.

Connie Lin’s May 26, 2023 article for Fast Company “Confucianism for robots? Ethicist says that’s better than giving them full rights” offers a brief overview and more comments from Kim. For the curious, you can find out more about Boston Dynamics and Atlas here.

Smart fabric from University of Waterloo (Canada) responds to temperature and electricity

This textile from the University of Waterloo is intriguing,

Caption: An electric current is applied to an engineered smart fabric consisting of plastic and steel fibres. Credit: University of Waterloo

An April 24, 2023 news item on phys.org introduces this new material,

A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.

The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.

An April 24, 2023 University of Waterloo news release (also on EurekAlert), which originated the news item, provides more detail, Note: A link has been removed,

Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.

“As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”

The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure. 

Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.

The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.

“The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.

“Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”

The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

Here’s a link to and a citation for the paper,

Multi-Stimuli Dually-Responsive Intelligent Woven Structures with Local Programmability for Biomimetic Applications by Runxin Xu, Guanzheng Wu, Mengmeng Jiang, Shaojie Cao, Mahyar Panahi-Sarmad, Milad Kamkar, Xueliang Xiao. Small DOI: https://doi.org/10.1002/smll.202207900 First published: 19 February 2023

This paper is open access.

Mind-controlled robots based on graphene: an Australian research story

As they keep saying these days, ‘it’s not science fiction anymore’.

It’s so fascinating I almost forgot what it’s like to make a video, where it can take hours of shooting to get a few usable minutes (the video is a little over 3 mins.) and all the failures are edited out. Plus, I haven’t found any information about training both the human users and the robotic dogs/quadrupeds. Does it take minutes? hours? days? more? Can you work with any old robotic dog/quadruped or does it have to be the one you’ve ‘gotten to know’? Etc. Bottom line: I don’t know if I can take what I see in the video at face value.

A March 20, 2023 news item on Nanowerk announces the work from Australia,

The advanced brain-computer interface [BCI] was developed by Distinguished Professor Chin-Teng Lin and Professor Francesca Iacopi, from the UTS [University of Technology Sydney; Australia] Faculty of Engineering and IT, in collaboration with the Australian Army and Defence Innovation Hub.

As well as defence applications, the technology has significant potential in fields such as advanced manufacturing, aerospace and healthcare – for example allowing people with a disability to control a wheelchair or operate prosthetics.

“The hands-free, voice-free technology works outside laboratory settings, anytime, anywhere. It makes interfaces such as consoles, keyboards, touchscreens and hand-gesture recognition redundant,” said Professor Iacopi.

A March 20, 2023 University of Technology Sydney (UTS) press release, also on EurekAlert but published March 19, 2023, which originated the news item, describes the interface in more detail,

“By using cutting edge graphene material, combined with silicon, we were able to overcome issues of corrosion, durability and skin contact resistance, to develop the wearable dry sensors,” she said.

A new study outlining the technology has just been published in the peer-reviewed journal ACS Applied Nano Materials. It shows that the graphene sensors developed at UTS are very conductive, easy to use and robust.

The hexagon-patterned sensors are positioned over the back of the scalp to detect brainwaves from the visual cortex. The sensors are resilient to harsh conditions, so they can be used in extreme operating environments.

The user wears a head-mounted augmented reality lens which displays white flickering squares. By concentrating on a particular square, the brainwaves of the operator are picked up by the biosensor, and a decoder translates the signal into commands.
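
The press release doesn’t name the paradigm, but flickering visual targets decoded from the visual cortex is the classic steady-state visually evoked potential (SSVEP) setup: each square flickers at its own frequency, the attended frequency shows up strongly in the EEG, and the decoder picks the candidate frequency with the most spectral power. Here is a minimal sketch of that idea, using simulated data and assumed frequencies rather than anything from the UTS system:

```python
import numpy as np

FS = 256                      # EEG sampling rate in Hz (assumed)
FLICKER_HZ = [8, 10, 12, 15]  # one flicker frequency per command square (assumed)

def decode_ssvep(eeg, fs=FS, candidates=FLICKER_HZ):
    """Return the candidate flicker frequency with the most spectral power."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Sum the power in a narrow band around each candidate frequency.
    scores = [power[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()
              for f in candidates]
    return candidates[int(np.argmax(scores))]

# Simulate two seconds of EEG: a 10 Hz evoked response buried in noise.
t = np.arange(0, 2, 1.0 / FS)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))
print(decode_ssvep(eeg))  # expected output: 10
```

Production decoders use more robust statistics (canonical correlation analysis is common in the SSVEP literature), but the frequency-tagging principle is the same.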

The technology was recently demonstrated by the Australian Army, where soldiers operated a Ghost Robotics quadruped robot using the brain-machine interface [BMI]. The device allowed hands-free command of the robotic dog with up to 94% accuracy.

“Our technology can issue at least nine commands in two seconds. This means we have nine different kinds of commands and the operator can select one from those nine within that time period,” Professor Lin said.

“We have also explored how to minimise noise from the body and environment to get a clearer signal from an operator’s brain,” he said.

The researchers believe the technology will be of interest to the scientific community, industry and government, and hope to continue making advances in brain-computer interface systems.

Here’s a link to and a citation for the paper,

Noninvasive Sensors for Brain–Machine Interfaces Based on Micropatterned Epitaxial Graphene by Shaikh Nayeem Faisal, Tien-Thong Nguyen Do, Tasauf Torzo, Daniel Leong, Aiswarya Pradeepkumar, Chin-Teng Lin, and Francesca Iacopi. ACS Appl. Nano Mater. 2023, 6, 7, 5440–5447 DOI: https://doi.org/10.1021/acsanm.2c05546 Publication Date: March 16, 2023 Copyright © 2023 The Authors. Published by American Chemical Society

This paper is open access.

Comments

For anyone who’s bothered by this, the terminology is fluid. Sometimes you’ll see brain-computer interface (BCI), sometimes human-computer interface or brain-machine interface (BMI) and, as I’ve now found in the video (although I notice the Australians are not hyphenating it), brain-robotic interface (BRI).

You can find Ghost Robotics here, the makers of the robotic ‘dog’.

There seems to be a movement to replace the word ‘soldiers’ with ‘warfighters’ and, according to this video, ‘military practitioners’. I wonder how medical doctors and other practitioners feel about the use of ‘practitioners’ in a military context.

Fairy-like robot powered by wind and light

Caption: For their artificial fairy, Hao Zeng and Jianfeng Yang got inspired by dandelion seeds. Credit: Jianfeng Yang / Tampere University

That image makes me think of Tinker Bell (the fairy in the ‘Peter Pan’ novel, play, and movies) but I can also see how the researchers were inspired by dandelion seeds, which we used to call ‘wishes’.

Caption: Dandelion seeds. Credit: Public Domain Pictures

A January 30, 2023 news item on ScienceDaily announces the fairy-like robot,

The development of stimuli-responsive polymers has brought about a wealth of material-related opportunities for next-generation small-scale, wirelessly controlled soft-bodied robots. For some time now, engineers have known how to use these materials to make small robots that can walk, swim and jump. So far, no one has been able to make them fly.

Researchers of the Light Robots group at Tampere University [Finland] are now researching how to make smart material fly. Hao Zeng, Academy Research Fellow and the group leader, and Jianfeng Yang, a doctoral researcher, have come up with a new design for their project called FAIRY — Flying Aero-robots based on Light Responsive Materials Assembly. They have developed a polymer-assembly robot that flies by wind and is controlled by light.

A January 26, 2023 Tampere University press release (also on EurekAlert but published January 30, 2023), which originated the news item, elucidates why the researchers are excited about their work,

“Superior to its natural counterparts, this artificial seed is equipped with a soft actuator. The actuator is made of light-responsive liquid crystalline elastomer, which induces opening or closing actions of the bristles upon visible light excitation,” explains Hao Zeng.

The artificial fairy is controlled by light

The artificial fairy developed by Zeng and Yang has several biomimetic features. Because of its high porosity (0.95) and lightweight (1.2 mg) structure, it can easily float in the air directed by the wind. What is more, a stable separated vortex ring generation enables long-distance wind-assisted travelling.

“The fairy can be powered and controlled by a light source, such as a laser beam or LED,” Zeng says.

This means that light can be used to change the shape of the tiny dandelion seed-like structure. The fairy can adapt manually to wind direction and force by changing its shape. A light beam can also be used to control the take-off and landing actions of the polymer assembly.

Potential application opportunities in agriculture

Next, the researchers will focus on improving the material sensitivity to enable the operation of the device in sunlight. In addition, they will up-scale the structure so that it can carry micro-electronic devices such as GPS and sensors as well as biochemical compounds.

According to Zeng, there is potential for even more significant applications.

“It sounds like science fiction, but the proof-of-concept experiments included in our research show that the robot we have developed provides an important step towards realistic applications suitable for artificial pollination,” he reveals.

In the future, millions of artificial dandelion seeds carrying pollen could be dispersed freely by natural winds and then steered by light toward specific areas with trees awaiting pollination.

“This would have a huge impact on agriculture globally since the loss of pollinators due to global warming has become a serious threat to biodiversity and food production,” Zeng says.

Challenges remain to be solved

However, many problems need to be solved first. For example, how to control the landing spot in a precise way, and how to reuse the devices and make them biodegradable? These issues require close collaboration with materials scientists and people working on microrobotics.

The FAIRY project started in September 2021 and will last until August 2026. It is funded by the Academy of Finland. The flying robot is researched in cooperation with Dr. Wenqi Hu from Max Planck Institute for Intelligent Systems (Germany) and Dr. Hang Zhang from Aalto University.

Here’s a link to and a citation for the paper,

Dandelion-Inspired, Wind-Dispersed Polymer-Assembly Controlled by Light by Jianfeng Yang, Hang Zhang, Alex Berdin, Wenqi Hu, Hao Zeng. Advanced Science Volume 10, Issue 7 March 3, 2023 2206752 DOI: https://doi.org/10.1002/advs.202206752 First published online: 27 December 2022

This paper is open access.

In vitro biological neural networks (BNNs): review paper

The race to merge the biological with machines continues apace, as this press release makes clear. From a March 9, 2023 Beijing Institute of Technology Press Co. press release on EurekAlert, Note: A link has been removed,

A review paper by scientists at the Beijing Institute of Technology summarized recent efforts and future potentials in the use of in vitro biological neural networks (BNNs) for the realization of biological intelligence, with a focus on those related to robot intelligence.

The review paper, published on Jan. 10 in the journal Cyborg and Bionic Systems, provided an overview of 1) the underpinnings of intelligence presented in in vitro BNNs, such as memory and learning; 2) how these BNNs can be embodied with robots through bidirectional connection, forming so-called BNN-based neuro-robotic systems; 3) preliminary intelligent behaviors achieved by these neuro-robotic systems; and 4) current trends and future challenges in the research area of BNN-based neuro-robotic systems.

“Our human brain is a complex biological neural network (BNN) composed of billions of neurons, which gives rise to our consciousness and intelligence. However, studying the brain as a whole is extremely challenging due to its intricate nature. By culturing a part of the neurons from the brain in a Petri dish, simpler BNNs, such as mini-brains, can be formed, allowing for easier observation and investigation of the network. These mini-brains may provide valuable insights into the enigmatic origins of consciousness and intelligence,” explained study author Zhiqiang Yu, an assistant researcher at the Beijing Institute of Technology.

“Interestingly, mini-brains are not only structurally similar to human brains, but they can also learn and memorize information in a similar way,” said Yu. In particular, these in vitro BNNs share the same basic structure as in vivo BNNs, where neurons are connected through synapses, and they exhibit short-term memory through fading and hidden memory processes. Additionally, these mini-brains can perform supervised learning and be trained to respond to specific stimuli signals. Recently, researchers have demonstrated that in vitro BNNs can even accomplish unsupervised learning tasks, such as separating mixed signals. “This fascinating ability may have something to do with the famous free energy principle. That is, these BNNs have a tendency to minimize their uncertainty about the outer world,” said Yu.

These abilities of in vitro BNNs are quite intriguing. However, only having such a ‘mini-brain’ in hand is not enough for the rise of consciousness and intelligence. Our brain relies on our body to perceive, comprehend, and adapt to the outside world, and similarly, these mini-brains require a body to interact with their environment. A robot is an ideal candidate for this purpose, leading to a burgeoning interdisciplinary field at the intersection of neuroscience and robotics: BNN-based neuro-robotic systems.

“A stable bidirectional connection is a prerequisite for these systems,” said the study authors. “In this review, we summarize the mainstream means of constructing such a bidirectional connection, which can be broadly classified into two categories based on the direction of connection: from robots to BNNs and from BNNs to robots.” The former involves transmitting sensor signals from the robot to BNNs, utilizing electrical, optical, and chemical stimulation methods, while the latter records the neural activities of BNNs and decodes these activities into commands to control the robot, using extracellular, calcium, and intracellular recording techniques.
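
To make the two directions concrete, here is a schematic closed loop in code: a sensor reading is encoded into a stimulation parameter, the cultured network’s evoked activity is recorded, and that activity is decoded into a motor command. Everything below is hypothetical scaffolding – the network is faked with a random spike generator – and illustrates only the loop’s structure, not any specific system from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(distance_cm):
    """Robot -> BNN: map an obstacle distance to a stimulation frequency (Hz).
    The mapping rule here is a made-up example."""
    return float(np.clip(40 - distance_cm, 5, 40))

def stimulate_and_record(stim_hz, n_electrodes=8):
    """Stand-in for the cultured network. A real system would drive a
    microelectrode array and record extracellular spikes; here the evoked
    spike counts simply grow with the stimulation frequency."""
    return rng.poisson(lam=stim_hz / 2.0, size=n_electrodes)

def decode(spike_counts, threshold=90):
    """BNN -> robot: map population activity to a motor command (made-up rule)."""
    return "turn" if spike_counts.sum() > threshold else "forward"

# One pass through the loop for a far obstacle and a near one.
for distance in [30.0, 5.0]:
    spikes = stimulate_and_record(encode(distance))
    print(f"obstacle at {distance:4.1f} cm -> {decode(spikes)}")
```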

“Embodied by robots, in vitro BNNs exhibit a wide range of fascinating intelligent behaviors,” according to Yu. “These behaviors include supervised and unsupervised learning, memorization, mobile object tracking, active obstacle avoidance, and even learning to play games such as ‘Pong’.”

The intelligent behaviors displayed by these BNN-based neuro-robotic systems can be divided into two categories based on their dependence on either computing capacity or network plasticity, as explained by Yu. “In computing capacity-dependent behaviors, learning is unnecessary, and the BNN is regarded as an information processor that generates specific neural activities in response to stimuli. However, for the latter, learning is a crucial process, as the BNN adapts to stimuli and these changes are integral to the behaviors or tasks performed by the robot,” added Yu.

To facilitate easy comparison of the recording and stimulation techniques, encoding and decoding rules, training policies, and robot tasks, representative studies from these two categories have been compiled into two tables. Additionally, to provide readers with a historical overview of BNN-based neuro-robotic systems, several noteworthy studies have been selected and arranged chronologically.

The study authors also discussed current trends and main challenges in the field. According to Yu, “Four challenges are keen to be addressed and are being intensely investigated. How to fabricate BNNs in 3D, thereby making in vitro BNNs close to their in vivo counterparts, is the most urgent one of them.”

Perhaps the most challenging aspect is how to train these robot-embodied BNNs. The study authors noted that BNNs are composed only of neurons and lack the participation of various neuromodulators, which makes it difficult to transplant various animal training methods to BNNs. Additionally, BNNs have their own limitations. While a monkey can be trained to ride a bicycle, it is much more challenging to accomplish tasks that require higher-level thought processes, such as playing Go.

“The mystery of how consciousness and intelligence emerge from the network of cells in our brains still eludes neuroscientists,” said Yu. However, with the development of embodying in vitro BNNs with robots, we may observe more intelligent behaviors in them and bring people closer to the truth behind the mystery.

I think that ‘in vitro biological neural networks (BNNs) or mini-brains’ can also be called brain organoids, which seems to be the more popular term in some circles.

Here’s a link to and a citation for the paper,

An Overview of In Vitro Biological Neural Networks for Robot Intelligence by Zhe Chen, Qian Liang, Zihou Wei, Xie Chen, Qing Shi, Zhiqiang Yu, and Tao Sun. Cyborg and Bionic Systems 10 Jan 2023 Vol 4 Article ID: 0001 DOI: 10.34133/cbsystems.0001

This paper is open access.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
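
For scale, the task in that anecdote amounts to a few lines of elementary code; the surprise was not the arithmetic but that a next-token predictor would simulate a terminal running it at all. The primes part looks something like this (my own illustrative version, not the code from the report):

```python
def first_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no smaller prime divides it evenly
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```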

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
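
The difference between the two terms is simply how many worked examples the prompt contains. Schematically (the prompt text below is an invented illustration, in the style made famous by the GPT-3 paper):

```python
# Zero-shot: the model gets a task description and no worked examples.
zero_shot = "Translate English to French:\ncheese ->"

# Few-shot: the prompt also includes a handful of solved examples.
few_shot = """Translate English to French:
sea otter -> loutre de mer
plush giraffe -> girafe en peluche
cheese ->"""

print(zero_shot)
print(few_shot)
```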

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes's March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 
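For what it's worth, a back-of-the-envelope calculation, assuming the 2030 figure is exactly $2 trillion US (the report only says "more than"), puts the implied growth rate at roughly 21% per year:

```latex
% Implied compound annual growth rate (CAGR), 2023 to 2030 (7 years),
% from $515.31B to an assumed $2,000B:
\mathrm{CAGR} = \left(\frac{2000}{515.31}\right)^{1/7} - 1 \approx 0.214
```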

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review, finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles. The first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues, also published on May 5, 2023: “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

If you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought–provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation. Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Mind-reading prosthetic limbs

In a December 21, 2022 Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) press release (also on EurekAlert), problems with current neuroprostheses are described in the context of a new research project intended to solve them,

Lifting a glass, making a fist, entering a phone number using the index finger: it is amazing the things cutting-edge robotic hands can already do thanks to biomedical technology. However, things that work in the laboratory often encounter stumbling blocks when put to practice in daily life. The problem is the vast diversity of the intentions of each individual person, their surroundings and the things that can be found there, making a one size fits all solution all but impossible. A team at FAU is investigating how intelligent prostheses can be improved and made more reliable. The idea is that interactive artificial intelligence will help the prostheses to recognize human intent better, to register their surroundings and to continue to develop and improve over time. The project is to receive 4.5 million euros in funding from the EU, with FAU receiving 467,000 euros.

“We are literally working at the interface between humans and machines,” explains Prof. Dr. Claudio Castellini, professor of medical robotics at FAU. “The technology behind prosthetics for upper limbs has come on in leaps and bounds over the past decades.” Using surface electromyography, for example, skin electrodes at the remaining stump of the arm can detect the slightest muscle movements. These biosignals can be converted and transferred to the prosthetic limb as electrical impulses. “The wearer controls their artificial hand themselves using the stump. Methods taken from pattern recognition and interactive machine learning also allow people to teach their prosthetic their own individual needs when making a gesture or a movement.”
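The FAU/IntelliMan code isn't public, so here is a generic sketch of the pattern-recognition approach the press release describes: compute a simple feature (root-mean-square amplitude) over short windows of multi-channel surface EMG, then train a classifier to map those features to intended gestures. The channel count, the synthetic signals and the gesture labels below are all invented for illustration.

```python
# A minimal sketch of EMG-based intent detection (not the IntelliMan
# system): RMS features per electrode channel, fed to a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_CHANNELS, WINDOW = 8, 200          # 8 electrodes, 200-sample windows

def rms_features(window):
    """Root-mean-square amplitude per channel, a classic EMG feature."""
    return np.sqrt((window ** 2).mean(axis=1))

def synthetic_window(gesture):
    """Fake EMG: each gesture activates a different pair of channels."""
    gain = np.ones(N_CHANNELS)
    gain[gesture * 2:gesture * 2 + 2] = 5.0   # the 'active' electrodes
    return rng.normal(0.0, 1.0, (N_CHANNELS, WINDOW)) * gain[:, None]

GESTURES = [0, 1, 2]                 # e.g. rest, fist, point
X = np.array([rms_features(synthetic_window(g))
              for g in GESTURES for _ in range(50)])
y = np.repeat(GESTURES, 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify a new, unseen window; should recover gesture 1 ("fist").
print(clf.predict(rms_features(synthetic_window(1))[None, :]))
```

In a real prosthesis the same loop would run continuously on live electrode data, and "interactive machine learning" means the wearer can supply fresh labelled examples on the fly to correct the classifier, which is the kind of personalization the release describes.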

The advantages of AI over purely cosmetic prosthetics

At present, advanced robotic prosthetics have not yet reached optimal standards in terms of comfort, function and control, which is why many people with missing limbs still often prefer purely cosmetic prosthetics with no additional functions. The new EU Horizon project “AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics (IntelliMan)” therefore focuses on how these can interact with their environment even more effectively and for a specific purpose.

Researchers at FAU concentrate in particular on how to improve control of both real and virtual prosthetic upper limbs. The focus is on what is known as intent detection. Prof. Castellini and his team are continuing work on recording and analyzing human biosignals, and are designing innovative algorithms for machine learning aimed at detecting the individual movement patterns of individuals. User studies conducted on test persons both with and without physical disabilities are used to validate their results. Furthermore, FAU is also leading the area “Shared autonomy between humans and robots” in the EU project, aimed at checking the safety of the results.

At the interface between humans and machines

Prof. Castellini heads the “Assistive Intelligent Robotics” lab (AIROB) at FAU that focuses on controlling assistive robotics for the upper and lower limbs as well as functional electrostimulation. “We are exploiting the potential offered by intent detection to control assistive and rehabilitative robotics,” explains the researcher. “This covers wearable robots worn on the body such as prosthetics and exoskeletons, but also robot arms and simulations using virtual reality.” The professorship focuses particularly on biosignal processing of various sensor modalities and methods of machine learning for intent detection, in other words research directly at the interface between humans and machines.

In his previous research at the German Aerospace Center (DLR), where he was based until 2021, Castellini investigated the question of how virtual hand prosthetics could help amputees cope with phantom pain. Alongside Castellini, doctoral candidate Fabio Egle, a research associate at the professorship, is also actively involved in the IntelliMan project. The FAU share of the EU project will receive funding of 467,000 euros over a period of three and a half years, while the overall budget amounts to 6 million euros. The IntelliMan project is coordinated by the University of Bologna; the DLR, the Polytechnic University of Catalonia, the University of Genoa, Luigi Vanvitelli University in Campania and the Bavarian Research Alliance (BayFOR) are also involved.

Good luck to the team!

Bioinspired ‘smart’ materials a step towards soft robotics and electronics

An October 13, 2022 news item on Nanowerk describes some new work from the University of Texas at Austin,

Inspired by living things from trees to shellfish, researchers at The University of Texas at Austin set out to create a plastic much like many life forms that are hard and rigid in some places and soft and stretchy in others.

Their success — a first, using only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type — has brought about a new material that is 10 times as tough as natural rubber and could lead to more flexible electronics and robotics.

An October 13, 2022 University of Texas at Austin news release (also on EurekAlert), which originated the news item, delves further into the work,

“This is the first material of its type,” said Zachariah Page, assistant professor of chemistry and corresponding author on the paper. “The ability to control crystallization, and therefore the physical properties of the material, with the application of light is potentially transformative for wearable electronics or actuators in soft robotics.”

Scientists have long sought to mimic the properties of living structures, like skin and muscle, with synthetic materials. In living organisms, structures often combine attributes such as strength and flexibility with ease. When using a mix of different synthetic materials to mimic these attributes, materials often fail, coming apart and ripping at the junctures between different materials.

“Oftentimes, when bringing materials together, particularly if they have very different mechanical properties, they want to come apart,” Page said. Page and his team were able to control and change the structure of a plastic-like material, using light to alter how firm or stretchy the material would be.

Chemists started with a monomer, a small molecule that binds with others like it to form the building blocks for larger structures called polymers that were similar to the polymer found in the most commonly used plastic. After testing a dozen catalysts, they found one that, when added to their monomer and shown visible light, resulted in a semicrystalline polymer similar to those found in existing synthetic rubber. A harder and more rigid material was formed in the areas the light touched, while the unlit areas retained their soft, stretchy properties.

Because the substance is made of one material with different properties, it was stronger and could be stretched farther than most mixed materials.

The reaction takes place at room temperature, the monomer and catalyst are commercially available, and researchers used inexpensive blue LEDs as the light source in the experiment. The reaction also takes less than an hour and minimizes use of any hazardous waste, which makes the process rapid, inexpensive, energy efficient and environmentally benign.

The researchers will next seek to develop more objects with the material to continue to test its usability.

“We are looking forward to exploring methods of applying this chemistry towards making 3D objects containing both hard and soft components,” said first author Adrian Rylski, a doctoral student at UT Austin.

The team envisions the material could be used as a flexible foundation to anchor electronic components in medical devices or wearable tech. In robotics, strong and flexible materials are desirable to improve movement and durability.

Here’s a link to and a citation for the paper,

Polymeric multimaterials by photochemical patterning of crystallinity by Adrian K. Rylski, Henry L. Cater, Keldy S. Mason, Marshall J. Allen, Anthony J. Arrowood, Benny D. Freeman, Gabriel E. Sanoja, and Zachariah A. Page. Science 13 Oct 2022 Vol 378, Issue 6616 pp. 211-215 DOI: 10.1126/science.add6975

This paper is behind a paywall.