Category Archives: robots

Bionic jellyfish for deep ocean exploration

This research may be a little disturbing for animal lovers as it involves conjoining a jellyfish (or sea jelly) and a robotic device. That said, a February 29, 2024 news item on ScienceDaily highlights new research into the oceanic depths,

Jellyfish can’t do much besides swim, sting, eat, and breed. They don’t even have brains. Yet, these simple creatures can easily journey to the depths of the oceans in a way that humans, despite all our sophistication, cannot.

But what if humans could have jellyfish explore the oceans on our behalf, reporting back what they find? New research conducted at Caltech [California Institute of Technology] aims to make that a reality through the creation of what researchers call biohybrid robotic jellyfish. These creatures, which can be thought of as ocean-going cyborgs, augment jellyfish with electronics that enhance their swimming and a prosthetic “hat” that can carry a small payload while also making the jellyfish swim in a more streamlined manner.

A February 28, 2024 California Institute of Technology (Caltech) news release (also on EurekAlert) by Emily Velasco, which originated the news item, provides more detail,

The work, published in the journal Bioinspiration & Biomimetics, was conducted in the lab of John Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering, and builds on his previous work augmenting jellyfish. Dabiri’s goal with this research is to use jellyfish as robotic data-gatherers, sending them into the oceans to collect information about temperature, salinity, and oxygen levels, all of which are affected by Earth’s changing climate.

“It’s well known that the ocean is critical for determining our present and future climate on land, and yet, we still know surprisingly little about the ocean, especially away from the surface,” Dabiri says. “Our goal is to finally move that needle by taking an unconventional approach inspired by one of the few animals that already successfully explores the entire ocean.”

Throughout his career, Dabiri has looked to the natural world, jellyfish included, for inspiration in solving engineering challenges. This work began with early attempts by Dabiri’s lab to develop a mechanical robot that swam like jellyfish, which have the most efficient method for traveling through water of any living creature. Though his research team succeeded in creating such a robot, that robot was never able to swim as efficiently as a real jellyfish. At that point, Dabiri asked himself, why not just work with jellyfish themselves?

“Jellyfish are the original ocean explorers, reaching its deepest corners and thriving just as well in tropical or polar waters,” Dabiri says. “Since they don’t have a brain or the ability to sense pain, we’ve been able to collaborate with bioethicists to develop this biohybrid robotic application in a way that’s ethically principled.”

Previously, Dabiri’s lab implanted jellyfish with a kind of electronic pacemaker that controls the speed at which they swim. In doing so, they found that if they made jellyfish swim faster than the leisurely pace they normally keep, the animals became even more efficient. A jellyfish swimming three times faster than it normally would uses only twice as much energy.
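
To restate that efficiency claim as back-of-envelope arithmetic (my own illustration, using only the figures quoted above): tripling the speed while doubling the energy use cuts the energy spent per metre travelled to about two-thirds of normal,

```python
# Back-of-envelope check of the efficiency figures quoted above (not from the paper's code).
speed_factor = 3.0    # swimming speed relative to an unmodified jellyfish
energy_factor = 2.0   # energy use relative to an unmodified jellyfish

# Relative cost of transport: energy spent per unit distance travelled
relative_cost = energy_factor / speed_factor
print(f"Energy per metre: {relative_cost:.2f}x normal")  # ~0.67x, i.e., about a third less
```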

This time, the research team went a step further, adding what they call a forebody to the jellies. These forebodies are like hats that sit atop the jellyfish’s bell (the mushroom-shaped part of the animal). The devices were designed by graduate student and lead author Simon Anuszczyk (MS ’22), who aimed to make the jellyfish more streamlined while also providing a place where sensors and other electronics can be carried.

“Much like the pointed end of an arrow, we designed 3D-printed forebodies to streamline the bell of the jellyfish robot, reduce drag, and increase swimming performance,” Anuszczyk says. “At the same time, we experimented with 3D printing until we were able to carefully balance the buoyancy and keep the jellyfish swimming vertically.”

To test the augmented jellies’ swimming abilities, Dabiri’s lab undertook the construction of a massive vertical aquarium inside Caltech’s Guggenheim Laboratory. Dabiri explains that the three-story tank is tall, rather than wide, because researchers want to gather data on oceanic conditions far below the surface.

“In the ocean, the round trip from the surface down to several thousand meters will take a few days for the jellyfish, so we wanted to develop a facility to study that process in the lab,” Dabiri says. “Our vertical tank lets the animals swim against a flowing vertical current, like a treadmill for swimmers. We expect the unique scale of the facility—probably the first vertical water treadmill of its kind—to be useful for a variety of other basic and applied research questions.”

Swim tests conducted in the tank show that a jellyfish equipped with a combination of the swimming pacemaker and forebody can swim up to 4.5 times faster than an all-natural jelly while carrying a payload. The total cost is about $20 per jellyfish, Dabiri says, which makes biohybrid jellies an attractive alternative to renting a research vessel that can cost more than $50,000 a day to run.

“By using the jellyfish’s natural capacity to withstand extreme pressures in the deep ocean and their ability to power themselves by feeding, our engineering challenge is a lot more manageable,” Dabiri adds. “We still need to design the sensor package to withstand the same crushing pressures, but that device is smaller than a softball, making it much easier to design than a full submarine vehicle operating at those depths.

“I’m really excited to see what we can learn by simply observing these parts of the ocean for the very first time,” he adds.

Dabiri says future work may focus on further enhancing the bionic jellies’ abilities. Right now, they can only be made to swim faster in a straight line, such as the vertical paths being designed for deep ocean measurement. But further research may also make them steerable, so they can be directed horizontally as well as vertically.

The paper describing the work, “Electromechanical enhancement of live jellyfish for ocean exploration,” appears in Bioinspiration & Biomimetics (Volume 19, Number 2). Co-authors are Anuszczyk and Dabiri.

Funding for the research was provided by the National Science Foundation and the Charles Lee Powell Foundation.

Here’s a link to and a citation for the paper,

Electromechanical enhancement of live jellyfish for ocean exploration by Simon R Anuszczyk and John O Dabiri. Bioinspiration & Biomimetics, Volume 19, Number 2. DOI: 10.1088/1748-3190/ad277f. Published 28 February 2024.

This paper is open access.

Digi, Nano, Bio, Neuro – why should we care more about converging technologies?

Personality in focus: the convergence of biology and computer technology could make extremely sensitive data available. (Image: by-studio / AdobeStock) [downloaded from https://ethz.ch/en/news-and-events/eth-news/news/2024/05/digi-nano-bio-neuro-or-why-we-should-care-more-about-converging-technologies.html]

I gave a guest lecture some years ago where I mentioned that I thought the real issue with big data and AI (artificial intelligence) lay in combining them (or convergence). These days, it seems I was insufficiently imaginative as researchers from ETH Zurich have taken the notion much further.

From a May 7, 2024 ETH Zurich press release (also on EurekAlert),

In my research, I [Dirk Helbing, Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences and associated with the Department of Computer Science at ETH Zurich] deal with the consequences of digitalisation for people, society and democracy. In this context, it is also important to keep an eye on convergence in the computer and life sciences – i.e. what becomes possible as digital technologies increasingly merge with biotechnology, neurotechnology and nanotechnology.

Converging technologies are seen as a breeding ground for far-reaching innovations. However, they are blurring the boundaries between the physical, biological and digital worlds. Conventional regulations are becoming ineffective as a result.

In a joint study with my co-author Marcello Ienca, we recently examined the risks and societal challenges of technological convergence – and concluded that the effects for individuals and society are far-reaching.

We would like to draw attention to the challenges and risks of converging technologies and explain why we consider it necessary to accompany technological developments internationally with strict regulations.

For several years now, everyone has been able to observe, within the context of digitalisation, the consequences of leaving technological change to market forces alone without effective regulation.

Misinformation and manipulation on the web

The Digital Manifesto was published in 2015 – almost ten years ago. [1] Nine European experts, including one from ETH Zurich, issued an urgent warning against scoring, i.e. the evaluation of people, and big nudging, [2] a subtle form of digital manipulation. The latter is based on personality profiles created using cookies and other surveillance data. A little later, the Cambridge Analytica scandal alerted the world to how the data analysis company had been using personalised ads (microtargeting) in an attempt to manipulate voting behaviour in democratic elections.

This has brought democracies around the world under considerable pressure. Propaganda, fake news and hate speech are polarising and sowing doubt, while privacy is on the decline. We are in the midst of an international information war for control of our minds, in which advertising companies, tech corporations, secret services and the military are fighting to exert an influence on our mindset and behaviour. The European Union has adopted the AI Act in an attempt to curb these dangers.

However, digital technologies have developed at a breathtaking pace, and new possibilities for manipulation are already emerging. The merging of digital and nanotechnology with modern biotechnology and neurotechnology makes possible revolutionary applications that were hardly imaginable before.

Microrobots for precision medicine

In personalised medicine, for example, the advancing miniaturisation of electronics is making it increasingly possible to connect living organisms and humans with networked sensors and computing power. The WEF [World Economic Forum] proclaimed the “Internet of Bodies” as early as 2020. [3, 4]

One example that combines conventional medication with a monitoring function is digital pills. These could control medication and record a patient’s physiological data (see this blog post).

Experts expect sensor technology to reach the nanoscale. Magnetic nanoparticles or nanoelectronic components, i.e. tiny particles invisible to the naked eye with diameters of up to 100 nanometres, would make it possible to transport active substances, interact with cells and record vast amounts of data on bodily functions. If introduced into the body, it is hoped that diseases could be detected at an early stage and treated in a personalised manner. This is often referred to as high-precision medicine.

Nano-electrodes record brain function

Miniaturised electrodes that can simultaneously measure and manipulate the activity of thousands of neurons, coupled with ever-improving AI tools for the analysis of brain signals, are now leading to much-discussed advances in the brain-computer interface. Brain activity mapping is also on the agenda. Thanks to nano-neurotechnology, we could soon envisage smartphones and other AI applications being controlled directly by thoughts.

“Long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.” Dirk Helbing

Large-scale projects to map the human brain are also likely to benefit from this. [5] In future, brain activity mapping will not only be able to read our thoughts and feelings but may also make it possible to influence them remotely – the latter would probably be a lot more effective than previous manipulation methods like big nudging.

However, conventional electrodes are not suitable for permanent connection between cells and electronics – this requires durable and biocompatible interfaces. This has given rise to the suggestion of transmitting signals optogenetically, i.e. to control genes in special cells with light pulses. [6] This would make the implementation of amazing circuits possible (see this ETH News article [November 11, 2014 press release] “Controlling genes with thoughts”).

The downside of convergence

Admittedly, the applications mentioned above may sound futuristic; most of them are still visions or in early stages of development. However, a lot of research is being conducted worldwide and at full speed. The military is also interested in using converging technologies for its own purposes. [7, 8]

The downside of convergence is the considerable risks involved, such as state or private players gaining access to highly sensitive data and misusing it to monitor and influence people. The more connected our bodies become, the more vulnerable we will be to cybercrime and hacking. It cannot be ruled out that military applications exist already. [5] One thing is clear, however: long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.

“We need to regain control of our personal data. To do this, we need genuine informational self-determination.” Dirk Helbing

The problem is that existing regulations are specific and insufficient to keep technological convergence in check. But how are we to retain control over our lives if it becomes increasingly possible to influence our thoughts, feelings and decisions by digital means?

Converging global regulation is needed

In our recent paper we conclude that any regulation of converging technologies would have to be based on converging international regulations. Accordingly, we outline a new global regulatory framework and propose ten governance principles to close the looming regulatory gap. [9]

The framework emphasises the need for safeguards to protect bodily and mental functions from unauthorised interference and to ensure personal integrity and privacy by, for example, establishing neurorights.

To minimise risks and prevent abuse, future regulations should be inclusive, transparent and trustworthy. The principle of participatory governance is key, which would have to involve all the relevant groups and ensure that the concerns of affected minorities are also taken into account in decision-​making processes.

Finally, we need to regain control of our personal data. To accomplish this, we need genuine informational self-determination. This would also have to apply to the digital twins of our body and personality, because they can be used to hack our health and our way of thinking – for good or for bad. [10]

With our contribution, we would like to initiate public debate about converging technologies. Despite its major relevance, we believe that too little attention is being paid to this topic. Continuous discourse on benefits, risks and sensible rules can help to steer technological convergence in such a way that it serves people instead of harming them.

Dirk Helbing wrote this article together with Marcello Ienca, who previously worked at ETH Zurich and EPFL and is now Assistant Professor of Ethics of AI and Neuroscience at the Technical University of Munich.

References

[1] Digital-Manifest: Digitale Demokratie statt Datendiktatur (2015) Spektrum der Wissenschaft

[2] Sie sind das Ziel! (2024) Schweizer Monat

[3] The Internet of Bodies Is Here: Tackling new challenges of technology governance (2020) World Economic Forum

[4] Tracking how our bodies work could change our lives (2020) World Economic Forum

[5] Nanotools for Neuroscience and Brain Activity Mapping (2013) ACS Nano

[6] Innovationspotenziale der Mensch-Maschine-Interaktion (2016) Deutsche Akademie der Technikwissenschaften

[7] Human Augmentation – The Dawn of a New Paradigm. A strategic implications project (2021) UK Ministry of Defence

[8] Behavioural change as the core of warfighting (2017) Militaire Spectator

[9] Helbing D, Ienca M: Why converging technologies need converging international regulation (2024) Ethics and Information Technology

[10] Who is Messing with Your Digital Twin? Body, Mind, and Soul for Sale? Dirk Helbing TEDx Talk (2023)

Here’s a second link to and citation for the paper,

Why converging technologies need converging international regulation by Dirk Helbing & Marcello Ienca. Ethics and Information Technology, Volume 26, article number 15 (2024). DOI: 10.1007/s10676-024-09756-8. Published: 28 February 2024.

This paper is open access.

Chatbot with expertise in nanomaterials

This December 1, 2023 news item on phys.org starts with a story,

A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.

Kevin Yager—leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory—has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.

A December 1, 2023 DOE/Brookhaven National Laboratory news release by Denise Yazak (also on EurekAlert), which originated the news item, describes a research project with a chatbot that has nanomaterial-specific knowledge, Note: Links have been removed,

Rapid advances in AI and ML have given way to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots—based on large, diverse language models—lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.

Rise of the Robots

“CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time. Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II).” NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.

At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways—from breaking down language barriers to saving time by summarizing larger pieces of work.

Watching Your Language

To build a specialized chatbot, the program required domain-specific text—language taken from areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning using trusted facts.

To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.

“A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ‘embedding,’ a way of categorizing and linking information quickly behind the scenes.”

Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, it’s also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.
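
For readers who like to see the mechanics, here’s a minimal sketch of that embedding-and-lookup step in Python. This is my own illustration, not code from the CFN repository; the embed() function is a placeholder for whichever embedding model a given system uses, and the chunk vectors are assumed to be precomputed, as described above,

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for an embedding model that maps text to a fixed-length vector."""
    raise NotImplementedError("plug in an embedding model here")

def retrieve(question: str, chunks: list[str], chunk_vectors: np.ndarray, k: int = 3) -> list[str]:
    """Return the k text chunks most semantically similar to the question."""
    q = embed(question)
    # Cosine similarity between the question vector and each precomputed chunk vector
    sims = (chunk_vectors @ q) / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(sims)[::-1][:k]  # indices of the k most similar chunks
    return [chunks[i] for i in top]
```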

The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model (an expansive program that creates text modeled on natural human language), which generates the final response. The embedding ensures that the text being pulled is relevant in the context of the user’s question. By providing text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
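
Continuing the sketch above (again, an illustrative pattern rather than the published implementation), the retrieved snippets and the user’s question are then folded into a single grounded prompt,

```python
def build_prompt(question: str, snippets: list[str]) -> str:
    """Combine retrieved text snippets and the user's question into a grounded prompt."""
    context = "\n\n---\n\n".join(snippets)
    return (
        "Answer the question using only the excerpts below, citing them where possible. "
        "If the excerpts are insufficient, say so rather than guessing.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )

# Usage: prompt = build_prompt(user_question, retrieve(user_question, chunks, chunk_vectors))
# The assembled prompt is then sent to the large language model for the final response.
```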

“The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”

Bots Empowering Humans

CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.

“There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

For researchers interested in trying this software out for themselves, the source code for CFN’s chatbot and associated tools can be found in this GitHub repository.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Here’s a link to and a citation for the paper,

Domain-specific chatbots for science using embeddings by Kevin G. Yager. Digital Discovery, 2023, 2, 1850-1861. DOI: https://doi.org/10.1039/D3DD00112A. First published 10 Oct 2023.

This paper appears to be open access.

Living technology possibilities

Before launching into the possibilities, here are two descriptions of ‘living technology’ from the European Centre for Living Technology’s (ECLT) homepage,

Goals

Promote, carry out and coordinate research activities and the diffusion of scientific results in the field of living technology. The scientific areas for living technology are the nano-bio-technologies, self-organizing and evolving information and production technologies, and adaptive complex systems.

History

Founded in 2004, the European Centre for Living Technology is an international and interdisciplinary research centre established as an inter-university consortium, currently involving 18 European and extra-European institutional affiliates.

The Centre is devoted to the study of technologies that exhibit life-like properties including self-organization, adaptability and the capacity to evolve.

Despite the reference to “nano-bio-technologies,” this October 11, 2023 news item on ScienceDaily focuses on microscale living technology,

In a recent article in the high-profile journal “Advanced Materials,” researchers in Chemnitz show just how close and necessary the transition to sustainable living technology is, based on the morphogenesis of self-assembling microelectronic modules, strengthening the recent membership of Chemnitz University of Technology in the European Centre for Living Technology (ECLT) in Venice.

An October 11, 2023 Chemnitz University of Technology (Technische Universität Chemnitz; TU Chemnitz) press release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

It is now apparent that the mass-produced artefacts of technology in our increasingly densely populated world – whether electronic devices, cars, batteries, phones, household appliances, or industrial robots – are increasingly at odds with the sustainable bounded ecosystems achieved by living organisms based on cells over millions of years. Cells provide organisms with soft and sustainable environmental interactions with complete recycling of material components, except in a few notable cases like the creation of oxygen in the atmosphere, and of the fossil fuel reserves of oil and coal (as a result of missing biocatalysts). However, the fantastic information content of biological cells (gigabits of information in DNA alone) and the complexities of protein biochemistry for metabolism seem to place a cellular approach well beyond the current capabilities of technology, and prevent the development of intrinsically sustainable technology.

SMARTLETs: tiny shape-changing modules that collectively self-organize to larger more complex systems

A recent perspective review published in the very high impact journal Advanced Materials this month [October 2023] by researchers at the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) of Chemnitz University of Technology, shows how a novel form of high-information-content Living Technology is now within reach, based on microrobotic electronic modules called SMARTLETs, which will soon be capable of self-assembling into complex artificial organisms. The research belongs to the new field of Microelectronic Morphogenesis, the creation of form under microelectronic control, and builds on work over the previous years at Chemnitz University of Technology to construct self-folding and self-locomoting thin film electronic modules, now carrying tiny silicon chiplets between the folds, for a massive increase in information processing capabilities. Sufficient information can now be stored in each module to encode not only complex functions but fabrication recipes (electronic genomes) for clean rooms to allow the modules to be copied and evolved like cells, but safely because of the gating of reproduction through human operated clean room facilities.

Electrical self-awareness during self-assembly

In addition, the chiplets can provide neuromorphic learning capabilities, allowing them to improve performance during operation. A further key feature of the specific self-assembly of these modules, based on matching physical bar codes, is that electrical and fluidic connections can be achieved between modules. These can then be employed to make the electronic chiplets on board “aware” of the state of assembly and of potential errors, allowing them to direct repair, correct mis-assembly, induce disassembly and form collective functions spanning many modules. Such functions include extended communication (antennae), power harvesting and redistribution, remote sensing, material redistribution, etc.
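
To make the “matching physical bar codes” idea concrete, here’s a toy sketch (my own illustration, not code from the Chemnitz group; all names here are made up): each module carries a binding code, two modules dock only when their codes are complementary, and every successful or rejected docking updates the modules’ own record of their assembly state,

```python
from dataclasses import dataclass, field

@dataclass
class Smartlet:
    """Toy model of a self-assembling microelectronic module (illustrative only)."""
    module_id: str
    binding_code: str                      # stands in for the physical 'bar code'
    neighbors: list = field(default_factory=list)

    def can_dock(self, other: "Smartlet") -> bool:
        # Toy matching rule: codes must be exact complements (0s and 1s swapped)
        complement = other.binding_code.translate(str.maketrans("01", "10"))
        return self.binding_code == complement

    def dock(self, other: "Smartlet") -> bool:
        """Attempt assembly; on success both modules record the connection,
        giving the onboard chiplets 'awareness' of the assembly state."""
        if self.can_dock(other):
            self.neighbors.append(other.module_id)
            other.neighbors.append(self.module_id)
            return True
        return False  # mismatch detected: a real module could trigger repair or disassembly

# Usage: only complementary codes assemble
a = Smartlet("A1", "0110")
b = Smartlet("B1", "1001")
c = Smartlet("C1", "1111")
print(a.dock(b))  # True  -> codes match, connection recorded on both modules
print(a.dock(c))  # False -> mis-assembly rejected
```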

So why is this technology vital for sustainability?

The complete digital fab description for modules, for which actually only a limited number of types are required even for complex organisms, allows their material content, responsible originator and environmentally relevant exposure all to be read out. Prof. Dagmar Nuissl-Gesmann from the Law Department at Chemnitz University of Technology observes that “this fine-grained documentation of responsibility intrinsic down to microscopic scales will be a game changer in allowing legal assignment of environmental and social responsibility for our technical artefacts”.

Furthermore, the self-locomotion and self-assembly-disassembly capabilities allow the modules to self-sort for recycling. Modules can be regained, reused, reconfigured, and redeployed in different artificial organisms. If they are damaged, then their limited and documented types facilitate efficient custom recycling of materials with established and optimized protocols for these sorted and now identical entities. These capabilities complement the other more obvious advantages in terms of design development and reuse in this novel reconfigurable medium. As Prof. Marlen Arnold, an expert in Sustainability of the Faculty of Economics and Business Administration, observes, “Even at high volumes of deployment use, these properties could provide this technology with a hitherto unprecedented level of sustainability which would set the bar for future technologies to share our planet safely with us.”

Contribution to European Living Technology

“This research is a first contribution of MAIN/Chemnitz University of Technology, as a new member of the European Centre for Living Technology (ECLT), based in Venice,” says Prof. Oliver G. Schmidt, Scientific Director of the Research Center MAIN, adding that “It’s fantastic to see that our deep collaboration with ECLT is paying off so quickly with immediate transdisciplinary benefit for several scientific communities.” “Theoretical research at the ECLT has been urgently in need of novel technology systems able to implement the core properties of living systems,” comments Prof. John McCaskill, coauthor of the paper and a founding director of the ECLT in 2004.

Here’s a link to and a citation for the researchers’ perspective paper,

Microelectronic Morphogenesis: Smart Materials with Electronics Assembling into Artificial Organisms by John S. McCaskill, Daniil Karnaushenko, Minshen Zhu, Oliver G. Schmidt. Advanced Materials. DOI: https://doi.org/10.1002/adma.202306344. First published: 09 October 2023.

This paper is open access.

XoMotion, an exoskeleton developed in Canada, causes commotion

I first stumbled across these researchers in 2016 when their project was known as “Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE).” In my January 20, 2016 posting, “#BCTECH: being at the Summit (Jan. 18-19, 2016),” about an event put on by the province of British Columbia (BC, Canada) and the BC Innovation Council (BCIC), I visited a number of booths and talks at the #BCTECH Summit and had this to say about WLLAE,

“The Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE) – a lightweight, battery-operated and ergonomic robotic system to help those with mobility issues improve their lives. The exoskeleton features joints and links that correspond to those of a human body and sync with motion. SFU has designed, manufactured and tested a proof-of-concept prototype and the current version can mimic all the motions of hip joints.” The researchers (Siamak Arzanpour and Edward Park) pointed out that the ability to mimic all the motions of the hip is a big difference between their system and others which only allow the leg to move forward or back. They rushed the last couple of months to get this system ready for the Summit. In fact, they received their patent for the system the night before (Jan. 17, 2016) the Summit opened.

Unfortunately, there aren’t any pictures of WLLAE yet and the proof-of-concept version may differ significantly from the final version. This system could be used to help people regain movement (paralysis/frail seniors) and I believe there’s a possibility it could be used to enhance human performance (soldiers/athletes). The researchers still have some significant hoops to jump before getting to the human clinical trial stage. They need to refine their apparatus, ensure that it can be safely operated, and further develop the interface between human and machine. I believe WLLAE is considered a neuroprosthetic device. While it’s not a fake leg or arm, it enables movement (prosthetic) and it operates on brain waves (neuro). It’s a very exciting area of research, consequently, there’s a lot of international competition. [ETA January 3, 2024: I’m pretty sure I got the neuroprosthetic part wrong]

Time moved on and there was a name change and then there was this November 10, 2023 article by Jeremy Hainsworth for the Vancouver is Awesome website,

Vancouver-based fashion designer Chloe Angus thought she’d be in a wheelchair for the rest of her life after being diagnosed with an inoperable benign tumour in her spinal cord in 2015, resulting in permanent loss of mobility in her legs.

Now, however, she’s been using a state-of-the-art robotic exoskeleton known as XoMotion that can help physically disabled people self-balance, walk, sidestep, climb stairs and crouch.

“The first time I walked with the exoskeleton was a jaw-dropping experience,” said Angus. “After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

She added the exoskeleton has the potential to completely change the world for people with motion disabilities.

XoMotion is the result of a decade of research and the product of a Simon Fraser University spinoff company, Human in Motion Robotics (HMR) Inc. It’s the brainchild of professors Siamak Arzanpour and Edward Park.

Arzanpour and Park, both researchers in the Burnaby-based university’s School of Mechatronic Systems Engineering, began work on the device in 2014. They had a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement.

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” Arzanpour said.

A November 15, 2023 article (with an embedded video) by Amy Judd & Alissa Thibault for Global News (television) highlights Angus’s story,

SFU professors Siamak Arzanpour and Edward Park wanted to help people with motion disabilities to walk freely, naturally and independently.

The exoskeleton [XoMotion] is now the most advanced of its kind in the world.

Chloe Angus, who lost her mobility in her legs in 2015, now works for the team.

She said the exoskeleton makes her feel like herself again.

She was diagnosed with an inoperable benign tumor in her spinal cord in 2015 which resulted in a sudden and permanent loss of mobility in her legs. At the time, doctors told Angus that she would need a wheelchair to move for the rest of her life.

Now she is part of the project and defying all odds.

“After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

There’s a bit more information in the November 8, 2023 Simon Fraser University (SFU) news release (which has the same embedded video as the Global News article) by Ray Sharma,

The state-of-the-art robotic exoskeleton known as XoMotion is the result of a decade of research and the product of an SFU spin off company, Human in Motion Robotics (HMR) Inc. The company has recently garnered millions in investment, an overseas partnership and a suite of new offices in Vancouver.

XoMotion allows individuals with mobility challenges to stand up and walk on their own, without additional support. When in use, XoMotion maintains its stability and simultaneously encompasses all the ranges of motion and degrees of freedom needed for users to self-balance, walk, sidestep, climb stairs, crouch, and more. 

Sensors within the lower-limb exoskeleton mimic the human body’s sense of logic to identify structures along the path and, in turn, generate a fully balanced motion.

SFU professors Siamak Arzanpour and Edward Park, both researchers in the School of Mechatronic Systems Engineering, began work on the device in 2014 with a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement. 

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” says Arzanpour. 

The SFU professors, who first met in 2001 as graduate students at the University of Toronto, co-founded HMR in 2016, bringing together a group of students, end-users, therapists, and organizations to build upon the exoskeleton. Currently, 70 per cent of HMR employees are SFU graduates. 

In recent years, HMR has garnered multiple streams of investment, including a contract with Innovative Solutions Canada, and $10 million in funding during their Series A round in May, including an $8 million investment and strategic partnership from Beno TNR, a prominent Korean technology investment firm.

You can find the Human in Motion Robotics (HMR) website here.

FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was and is: catching up. On the plus side, my 2023 backlog (roughly six months) to be published was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it’s polluting, it’s affecting the environment, etc. Somehow, the part where electricity, which underpins so much of our ‘smart’ society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s an October 16, 2023 posting “The cost of building ChatGPT” that gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.
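
For anyone who wants to check the swimming-pool comparison quoted above, the arithmetic works out (assuming the common approximation that an Olympic-size pool holds about 660,000 US gallons, roughly 2,500 cubic metres),

```python
# Sanity check on the figures quoted above (pool volume is an assumed approximation).
total_gallons = 1.7e9            # Microsoft's reported 2022 water consumption
olympic_pool_gallons = 660_000   # ~2,500 m^3 per Olympic-size pool

print(total_gallons / olympic_pool_gallons)  # ~2575 pools, consistent with "more than 2,500"
```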

This next posting would usually be included with my other art/sci postings but it touches on the issues. My October 13, 2023 posting covers Toronto’s Art/Sci Salon events; in particular, there’s the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper “The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions” co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts, with an emphasis on brainlike computing, that attest to it,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative nuclear energy, fusion, which is considered ‘green’ or at least greener. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI Act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation with its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too, as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium) was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. To give you an idea of how difficult it was to get attention for this issue, there’s a September 1, 2017 article by John Rasko and Carl Power for the Guardian illustrating the issue. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And, Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more (“The Biggest Lie in Medical History,” 2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose work on the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list’, started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was, art/sci-wise, with events and other projects, until I reviewed this year’s postings. This is a selection from 2023 but there’s a lot more on the blog; just use the search terms “art/sci,” “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year I subtitled this section ‘Aliens on earth: machinic biology and/or biological machinery?’ Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget in which military expenditures were to be increased, something that could have implications for our science and technology research.

Then things changed as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) news online website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists each received €1B, to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report concerning the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase of interest in commercializing brain and brainlike engineering technologies, as well as more discussion about ethics.

Colonizing the brain?

UNESCO held events such as the one noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report,” and the one noted in my July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology,” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research; see my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone; there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs,” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023,” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though it has a YouTube channel featuring its other events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting but there is a great deal of interest in quantum computing, both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout-out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Shape-changing speaker (aka acoustic swarms) for sound control

To alleviate any concerns: these swarms are no kin to the swarms in Michael Crichton’s 2002 novel, Prey, or his 2011 novel, Micro (published after his death).

A September 21, 2023 news item on ScienceDaily announces this ‘acoustic swarm’ research,

In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.

The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.

A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, each about an inch in diameter, the microphones automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.

The team published its findings Sept. 21 [2023] in Nature Communications.

A September 21, 2023 University of Washington (state) news release (also on EurekAlert), which originated the news item, delves further into the work, Note: Links have been removed,

“If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”

Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.

The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible since greater distances make differentiating and locating people speaking easier. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.

“If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” said co-lead author Tuochao Chen, a UW doctoral student in the Allen School. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room.”

The team tested the robots in offices, living rooms and kitchens with groups of three to five people speaking. Across all these environments, the system could discern different voices within 1.6 feet (50 centimeters) of each other 90% of the time, without prior information about the number of speakers. The system was able to process three seconds of audio in 1.82 seconds on average — fast enough for live streaming, though a bit too long for real-time communications such as video calls.

As the technology progresses, researchers say, acoustic swarms might be deployed in smart homes to better differentiate people talking with smart speakers. That could potentially allow only people sitting on a couch, in an “active zone,” to vocally control a TV, for example.

Researchers plan to eventually make microphone robots that can move around rooms, instead of being limited to tables. The team is also investigating whether the speakers can emit sounds that allow for real-world mute and active zones, so people in different parts of a room can hear different audio. The current study is another step toward science fiction technologies, such as the “cone of silence” in “Get Smart” and “Dune,” the authors write.

Of course, any technology that evokes comparison to fictional spy tools will raise questions of privacy. Researchers acknowledge the potential for misuse, so they have included guards against this: The microphones navigate with sound, not an onboard camera like other similar systems. The robots are easily visible and their lights blink when they’re active. Instead of processing the audio in the cloud, as most smart speakers do, the acoustic swarms process all the audio locally, as a privacy constraint. And even though some people’s first thoughts may be about surveillance, the system can be used for the opposite, the team says.

“It has the potential to actually benefit privacy, beyond what current smart speakers allow,” Itani said. “I can say, ‘Don’t record anything around my desk,’ and our system will create a bubble 3 feet around me. Nothing in this bubble would be recorded. Or if two groups are speaking beside each other and one group is having a private conversation, while the other group is recording, one conversation can be in a mute zone, and it will remain private.”

Takuya Yoshioka, a principal research manager at Microsoft, is a co-author on this paper, and Shyam Gollakota, a professor in the Allen School, is a senior author. The research was funded by a Moore Inventor Fellow award.
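
Chen’s description of time-delayed arrivals is the classic cue for locating sound sources. For the technically inclined, here’s a minimal sketch (my own illustration in Python, not the team’s code; all names are invented) of estimating a time difference of arrival between two microphones by cross-correlation,

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, sample_rate):
    # Cross-correlate the two microphone signals; the lag with the
    # highest correlation indicates how much later the sound reached
    # sig_a than sig_b (a positive result means sig_a heard it later).
    correlation = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(correlation) - (len(sig_b) - 1)
    return lag / sample_rate  # in seconds

# Toy example: the same noise burst arriving at two mics 1 ms apart.
sample_rate = 16_000
rng = np.random.default_rng(0)
burst = rng.standard_normal(1_600)  # 0.1 s of sound at 16 kHz
delay = 16  # samples, i.e., 1 ms at 16 kHz
mic_near = np.concatenate([burst, np.zeros(delay)])
mic_far = np.concatenate([np.zeros(delay), burst])
print(estimate_tdoa(mic_far, mic_near, sample_rate))  # ~0.001
```

At the speed of sound (roughly 343 m/s), that 1 ms difference corresponds to about 34 cm of extra travel, which is how arrival times translate into positions. The team’s neural networks work from the same underlying cue, though in a far more sophisticated way.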

Two of the paper’s authors, Malek Itani and Tuochao Chen, have written a ‘Behind the Paper’ article for Nature.com’s Electrical and Electronic Engineering Community, from their September 21, 2023 posting,

Sound is a versatile medium. In addition to being one of the primary means of communication for us humans, it serves numerous purposes for organisms across the animal kingdom. Particularly, many animals use sound to localize themselves and navigate in their environment. Bats, for example, emit ultrasonic sound pulses to move around and find food in the dark. Similar behavior can be observed in Beluga whales to avoid obstacles and locate one another.

Various animals also have a tendency to cluster together into swarms, forming a unit greater than the sum of its parts. Famously, bees agglomerate into swarms to more efficiently search for a new colony. Birds flock to evade predators. These behaviors have caught the attention of scientists for quite some time, inspiring a handful of models for crowd control, optimization and even robotics. 

A key challenge in building robot swarms for practical purposes is the ability for the robots to localize themselves, not just within the swarm, but also relative to other important landmarks. …
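
That localization challenge, figuring out where each robot sits relative to known reference points, is often solved by ranging. As a toy illustration (again my own sketch, not the swarm’s published method), here’s classic least-squares multilateration: given distances to a few known anchors, solve for position,

```python
import numpy as np

def multilaterate(anchors, distances):
    # Estimate a 2-D position from distances to known anchor points.
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system solvable by least squares.
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Toy check: a robot at (0.3, 0.7) ranging to three fixed anchors.
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = np.array([0.3, 0.7])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(multilaterate(anchors, dists))  # approximately [0.3, 0.7]
```

In an acoustic swarm, those distances would come from measured sound travel times rather than being handed to the function directly.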

Here’s a link to and a citation for the paper,

Creating speech zones with self-distributing acoustic swarms by Malek Itani, Tuochao Chen, Takuya Yoshioka & Shyamnath Gollakota. Nature Communications volume 14, Article number: 5684 (2023) DOI: https://doi.org/10.1038/s41467-023-40869-8 Published: 21 September 2023

This paper is open access.

Robot that can maneuver through living lung tissue

Caption: Overview of the semiautonomous medical robot’s three stages in the lungs. Credit: Kuntz et al.

This looks like one robot operating on another robot; I guess the researchers want to emphasize the fact that this autonomous surgical procedure isn’t currently being tested on human beings.

There’s more in a September 21, 2023 news item on ScienceDaily,

Scientists have shown that their steerable lung robot can autonomously maneuver the intricacies of the lung, while avoiding important lung structures.

Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.

Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.

Thankfully there’s a September 21, 2023 University of North Carolina (UNC) news release (also on EurekAlert), which originated the news item, to provide more information, Note: Links have been removed,

“This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”

The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.

The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.

As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.

To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Using this 3-D model and once the needle has been positioned for launch, their AI-driven software instructs it to automatically travel from “Point A” to “Point B” while avoiding important structures.

“The autonomous steerable needle we’ve developed is highly compact, but the system is packed with a suite of technologies that allow the needle to navigate autonomously in real-time,” said Alterovitz, the principal investigator on the project and senior author on the paper. “It’s akin to a self-driving car, but it navigates through lung tissue, avoiding obstacles like significant blood vessels as it travels to its destination.”

The needle can also account for respiratory motion. Unlike other organs, the lungs are constantly expanding and contracting in the chest cavity. This can make targeting especially difficult in a living, breathing subject. According to Akulian, it’s like shooting at a moving target.

The researchers tested their robot while the laboratory model performed intermittent breath holding. Every time the subject’s breath is held, the robot is programmed to move forward.

“There remain some nuances in terms of the robot’s ability to acquire targets and then actually get to them effectively,” said Akulian, who is also a member of the UNC Lineberger Comprehensive Cancer Center, “and while there’s still a lot of work to be done, I’m very excited about continuing to push the boundaries of what we can do for patients with the world-class experts that are here.”

“We plan to continue creating new autonomous medical robots that combine the strengths of robotics and AI to improve medical outcomes for patients facing a variety of health challenges while providing guarantees on patient safety,” added Alterovitz.
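
For a sense of what travelling from “Point A” to “Point B” while avoiding important structures involves computationally, here’s a deliberately simple sketch of collision-free search on a 3-D occupancy grid. It’s my own illustration; the team’s actual planner must also respect the steerable needle’s curvature constraints and is far more sophisticated,

```python
from collections import deque

# 6-connected steps: one voxel along each axis.
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def plan_path(occupied, shape, start, goal):
    # Breadth-first search over free voxels; `occupied` is a set of
    # voxels standing in for airways and blood vessels to be avoided.
    came_from = {start: None}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current == goal:  # reconstruct the route back to the start
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dx, dy, dz in MOVES:
            nxt = (current[0] + dx, current[1] + dy, current[2] + dz)
            if (all(0 <= c < s for c, s in zip(nxt, shape))
                    and nxt not in occupied and nxt not in came_from):
                came_from[nxt] = current
                queue.append(nxt)
    return None  # no collision-free route exists

# Toy volume: a 5x5x5 grid with a "vessel" wall at x=2 and a single gap.
wall = {(2, y, z) for y in range(5) for z in range(5)} - {(2, 4, 4)}
print(plan_path(wall, (5, 5, 5), start=(0, 0, 0), goal=(4, 0, 0)))
```

The printed route threads through the one gap in the wall. In the real system, the ‘grid’ comes from the CT-derived model, and the plan has to contend with breathing motion, which the team handles with intermittent breath holds.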

Here’s a link to and a citation for the paper,

Autonomous medical needle steering in vivo by Alan Kuntz, Maxwell Emerson, Tayfun Efe Ertop, Inbar Fried, Mengyu Fu, Janine Hoelscher, Margaret Rox, Jason Akulian, Erin A. Gillaspie, Yueh Z. Lee, Fabien Maldonado, Robert J. Webster III, and Ron Alterovitz. Science Robotics 20 Sep 2023 Vol 8, Issue 82 DOI: 10.1126/scirobotics.adf7614

This paper is behind a paywall.

An artificial, multisensory integrated neuron makes AI (artificial intelligence) smarter

More brainlike (neuromorphic) computing but this time, it’s all about the senses. From a September 15, 2023 news item on ScienceDaily, Note: A link has been removed,

The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.

Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work today (Sept. 15 [2023]) in Nature Communications.

A September 12, 2023 Pennsylvania State University (Penn State) news release (also on EurekAlert but published September 15, 2023) by Ashley WennersHerron, which originated the news item, provides more detail about the research,

“Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

“This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It’s the equivalent of seeing an “on” light on the stove and feeling heat coming off of a burner — seeing the light on doesn’t necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand’s response. In this case, the researchers measured the artificial neuron’s version of this by observing the signaling outputs that resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimuli were encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor — or a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron — simulated as electrical output — increased when both visual and tactile signals were weak.

“Interestingly, this effect resonates remarkably well with its biological counterpart — a visual memory naturally enhances the sensitivity to tactile stimulus,” said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. “When cues are weak, you need to combine them to better understand the information, and that’s what we saw in the results.”

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

“The super additive summation of weak visual and tactile cues is the key accomplishment of our research,” said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. “For this work, we only looked into two senses. We’re working to identify the proper scenario to incorporate more senses and see what benefits they may offer.”

Harikrishnan Ravichandran, a fourth-year doctoral student in engineering science and mechanics at Penn State, also co-authored this paper.

The Army Research Office and the National Science Foundation supported this work.
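
For readers who like to see a mechanism in miniature, the ‘super additive summation’ of weak cues can be mimicked in software with any neuron model that has a threshold nonlinearity: two inputs that individually do little can, together, push the neuron past its threshold. Here’s a toy sketch (mine; the parameter values are arbitrary, and the real device does this in hardware),

```python
import numpy as np

def neuron_response(visual, tactile, threshold=1.0, gain=6.0):
    # Sigmoidal output: weak inputs below threshold barely register,
    # but their sum can cross threshold and produce a large response.
    drive = visual + tactile
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

weak = 0.45  # each cue alone sits well below the threshold
v_only = neuron_response(weak, 0.0)
t_only = neuron_response(0.0, weak)
both = neuron_response(weak, weak)
print(f"visual alone:  {v_only:.3f}")
print(f"tactile alone: {t_only:.3f}")
print(f"together:      {both:.3f} (sum of parts: {v_only + t_only:.3f})")
```

With these numbers, the combined response (about 0.35) is several times larger than the sum of the individual responses (about 0.07), which is the super-additive behaviour the researchers report in their device.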

Here’s a link to and a citation for the paper,

A bio-inspired visuotactile neuron for multisensory integration by Muhtasim Ul Karim Sadaf, Najam U Sakib, Andrew Pannone, Harikrishnan Ravichandran & Saptarshi Das. Nature Communications volume 14, Article number: 5729 (2023) DOI: https://doi.org/10.1038/s41467-023-40686-z Published: 15 September 2023

This paper is open access.