Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that, while attention-getting, is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question and answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,
With the rapid development and proliferation of AI tools come significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.
These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights.
Why launch this course?
BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.
NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.
What are robot rights?
BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.
We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.
For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?
In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.
There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.
What is the status of robot rights in Canada?
BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.
The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.
Ontario is the only province to have adopted legislation regulating AI use and responsibility, specifically a bill which regulates AI use within the public sector, but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.
There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI act, and the blueprint for an AI Bill of Rights in the U.S.
Interview language(s): English
I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,
We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.
We will tackle topics such as:
· Deepfakes, cyberattacks, and autonomous vehicles
· Access to justice: will AI enhance it or deepen inequality?
· Risk assessment algorithms
· AI tools in legal practice
· Critical and Indigenous perspectives on AI
· The future of AI, including legal personality, legal rights and criminal responsibility for AI
This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.
“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”
Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023).
An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:
“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”
– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering
“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.”
– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School
Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:
“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”
– Nathan Cheung, UBC law student
“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”
– Isabelle Sweeney, UBC law student
Key Dates:
October 29, 2024: UBC AI & Criminal Justice Initiative launches
November 19, 2024: AI & Criminal Justice: Primer released
January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
AI & Criminal Justice: Cases and Commentary released
Launch of new AI & Criminal Justice Seminar
Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
April 2, 2025: AI & Law Student Symposium (hybrid) – More Info & RSVP
Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A, “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed,” should now read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.
First, thank you to anyone who’s dropped by to read any of my posts. Second, I didn’t quite catch up on my backlog in what was then the new year (2024) despite my promises. (sigh) I will try to publish my drafts in a more timely fashion but I start this coming year as I did 2024 with a backlog of two to three months. This may be my new normal.
As for now, here’s an overview of FrogHeart’s 2024. The posts that follow are loosely organized under a heading but many of them could fit under other headings as well. After my informal review, there’s some material on foretelling the future as depicted in an exhibition, “Oracles, Omens and Answers,” at the Bodleian Libraries, University of Oxford.
Human enhancement: prosthetics, robotics, and more
Within a year or two of starting this blog, I created a tag ‘machine/flesh’ to organize information about a number of converging technologies, such as robotics, brain implants, and prosthetics, that could alter our concepts of what it means to be human. The larger category of human enhancement functions in much the same way, allowing a greater range of topics to be covered.
Here are some of the 2024 human enhancement and/or machine/flesh stories on this blog,
As for anyone who’s curious about hydrogels, there’s this from an October 20, 2016 article by D.C. Demetre for ScienceBeta, Note: A link has been removed,
Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%.
It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.
These days hydrogels can be found in many everyday products, from disposable nappies and soft contact lenses to plant-water crystals. But the history of hydrogels for medical applications started in the 1960s.
Scientists developed artificial materials with the ambitious goal of using them in permanent contact applications, ones that are implanted in the body permanently.
For anyone who wants a more technical explanation, there’s the Hydrogel entry on Wikipedia.
Science education and citizen science
Where science education is concerned, I’m seeing some innovative approaches to teaching science, which can include citizen science. As for citizen science (also known as participatory science), I’ve been noticing heightened interest at all age levels.
It’s been another year where artificial intelligence (AI) has absorbed a lot of energy from nearly everyone. I’m highlighting the more unusual AI stories I’ve stumbled across,
As you can see, I’ve tucked in two tangentially related stories: one references a neuromorphic computing story (see my Neuromorphic engineering category or search for ‘memristors’ in the blog search engine for more on brain-like computing topics) and the other concerns intellectual property. There are many, many more stories on these topics.
Art/science (or art/sci or sciart)
It’s a bit of a surprise to see how many art/sci stories were published here this year, although some might be better described as art/tech stories.
There may be more 2024 art/sci stories but the list was getting long. In addition to searching for art/sci on the blog search engine, you may want to try data sonification too.
Moving off planet to outer space
This is not a big interest of mine but there were a few stories,
I expect to be delighted, horrified, thrilled, and left shaking my head by science stories in 2025. Year after year the world of science reveals a world of wonder.
More mundanely, I can state with some confidence that my commentary (mentioned in the future-oriented subsection of my 2023 review and 2024 look forward) on Quantum Potential, a 2023 report from the Council of Canadian Academies, will be published early in this new year as I’ve almost finished writing it.
Some questions are hard to answer and always have been. Does my beloved love me back? Should my country go to war? Who stole my goats?
Questions like these have been asked of diviners around the world throughout history – and still are today. From astrology and tarot to reading entrails, divination comes in a wide variety of forms.
Yet they all address the same human needs. They promise to tame uncertainty, help us make decisions or simply satisfy our desire to understand.
Anthropologists and historians like us study divination because it sheds light on the fears and anxieties of particular cultures, many of which are universal. Our new exhibition at Oxford’s Bodleian Library, Oracles, Omens & Answers, explores these issues by showcasing divination techniques from around the world.
…
1. Spider divination
In Cameroon, Mambila spider divination (ŋgam dù) addresses difficult questions to spiders or land crabs that live in holes in the ground.
Asking the spiders a question involves covering their hole with a broken pot and placing a stick, a stone and cards made from leaves around it. The diviner then asks a question in a yes or no format while tapping the enclosure to encourage the spider or crab to emerge. The stick and stone represent yes or no, while the leaf cards, which are specially incised with certain meanings, offer further clarification.
…
2. Palmistry
Reading people’s palms (palmistry) is well known as a fairground amusement, but serious forms of this divination technique exist in many cultures. The practice of reading the hands to gather insights into a person’s character and future was used in many ancient cultures across Asia and Europe.
In some traditions, the shape and depth of the lines on the palm are richest in meaning. In others, the size of the hands and fingers are also considered. In some Indian traditions, special marks and symbols appearing on the palm also provide insights.
Palmistry experienced a huge resurgence in 19th-century England and America, just as the science of fingerprints was being developed. If you could identify someone from their fingerprints, it seemed plausible to read their personality from their hands.
…
3. Bibliomancy
If you want a quick answer to a difficult question, you could try bibliomancy. Historically, this DIY [do-it-yourself] divining technique was performed with whatever important books were on hand.
Throughout Europe, the works of Homer or Virgil were used. In Iran, it was often the Divan of Hafiz, a collection of Persian poetry. In Christian, Muslim and Jewish traditions, holy texts have often been used, though not without controversy.
…
4. Astrology
Astrology exists in almost every culture around the world. As far back as ancient Babylon, astrologers have interpreted the heavens to discover hidden truths and predict the future.
…
5. Calendrical divination
Calendars have long been used to divine the future and establish the best times to perform certain activities. In many countries, almanacs still advise auspicious and inauspicious days for tasks ranging from getting a haircut to starting a new business deal.
In Indonesia, Hindu almanacs called pawukon [calendar] explain how different weeks are ruled by different local deities. The characteristics of the deities mean that some weeks are better than others for activities like marriage ceremonies.
6 December 2024 – 27 April 2025 ST Lee Gallery, Weston Library
The Bodleian Libraries’ new exhibition, Oracles, Omens and Answers, will explore the many ways in which people have sought answers in the face of the unknown across time and cultures. From astrology and palm reading to weather and public health forecasting, the exhibition demonstrates the ubiquity of divination practices, and humanity’s universal desire to tame uncertainty, diagnose present problems, and predict future outcomes.
Through plagues, wars and political turmoil, divination, or the practice of seeking knowledge of the future or the unknown, has remained an integral part of society. Historically, royals and politicians would consult with diviners to guide decision-making and incite action. People have continued to seek comfort and guidance through divination in uncertain times — the COVID-19 pandemic saw a rise in apps enabling users to generate astrological charts or read the Yijing [I Ching], alongside a growth in horoscope and tarot communities on social media such as ‘WitchTok’. Many aspects of our lives are now dictated by algorithmic predictions, from e-health platforms to digital advertising. Scientific forecasters as well as doctors, detectives, and therapists have taken over many of the societal roles once held by diviners. Yet the predictions of today’s experts are not immune to criticism, nor can they answer all our questions.
Curated by Dr Michelle Aroney, whose research focuses on early modern science and religion, and Professor David Zeitlyn, an expert in the anthropology of divination, the exhibition will take a historical-anthropological approach to methods of prophecy, prediction and forecasting, covering a broad range of divination methods, including astrology, tarot, necromancy, and spider divination.
Dating back as far as ancient Mesopotamia, the exhibition will show us that the same kinds of questions have been asked of specialist practitioners from around the world throughout history. What is the best treatment for this illness? Does my loved one love me back? When will this pandemic end? Through materials from the archives of the Bodleian Libraries alongside other collections in Oxford, the exhibition demonstrates just how universally human it is to seek answers to difficult questions.
Highlights of the exhibition include: oracle bones from Shang Dynasty China (ca. 1250-1050 BCE); an Egyptian celestial globe dating to around 1318; a 16th-century armillary sphere from Flanders, once used by astrologers to place the planets in the sky in relation to the Zodiac; a nineteenth-century illuminated Javanese almanac; and the autobiography of astrologer Joan Quigley, who worked with Nancy and Ronald Reagan in the White House for seven years. The casebooks of astrologer-physicians in 16th- and 17th-century England also offer rare insights into the questions asked by clients across the social spectrum, about their health, personal lives, and business ventures, and in some cases the actions taken by them in response.
The exhibition also explores divination which involves the interpretation of patterns or clues in natural things, with the idea that natural bodies contain hidden clues that can be decrypted. Some diviners inspect the entrails of sacrificed animals (known as ‘extispicy’), as evidenced by an ancient Mesopotamian cuneiform tablet describing the observation of patterns in the guts of birds. Others use human bodies, with palm readers interpreting characters and fortunes etched in their clients’ hands. A sketch of Oscar Wilde’s palms – which his palm reader believed indicated “a great love of detail…extraordinary brain power and profound scholarship” – shows the revival of palmistry’s popularity in 19th century Britain.
The exhibition will also feature a case study of spider divination practised by the Mambila people of Cameroon and Nigeria, which is the research specialism of curator Professor David Zeitlyn, himself a Ŋgam dù diviner. This process uses burrowing spiders or land crabs to arrange marked leaf cards into a pattern, which is read by the diviner. The display will demonstrate the methods involved in this process and the way in which its results are interpreted by the card readers. African basket divination has also been observed through anthropological research, where diviners receive answers to their questions in the form of the configurations of thirty plus items after they have been tossed in the basket.
Dr Michelle Aroney and Professor David Zeitlyn, co-curators of the exhibition, say:
Every day we confront the limits of our own knowledge when it comes to the enigmas of the past and present and the uncertainties of the future. Across history and around the world, humans have used various techniques that promise to unveil the concealed, disclosing insights that offer answers to private or shared dilemmas and help to make decisions. Whether a diviner uses spiders or tarot cards, what matters is whether the answers they offer are meaningful and helpful to their clients. What is fun or entertainment for one person is deadly serious for another.
Richard Ovenden, Bodley’s [a nickname? Bodleian Libraries were founded by Sir Thomas Bodley] Librarian, said:
People have tried to find ways of predicting the future for as long as we have had recorded history. This exhibition examines and illustrates how across time and culture, people manage the uncertainty of everyday life in their own way. We hope that through the extraordinary exhibits, and the scholarship that brings them together, visitors to the show will appreciate the long history of people seeking answers to life’s biggest questions, and how people have approached it in their own unique way.
The exhibition will be accompanied by the book Divinations, Oracles & Omens, edited by Michelle Aroney and David Zeitlyn, which will be published by Bodleian Library Publishing on 5 December 2024.
I’m not sure why the preceding image is used to illustrate the exhibition webpage but I find it quite interesting. Should you be in Oxford, UK and lucky enough to visit the exhibition, there are a few more details on the Oracles, Omens and Answers event webpage, Note: There are 26 Bodleian Libraries at Oxford and the exhibition is being held in the Weston Library,
EXHIBITION
Oracles, Omens and Answers
6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library
Free admission, no ticket required
…
Note: This exhibition includes a large continuous projection of spider divination practice, including images of the spiders in action.
Exhibition tours
Oracles, Omens and Answers exhibition tours are available on selected Wednesdays and Saturdays from 1–1.45pm and are open to all.
By mimicking how parts of the human body work, scientists may have found a way to give robots more complex instructions, from an October 9, 2024 news item on ScienceDaily,
Engineers have worked out how to give robots complex instructions without electricity for the first time, which could free up more space in the robotic ‘brain’ for them to ‘think’.
Mimicking how some parts of the human body work, researchers from King’s College London have transmitted a series of commands to devices with a new kind of compact circuit, using variations in pressure from a fluid inside it.
They say this world first opens up the possibility of a new generation of robots, whose bodies could operate independently of their built-in control centre, with this space potentially being used instead for more complex AI powered software.
“Delegating tasks to different parts of the body frees up computational space for robots to ‘think,’ allowing future generations of robots to be more aware of their social context or even more dexterous. This opens the door for a new kind of robotics in places like social care and manufacturing,” said Dr Antonio Forte, Senior Lecturer in Engineering at King’s College London and senior author of the study.
The findings, published in Advanced Science, could also enable the creation of robots able to operate in situations where electricity-powered devices cannot work, such as exploration in irradiated areas like Chernobyl, which destroy circuits, and in electricity-sensitive environments like MRI rooms.
The researchers also hope that these robots could eventually be used in low-income countries which do not have reliable access to electricity.
Dr Forte said: “Put simply, robots are split into two parts: the brain and the body. An AI brain can help run the traffic system of a city, but many robots still struggle to open a door – why is that?
“Software has advanced rapidly in recent years, but hardware has not kept up. By creating a hardware system independent from the software running it, we can offload a lot of the computational load onto the hardware, in the same way your brain doesn’t need to tell your heart to beat.”
Currently, all robots rely on electricity and computer chips to function. A robotic ‘brain’ of algorithms and software translates information to the body or hardware through an encoder, which then performs an action.
In ‘soft robotics,’ a field which creates devices like robotic muscles out of soft materials, this is particularly an issue as it introduces hard electronic encoders and puts strain on the software for the material to act in a complex way, e.g. grabbing a door handle.
To circumvent this, the team developed a reconfigurable circuit with an adjustable valve to be placed within a robot’s hardware. This valve acts like a transistor in a normal circuit and engineers can send signals directly to hardware using pressure, mimicking binary code, allowing the robot to perform complex manoeuvres without the need for electricity or instruction from the central brain. This allows for a greater level of control than current fluid-based circuits.
By offloading the work of the software onto the hardware, the new circuit frees up computational space for future robotic systems to be more adaptive, complex, and useful.
As a next step, the researchers now hope to scale up their circuits from experimental hoppers and pipettes and embed them in larger robots, from crawlers used to monitor power plants to wheeled robots with entirely soft engines.
Mostafa Mousa, Post-graduate Researcher at King’s College London and author, said: “Ultimately, without investment in embodied intelligence robots will plateau. Soon, if we do not offload the computational load that modern day robots take on, algorithmic improvements will have little impact on their performance. Our work is just a first step on this path, but the future holds smarter robots with smarter bodies.”
Here’s a link to and a citation for the paper,
Frequency-Controlled Fluidic Oscillators for Soft Robots by Mostafa Mousa, Ashkan Rezanejad, Benjamin Gorissen, Antonio E. Forte. Advanced Science. DOI: https://doi.org/10.1002/advs.202408879 First published online: 08 October 2024
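For readers who want a feel for what “sending signals directly to hardware using pressure, mimicking binary code” might look like, here is a minimal, purely hypothetical sketch. Nothing in it comes from the King’s College London paper; the pressure levels, bit length, and function names are all invented for illustration.

```python
# Hypothetical sketch only: encode a command byte as a train of high/low
# pressure set-points that a fluidic valve (acting like a transistor) could
# clock out to a soft actuator. Pressure levels and timing are assumptions.
import time

P_LOW_KPA = 20    # assumed pressure level representing a "0"
P_HIGH_KPA = 60   # assumed pressure level representing a "1"

def command_to_pressure_pulses(command: int, bits: int = 8):
    """Convert an integer command into pressure set-points, most significant bit first."""
    return [P_HIGH_KPA if (command >> i) & 1 else P_LOW_KPA
            for i in reversed(range(bits))]

def send(pulses, pulse_s=0.5):
    """Stand-in for driving a pump or pressure regulator; here it just prints the levels."""
    for p in pulses:
        print(f"set pressure to {p} kPa")
        time.sleep(pulse_s)

# example: clock out the command 0b10110010 as eight pressure pulses
send(command_to_pressure_pulses(0b10110010), pulse_s=0.0)
```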
Where robots are concerned, mushrooms and other fungi aren’t usually considered part of the equipment, but one would be wrong according to a September 4, 2024 news item on ScienceDaily,
Building a robot takes time, technical skill, the right materials — and sometimes, a little fungus.
In creating a pair of new robots, Cornell University researchers cultivated an unlikely component, one found on the forest floor: fungal mycelia.
By harnessing mycelia’s innate electrical signals, the researchers discovered a new way of controlling “biohybrid” robots that can potentially react to their environment better than their purely synthetic counterparts.
…
An August 28, 2024 Cornell University news release (also on EurekAlert but published August 29, 2024) by David Nutt, which originated the news item, describes this (I’m tempted to call it revolutionary) new technique, Note: Links have been removed.
“This paper is the first of many that will use the fungal kingdom to provide environmental sensing and command signals to robots to improve their levels of autonomy,” Shepherd [Rob Shepherd, professor of mechanical and aerospace engineering at Cornell University] said. “By growing mycelium into the electronics of a robot, we were able to allow the biohybrid machine to sense and respond to the environment. In this case we used light as the input, but in the future it will be chemical. The potential for future robots could be to sense soil chemistry in row crops and decide when to add more fertilizer, for example, perhaps mitigating downstream effects of agriculture like harmful algal blooms.”
In designing the robots of tomorrow, engineers have taken many of their cues from the animal kingdom, with machines that mimic the way living creatures move, sense their environment and even regulate their internal temperature through perspiration. Some robots have incorporated living material, such as cells from muscle tissue, but those complex biological systems are difficult to keep healthy and functional. It’s not always easy, after all, to keep a robot alive.
Mycelia are the underground vegetative part of mushrooms, and they have a number of advantages. They can grow in harsh conditions. They also have the ability to sense chemical and biological signals and respond to multiple inputs.
“If you think about a synthetic system – let’s say, any passive sensor – we just use it for one purpose. But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals,” Mishra [Anand Mishra, a research associate in the Organic Robotics Lab at Cornell University] said. “That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”
However, finding a way to integrate mushrooms and robots requires more than just tech savvy and a green thumb.
“You have to have a background in mechanical engineering, electronics, some mycology, some neurobiology, some kind of signal processing,” Mishra said. “All these fields come together to build this kind of system.”
Mishra collaborated with a range of interdisciplinary researchers. He consulted with Bruce Johnson, senior research associate in neurobiology and behavior, and learned how to record the electrical signals that are carried in the neuron-like ionic channels in the mycelia membrane. Kathie Hodge, associate professor of plant pathology and plant-microbe biology in the School of Integrative Plant Science in the College of Agriculture and Life Sciences, taught Mishra how to grow clean mycelia cultures, because contamination turns out to be quite a challenge when you are sticking electrodes in fungus.
The system Mishra developed consists of an electrical interface that blocks out vibration and electromagnetic interference and accurately records and processes the mycelia’s electrophysiological activity in real time, and a controller inspired by central pattern generators – a kind of neural circuit. Essentially, the system reads the raw electrical signal, processes it and identifies the mycelia’s rhythmic spikes, then converts that information into a digital control signal, which is sent to the robot’s actuators.
Two biohybrid robots were built: a soft robot shaped like a spider and a wheeled bot.
The robots completed three experiments. In the first, the robots walked and rolled, respectively, as a response to the natural continuous spikes in the mycelia’s signal. Then the researchers stimulated the robots with ultraviolet light, which caused them to change their gaits, demonstrating mycelia’s ability to react to their environment. In the third scenario, the researchers were able to override the mycelia’s native signal entirely.
The implications go far beyond the fields of robotics and fungi.
“This kind of project is not just about controlling a robot,” Mishra said. “It is also about creating a true connection with the living system. Because once you hear the signal, you also understand what’s going on. Maybe that signal is coming from some kind of stresses. So you’re seeing the physical response, because those signals we can’t visualize, but the robot is making a visualization.”
Co-authors include Johnson, Hodge, Jaeseok Kim with the University of Florence, Italy, and undergraduate research assistant Hannah Baghdadi.
The research was supported by the National Science Foundation (NSF) CROPPS Science and Technology Center; the U.S. Department of Agriculture’s National Institute of Food and Agriculture; and the NSF Signal in Soil program.
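The pipeline Mishra describes (record the raw electrophysiological signal, pick out the rhythmic spikes, convert them into a digital control signal for the actuators) is easier to picture with a toy example. The sketch below is my own rough, hypothetical illustration of that idea, not the Cornell team’s code: the synthetic trace, the three-standard-deviation threshold, and the spike-rate-to-duty-cycle mapping are all assumptions made for the example.

```python
# Rough illustrative sketch: detect spikes in a (synthetic) signal and map the
# spike rate to an actuator command. Threshold and mapping are assumptions.
import numpy as np

RATE_HZ = 100  # assumed sampling rate of the recording interface

def detect_spikes(trace, threshold=3.0):
    """Return sample indices where the signal first rises more than
    `threshold` standard deviations above its mean (a simple spike detector)."""
    z = (trace - trace.mean()) / trace.std()
    above = z > threshold
    return np.flatnonzero(above & ~np.roll(above, 1))  # rising edges only

def spikes_to_command(spike_indices, window_s, max_rate=2.0):
    """Map spikes per second to an actuator duty cycle between 0 and 1."""
    rate = len(spike_indices) / window_s
    return min(rate / max_rate, 1.0)

# synthetic 10-second trace: a slow rhythm plus noise, standing in for mycelial activity
t = np.arange(0, 10, 1 / RATE_HZ)
trace = np.sin(2 * np.pi * 0.5 * t) ** 21 + 0.1 * np.random.randn(t.size)

spikes = detect_spikes(trace)
print(f"{len(spikes)} spikes detected; actuator duty cycle = "
      f"{spikes_to_command(spikes, window_s=10.0):.2f}")
```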
In the ever-evolving field of robotics, a groundbreaking approach has emerged, revolutionizing how robots perceive, navigate, and interact with their environments. This new frontier, known as brain-inspired navigation technology, integrates insights from neuroscience into robotics, offering enhanced capabilities and efficiency.
Brain-inspired navigation technologies are not just a mere improvement over traditional methods; they represent a paradigm shift. By mimicking the neural mechanisms of animals, these technologies provide robots with the ability to navigate through complex and unknown terrains with unprecedented accuracy and adaptability.
At the heart of this technology lies the concept of spatial cognition, which is central to how animals, including humans, navigate their environments. Spatial cognition involves the brain’s ability to organize and interpret spatial data for navigation and memory. Robots equipped with brain-inspired navigation systems utilize a multi-layered network model that integrates sensory data from multiple sources. This model allows the robot to create a ‘cognitive map’ of its surroundings, much like the neural maps created by the hippocampus in the human brain.
One of the significant advantages of brain-inspired navigation is its robustness in challenging environments. Traditional navigation systems often struggle with dynamic and unpredictable settings, where the reliance on pre-mapped routes and landmarks can lead to failures. In contrast, brain-inspired systems continuously learn and adapt, improving their navigational strategies over time. This capability is particularly beneficial in environments like disaster zones or extraterrestrial surfaces, where prior mapping is either impossible or impractical.
Moreover, these systems significantly reduce energy consumption and computational needs. By focusing only on essential data and employing efficient neural network models, robots can operate longer and perform more complex tasks without the need for frequent recharging or maintenance.
The technology’s applications are vast and varied. For instance, autonomous vehicles equipped with brain-inspired systems could navigate more safely and efficiently, reacting in real-time to sudden changes in traffic conditions or road layouts. Similarly, drones used for delivery services could plan their routes more effectively, avoiding obstacles and optimizing delivery times.
Despite its promising potential, the development of brain-inspired navigation technology faces several challenges. Integrating biological principles into mechanical systems is inherently complex, requiring multidisciplinary efforts from fields such as neuroscience, cognitive science, robotics, and artificial intelligence. Moreover, these systems must be scalable and versatile enough to be customized for different types of robotic platforms and applications.
As researchers continue to unravel the mysteries of the brain’s navigational capabilities, the future of robotics looks increasingly intertwined with the principles of neuroscience. The collaboration across disciplines promises not only to advance our understanding of the brain but also to pave the way for a new generation of intelligent robots. These robots will not only assist in mundane tasks but also perform critical roles in search and rescue operations, planetary exploration, and much more.
In conclusion, brain-inspired navigation technology represents a significant leap forward in robotics, merging the abstract with the applied, the biological with the mechanical, and the theoretical with the practical. As this technology continues to evolve, it will undoubtedly open new horizons for robotic applications, making machines an even more integral part of our daily lives and work.
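The ‘cognitive map’ idea at the core of that piece boils down to a robot fusing repeated, noisy observations into an internal spatial model that sharpens over time. Here is a deliberately tiny, hypothetical sketch of that behaviour using a log-odds occupancy grid; the grid size, sensor model, and update values are illustrative assumptions, not taken from any particular brain-inspired navigation system.

```python
# Minimal hypothetical sketch: fuse repeated noisy observations of space into
# a grid of occupancy estimates (a crude stand-in for a "cognitive map").
import numpy as np

GRID = np.zeros((20, 20))   # log-odds occupancy estimates for a 20 x 20 map
HIT, MISS = 0.9, -0.4       # assumed log-odds updates per observation

def integrate_reading(cell, occupied):
    """Update one grid cell from a single (noisy) sensor reading."""
    GRID[cell] += HIT if occupied else MISS

def occupancy_probability(cell):
    """Convert accumulated log-odds back to a probability."""
    return 1.0 / (1.0 + np.exp(-GRID[cell]))

# the robot repeatedly observes the same obstacle; its belief sharpens with
# each pass, the "continuously learn and adapt" behaviour described above
for _ in range(5):
    integrate_reading((4, 7), occupied=True)
print(f"P(occupied) at cell (4, 7): {occupancy_probability((4, 7)):.2f}")
```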
About the journal publisher, Science Partner Journals
Cyborg and Bionic Systems is published by the American Association for the Advancement of Science (AAAS) and is part of an open access publishing project known as the Science Partner Journal (SPJ) program, from the Program Overview webpage of science.org,
The Science Partner Journal (SPJ) program was launched in late 2017 by the American Association for the Advancement of Science (AAAS), the nonprofit publisher of the Science family of journals.
The program features high-quality, online-only, Open Access publications produced in collaboration with international research institutions, foundations, funders, and societies. Through these collaborations, AAAS furthers its mission to communicate science broadly and for the benefit of all people by providing top-tier international research organizations with the technology, visibility, and publishing expertise that AAAS is uniquely positioned to offer as the world’s largest general science membership society.
Organizations participating in the SPJ program are editorially independent and responsible for the content published in each journal. To oversee the publication process, each organization appoints editors committed to best practices in peer review and author service. It is the responsibility of the staff of each SPJ title to establish and execute all aspects of the peer review process, including oversight of editorial policy, selection of papers, and editing of content, following best practices advised by AAAS.
…
I’m starting to catch up with changes in the world of science publishing as you can see in my October 1, 2024 posting titled, “Nanomedicine: two stories about wound healing,” which features a subhead near the end of the post, Science Publishing, should you be interested in another science publishing initiative.
An August 15, 2024 news item on ScienceDaily announces research that may help make people safer in extreme heat,
As global warming intensifies, people increasingly suffer from extreme heat. For those working in a high-temperature environment indoors or outdoors, keeping thermally comfortable becomes particularly crucial. A team led by Dr Dahua SHOU, Limin Endowed Young Scholar in Advanced Textiles Technologies and Associate Professor of the School of Fashion and Textiles of The Hong Kong Polytechnic University (PolyU) has developed first-of-its-kind thermally-insulated and breathable soft robotic clothing that can automatically adapt to changing ambient temperatures, thereby helping to ensure worker safety in hot environments. Their research findings have been published in the international interdisciplinary journal Advanced Science.
Maintaining a constant body temperature is one of the most critical requirements for living and working. High-temperature environments elevate energy consumption, leading to increased heat stress, thus exacerbating chronic conditions such as cardiovascular disease, diabetes, mental health issues and asthma, while also increasing the risk of infectious disease transmission. According to the World Health Organisation, globally, there were approximately 489,000 heat-related deaths annually between 2000 and 2019, with 45% occurring in Asia and 36% in Europe.
Thermal protective clothing is essential to safeguard individuals in extreme high-temperature environments, such as firefighters who need to be present at fires [sic] scenes and construction workers who work outdoors for extended periods. However, traditional gear has been limited by statically fixed thermal resistance, which can lead to overheating and discomfort in moderate conditions, while its heat insulation may not offer sufficient protection in extreme fire events and other high-temperature environments. To address this issue, Dr Shou and his team have developed intelligent soft robotic clothing for automatic temperature adaptation and thermal insulation in hot environments, offering superior personal protection and thermal comfort across a range of temperatures.
Their research was inspired by biomimicry in nature, like the adaptive thermal regulation mechanism in pigeons, which is mainly based on structural changes. Pigeons use their feathers to trap a layer of air surrounding their skin to reduce heat loss to the environment. When the temperature drops, they fluff up their feathers to trap a significant amount of still air, thereby increasing thermal resistance and retaining warmth.
The protective clothing developed by the team uses soft robotic textile for dynamic adaptive thermal management. Soft actuators, designed like a human network-patterned exoskeleton and encapsulating a non-toxic, non-flammable, low-boiling-point fluid, were strategically embedded within the clothing. This thermo-stimulated system turns the fluid from a liquid into a gas when the ambient temperature rises, causing expansion of soft actuators and thickening the textile matrix, thereby enhancing the gap of still air and doubling the thermal resistance from 0.23 to 0.48 Km²/W. The protective clothing can also keep the inner surface temperatures at least 10°C cooler than conventional heat-resistant clothing, even when the outer surface reaches 120°C.
This unique soft robotic textile, made by thermoplastic polyurethane, is soft, resilient and durable. Notably, it is far more skin-friendly and conformable than temperature-responsive clothing embedded with shape-memory alloys and is adjustable for a wide range of protective clothing. The soft actuators have exhibited no signs of leakage after undergoing rigorous standard washing tests. The porous, spaced knitting structure of the material can also significantly reduce convective heat transfer while maintaining high moisture breathability. Not relying on thermoelectric chips or circulatory liquid cooling systems for cooling or heat conduction, the light-weighted, soft robotic clothing can effectively regulate temperature itself without any energy consumption.
Dr Shou said, “Wearing heavy firefighting gear can feel extremely stifling. When firefighters exit a fire scene and remove their gear, they are sometimes drained nearly a pound of sweat from their boots [sic]. This has motivated me to develop a novel suit capable of adapting to various environmental temperatures while maintaining excellent breathability. Our soft robotic clothing can seamlessly adapt to different seasons and climates, multiple working and living conditions, and transitions between indoor and outdoor environments to help users experience constant thermal comfort under intense heat.”
Looking forward, Dr Shou finds the innovation to have a wide range of potential applications, from activewear, winter jackets, healthcare apparel and outdoor gear, to sustainable textile-based insulation for construction and buildings, contributing to energy-saving efforts. Supported by the Innovation and Technology Commission and the Hong Kong Research Institute of Textiles and Apparel, Dr Shou and his team have also extended the thermo-adaptive concept to develop inflatable, breathable jackets and warm clothing. This soft robotic clothing is suitable for low-temperature environments or sudden temperature drops to aid those who are stranded in the wilderness to maintain normal body temperature.
Here’s a link to and a citation for the paper,
Soft Robotic Textiles for Adaptive Personal Thermal Management by Xiaohui Zhang, Zhaokun Wang, Guanghan Huang, Xujiang Chao, Lin Ye, Jintu Fan, Dahua Shou. Advanced Science, Volume 11, Issue 21, June 5, 2024, 2309605. First published online: 26 March 2024. DOI: https://doi.org/10.1002/advs.202309605
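To put the quoted numbers in perspective, a quick back-of-the-envelope calculation shows what doubling the thermal resistance does to heat flow. Steady-state heat flux through a layer is roughly the temperature difference divided by the thermal resistance; only the two resistance values come from the release, while the 100 K temperature difference below is my own illustrative assumption.

```python
# Back-of-the-envelope check: heat flux q = dT / R for the two quoted
# thermal resistances. The temperature difference is an assumed example value.
dT = 100.0                      # assumed outer-to-inner temperature difference, K
for R in (0.23, 0.48):          # quoted thermal resistance before/after actuation, K·m²/W
    print(f"R = {R:.2f} K·m²/W  ->  heat flux ≈ {dT / R:.0f} W/m²")
# about 435 W/m² falls to about 208 W/m²: doubling the resistance roughly
# halves the heat flowing toward the wearer for the same temperature difference.
```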
A July 18, 2024 news item on Nanowerk announces bioinspired stretchy batteries from the University of Cambridge,
Researchers have developed soft, stretchable ‘jelly batteries’ that could be used for wearable devices or soft robotics, or even implanted in the brain to deliver drugs or treat conditions such as epilepsy.
The researchers, from the University of Cambridge, took their inspiration from electric eels, which stun their prey with modified muscle cells called electrocytes.
Like electrocytes, the jelly-like materials developed by the Cambridge researchers have a layered structure, like sticky Lego, that makes them capable of delivering an electric current.
The self-healing jelly batteries can stretch to over ten times their original length without affecting their conductivity – the first time that such stretchability and conductivity has been combined in a single material. The results are reported in the journal Science Advances.
The jelly batteries are made from hydrogels: 3D networks of polymers that contain over 60% water. The polymers are held together by reversible on/off interactions that control the jelly’s mechanical properties.
The ability to precisely control mechanical properties and mimic the characteristics of human tissue makes hydrogels ideal candidates for soft robotics and bioelectronics; however, they need to be both conductive and stretchy for such applications.
“It’s difficult to design a material that is both highly stretchable and highly conductive, since those two properties are normally at odds with one another,” said first author Stephen O’Neill, from Cambridge’s Yusuf Hamied Department of Chemistry. “Typically, conductivity decreases when a material is stretched.”
“Normally, hydrogels are made of polymers that have a neutral charge, but if we charge them, they can become conductive,” said co-author Dr Jade McCune, also from the Department of Chemistry. “And by changing the salt component of each gel, we can make them sticky and squish them together in multiple layers, so we can build up a larger energy potential.”
Conventional electronics use rigid metallic materials with electrons as charge carriers, while the jelly batteries use ions to carry charge, like electric eels.
The hydrogels stick strongly to each other because of reversible bonds that can form between the different layers, using barrel-shaped molecules called cucurbiturils that are like molecular handcuffs. The strong adhesion between layers provided by the molecular handcuffs allows for the jelly batteries to be stretched, without the layers coming apart and crucially, without any loss of conductivity.
The properties of the jelly batteries make them promising for future use in biomedical implants, since they are soft and mould to human tissue. “We can customise the mechanical properties of the hydrogels so they match human tissue,” said Professor Oren Scherman, Director of the Melville Laboratory for Polymer Synthesis, who led the research in collaboration with Professor George Malliaras from the Department of Engineering. “Since they contain no rigid components such as metal, a hydrogel implant would be much less likely to be rejected by the body or cause the build-up of scar tissue.”
In addition to their softness, the hydrogels are also surprisingly tough. They can withstand being squashed without permanently losing their original shape, and can self-heal when damaged.
The researchers are planning future experiments to test the hydrogels in living organisms to assess their suitability for a range of medical applications.
The research was funded by the European Research Council and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Oren Scherman is a Fellow of Jesus College, Cambridge.
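McCune’s point about squishing charged gels “together in multiple layers, so we can build up a larger energy potential” is essentially series stacking, the same trick an electric eel’s electrocytes use. The toy calculation below illustrates that idea only; the per-cell voltage is a made-up placeholder, not a figure from the Cambridge paper.

```python
# Toy illustration of series stacking: identical gel cells add their voltages.
V_CELL = 0.15      # assumed voltage of one gel layer pair, volts (placeholder)
for n_layers in (1, 10, 100):
    print(f"{n_layers:>3} stacked cells -> about {n_layers * V_CELL:.2f} V")
```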
I got a bit of a jolt from this September 12, 2024 essay by Peter A. Noble, affiliate professor of microbiology at the University of Washington, and Alex Pozhitkov, senior technical lead of bioinformatics, Irell & Manella Graduate School of Biological Sciences at City of Hope, for The Conversation (h/t Sept. 12, 2024 item on phys.org), Note: Links have been removed,
Life and death are traditionally viewed as opposites. But the emergence of new multicellular life-forms from the cells of a dead organism introduces a “third state” that lies beyond the traditional boundaries of life and death.
Usually, scientists consider death to be the irreversible halt of functioning of an organism as a whole. However, practices such as organ donation highlight how organs, tissues and cells can continue to function even after an organism’s demise. This resilience raises the question: What mechanisms allow certain cells to keep working after an organism has died?
We are researchers who investigate what happens within organisms after they die. In our recently published review, we describe how certain cells – when provided with nutrients, oxygen, bioelectricity or biochemical cues – have the capacity to transform into multicellular organisms with new functions after death.
Life, death and emergence of something new
The third state challenges how scientists typically understand cell behavior. While caterpillars metamorphosing into butterflies, or tadpoles evolving into frogs, may be familiar developmental transformations, there are few instances where organisms change in ways that are not predetermined. Tumors, organoids and cell lines that can indefinitely divide in a petri dish, like HeLa cells [cervical cancer cells taken from Henrietta Lacks without her knowledge], are not considered part of the third state because they do not develop new functions.
However, researchers found that skin cells extracted from deceased frog embryos were able to adapt to the new conditions of a petri dish in a lab, spontaneously reorganizing into multicellular organisms called xenobots [emphasis mine]. These organisms exhibited behaviors that extend far beyond their original biological roles. Specifically, these xenobots use their cilia – small, hair-like structures – to navigate and move through their surroundings, whereas in a living frog embryo, cilia are typically used to move mucus.
Xenobots are also able to perform kinematic self-replication, meaning they can physically replicate their structure and function without growing. This differs from more common replication processes that involve growth within or on the organism’s body.
Researchers have also found that solitary human lung cells can self-assemble into miniature multicellular organisms that can move around. These anthrobots [emphasis mine] behave and are structured in new ways. They are not only able to navigate their surroundings but also repair both themselves and injured neuron cells placed nearby.
Taken together, these findings demonstrate the inherent plasticity of cellular systems and challenge the idea that cells and organisms can evolve only in predetermined ways. The third state suggests that organismal death may play a significant role in how life transforms over time.
…
I had not realized that xenobots are derived from dead frog embryos, something I missed when mentioning or featuring them in previous stories, the latest being a September 13, 2024 posting, which also mentions anthrobots. Previous stories were published in a June 21, 2021 posting about xenobots 2.0 and their ability to move and a June 8, 2022 posting about their ability to reproduce. Thank you to the authors for relieving me of some of my ignorance.
For some reason I was expecting mention, brief or otherwise, of ethical or social implications but the authors offered this instead, from their September 12, 2024 essay, Note: Links have been removed,
Implications for biology and medicine
The third state not only offers new insights into the adaptability of cells. It also offers prospects for new treatments.
For example, anthrobots could be sourced from an individual’s living tissue to deliver drugs without triggering an unwanted immune response. Engineered anthrobots injected into the body could potentially dissolve arterial plaque in atherosclerosis patients and remove excess mucus in cystic fibrosis patients.
Importantly, these multicellular organisms have a finite life span, naturally degrading after four to six weeks. This “kill switch” prevents the growth of potentially invasive cells.
A better understanding of how some cells continue to function and metamorphose into multicellular entities some time after an organism’s demise holds promise for advancing personalized and preventive medicine.
I look forward to hearing about the third state and about any ethical or social issues that may arise from it.
Laura Tran’s June 14, 2024 article for The Scientist gives a brief history of Michael Levin’s and his team’s work on developing living robots using stem cells from an African clawed frog (known as Xenopus laevis) and offers an update on the team’s work into synthetic lifeforms. First, the xenobots, Note 1: This could be difficult for people with issues regarding animal experimentation; Note 2: Links have been removed,
It began with little pieces of embryos scooting around in a dish. In 1998, these unassuming cells caught the attention of Michael Levin, then a postdoctoral researcher studying cell biology at Harvard University. He recalled simply recording a video before tucking the memory away. Nearly two decades later, Levin, now a developmental and synthetic biologist at Tufts University, experienced a sense of déjà vu. He observed that as a student transplanted tissues from one embryo to another, some loose cells swam free in the dish.
Levin had a keen interest in the collective intelligence of cells, tissues, organs, and artificial constructs within regenerative medicine, and he wondered if he could explore the plasticity and harness the untapped capabilities of these swirling embryonic stem cells. “At that point, I started thinking that this is probably an amazing biorobotics platform,” recalled Levin. He rushed to describe this idea to Douglas Blackiston, a developmental and synthetic biologist at Tufts University who worked alongside Levin.
At the time, Blackiston was conducting plasticity research to restore vision in blind African clawed frog tadpoles, Xenopus laevis, a model organism used to understand development. Blackiston transplanted the eyes to unusual places, such as the back of the head or even the tail, to test the integration of transplanted sensory organs.1 The eye axons extended to either the gut or spinal cord. In a display of dynamic plasticity, transplanted eyes on the tail that extended an optic nerve into the spinal cord restored the tadpoles’ vision.2
…
In a similar vein, Josh Bongard, an evolutionary roboticist at the University of Vermont and Levin’s longtime colleague, pondered how robots could evolve like animals. He wanted to apply biological evolution to a machine by tinkering with the brains and bodies of robots and explored this idea with Sam Kriegman, then a graduate student in Bongard’s group and now an assistant professor at Northwestern University. Kriegman used evolutionary algorithms and artificial intelligence (AI) to simulate biological evolution in a virtual creature before teaming up with engineers to construct a physical version.
…
I have two stories about the xenobots. I was a little late to the party, so the June 21, 2021 posting is about xenobots 2.0 and their ability to move and the June 8, 2022 posting is about their ability to reproduce.
“People thought this was a one-off froggy-specific result, but this is a very profound thing,” emphasized Levin. To demonstrate its translatability in a non-frog model, he wondered, “What’s the furthest from an embryonic frog? Well, that would be an adult human.”
He enlisted the help of Gizem Gumuskaya, a synthetic biologist with an architectural background in Levin’s group, to tackle this challenge of creating biological robots using human cells to create anthrobots.8 While Gumuskaya was not involved with the development of xenobots, she drew inspiration from their design. By using adult human tracheal cells, she found that adult cells still displayed morphologic plasticity.
…
There are several key differences between xenobots and anthrobots: species, cell source (embryonic or adult), and the anthrobots’ ability to self-assemble without manipulation. “When considering applications, as a rule of thumb, xenobots are better suited to the environment. They exhibit higher durability, require less maintenance, and can coexist within the environment,” said Gumuskaya.
Meanwhile, there is greater potential for the use of mammalian-derived biobots in biomedical applications. This could include localized drug delivery, deposition into the arteries to break up plaque buildup, or deploying anthrobots into tissue to act as biosensors. “[Anthrobots] are poised as a personalized agent with the same DNA but new functionality,” remarked Gumuskaya.
…
Here’s a link to and a citation for the team’s latest paper,
Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells by Gizem Gumuskaya, Pranjal Srivastava, Ben G. Cooper, Hannah Lesser, Ben Semegran, Simon Garnier, Michael Levin. Advanced Science Volume 11, Issue 4 January 26, 2024 2303575 DOI: https://doi.org/10.1002/advs.202303575 First published: 30 November 2023
A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues it raises. Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,
Development of ‘living robots’ needs regulation and public debate
Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.
In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.
Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.
Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”
Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.
The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment; Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs); and Moral status.
In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.
“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”
The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.
The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.
Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.
“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”
In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.
“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.
“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”
Here’s a link to and a citation for the paper,
Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121
This paper is open access.
Cyborg or biohybrid robot?
Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.
I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA. Note: More about BBVA later,
…
The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?
Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].
….
The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].
…
So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.
As promised, here’s more about BBVA,
Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]
OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.
“Sharing knowledge for a better future.”
At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.
Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).