Category Archives: robots

A biocompatible (implantable) micromachine (microrobot)

I appreciate the detail and information in this well-written Jan. 4, 2017 Columbia University news release (h/t Jan. 4, 2017 Nanowerk; Note: Links have been removed),

A team of researchers led by Biomedical Engineering Professor Sam Sia has developed a way to manufacture microscale-sized machines from biomaterials that can safely be implanted in the body. Working with hydrogels, which are biocompatible materials that engineers have been studying for decades, Sia has invented a new technique that stacks the soft material in layers to make devices that have three-dimensional, freely moving parts. The study, published online January 4, 2017, in Science Robotics, demonstrates a fast manufacturing method Sia calls “implantable microelectromechanical systems” (iMEMS).

By exploiting the unique mechanical properties of hydrogels, the researchers developed a “locking mechanism” for precise actuation and movement of freely moving parts, which can provide functions such as valves, manifolds, rotors, pumps, and drug delivery. They were able to tune the biomaterials within a wide range of mechanical and diffusive properties and to control them after implantation without a sustained power supply such as a toxic battery. They then tested the “payload” delivery in a bone cancer model and found that the triggering of release of doxorubicin from the device over 10 days showed high treatment efficacy and low toxicity, at 1/10 of the standard systemic chemotherapy dose.

“Overall, our iMEMS platform enables development of biocompatible implantable microdevices with a wide range of intricate moving components that can be wirelessly controlled on demand and solves issues of device powering and biocompatibility,” says Sia, also a member of the Data Science Institute. “We’re really excited about this because we’ve been able to connect the world of biomaterials with that of complex, elaborate medical devices. Our platform has a large number of potential applications, including the drug delivery system demonstrated in our paper which is linked to providing tailored drug doses for precision medicine.”

I particularly like this bit about hydrogels being a challenge to work with and the difficulties of integrating both rigid and soft materials,

Most current implantable microdevices have static components rather than moving parts and, because they require batteries or other toxic electronics, have limited biocompatibility. Sia’s team spent more than eight years working on how to solve this problem. “Hydrogels are difficult to work with, as they are soft and not compatible with traditional machining techniques,” says Sau Yin Chin, lead author of the study who worked with Sia. “We have tuned the mechanical properties and carefully matched the stiffness of structures that come in contact with each other within the device. Gears that interlock have to be stiff in order to allow for force transmission and to withstand repeated actuation. Conversely, structures that form locking mechanisms have to be soft and flexible to allow for the gears to slip by them during actuation, while at the same time they have to be stiff enough to hold the gears in place when the device is not actuated. We also studied the diffusive properties of the hydrogels to ensure that the loaded drugs do not easily diffuse through the hydrogel layers.”

The team used light to polymerize sheets of gel and incorporated a stepper mechanism to control the z-axis and pattern the sheets layer by layer, giving them three-dimensionality. Controlling the z-axis enabled the researchers to create composite structures within one layer of the hydrogel while managing the thickness of each layer throughout the fabrication process. They were able to stack multiple layers that are precisely aligned and, because they could polymerize a layer at a time, one right after the other, the complex structure was built in under 30 minutes.

Sia’s iMEMS technique addresses several fundamental considerations in building biocompatible microdevices, micromachines, and microrobots: how to power small robotic devices without using toxic batteries, how to make small biocompatible moveable components that are not silicon which has limited biocompatibility, and how to communicate wirelessly once implanted (radio frequency microelectronics require power, are relatively large, and are not biocompatible). The researchers were able to trigger the iMEMS device to release additional payloads over days to weeks after implantation. They were also able to achieve precise actuation by using magnetic forces to induce gear movements that, in turn, bend structural beams made of hydrogels with highly tunable properties. (Magnetic iron particles are commonly used and FDA-approved for human use as contrast agents.)

In collaboration with Francis Lee, an orthopedic surgeon at Columbia University Medical Center at the time of the study, the team tested the drug delivery system on mice with bone cancer. The iMEMS system delivered chemotherapy adjacent to the cancer, and limited tumor growth while showing less toxicity than chemotherapy administered throughout the body.

“These microscale components can be used for microelectromechanical systems, for larger devices ranging from drug delivery to catheters to cardiac pacemakers, and soft robotics,” notes Sia. “People are already making replacement tissues and now we can make small implantable devices, sensors, or robots that we can talk to wirelessly. Our iMEMS system could bring the field a step closer in developing soft miniaturized robots that can safely interact with humans and other living systems.”

Here’s a link to and a citation for the paper,

Additive manufacturing of hydrogel-based materials for next-generation implantable medical devices by Sau Yin Chin, Yukkee Cheung Poh, Anne-Céline Kohler, Jocelyn T. Compton, Lauren L. Hsu, Kathryn M. Lau, Sohyun Kim, Benjamin W. Lee, Francis Y. Lee, and Samuel K. Sia. Science Robotics  04 Jan 2017: Vol. 2, Issue 2, DOI: 10.1126/scirobotics.aah6451

This paper appears to be open access.

The researchers have provided a video demonstrating their work (you may want to read the caption below before watching),

Magnetic actuation of the Geneva drive device. A magnet is placed about 1cm below and without contact with the device. The rotating magnet results in the rotational movement of the smaller driving gear. With each full rotation of this driving gear, the larger driven gear is engaged and rotates by 60º, exposing the next reservoir to the aperture on the top layer of the device.

—Video courtesy of Sau Yin Chin/Columbia Engineering

You can hear some background conversation but it doesn’t seem to have been included for informational purposes.
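The device in the video is a textbook Geneva mechanism, so the indexing is easy to spell out. Here’s a small Python sketch of my own (not the researchers’ code) that assumes six reservoirs, following the caption’s description that each full rotation of the driving gear advances the driven gear by 60º,

```python
# Toy model of the Geneva drive indexing described in the video caption:
# each full rotation of the small driving gear advances the driven gear
# by 60 degrees, exposing the next of six drug reservoirs in turn.

RESERVOIRS = 6                     # 360 degrees / 60 degrees per step
STEP_DEGREES = 360 / RESERVOIRS

def exposed_reservoir(driving_gear_rotations: int) -> int:
    """Return the index (0-5) of the reservoir under the aperture
    after a given number of full rotations of the driving gear."""
    return driving_gear_rotations % RESERVOIRS

def driven_gear_angle(driving_gear_rotations: int) -> float:
    """Cumulative rotation of the driven gear, in degrees."""
    return driving_gear_rotations * STEP_DEGREES

if __name__ == "__main__":
    for turns in range(8):
        print(f"{turns} rotations -> reservoir {exposed_reservoir(turns)}, "
              f"driven gear at {driven_gear_angle(turns) % 360:.0f} degrees")
```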

Artificial intelligence and industrial applications

This is a take on artificial intelligence that I haven’t encountered before. Sean Captain’s Nov. 15, 2016 article for Fast Company profiles industry giant GE (General Electric) and its foray into that world (Note: Links have been removed),

When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.

The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February [2016?], but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”
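Captain’s article doesn’t describe GE’s algorithms, but the general idea of mining a sensor stream for early signs of trouble can be sketched in a few lines. This Python toy of mine uses made-up turbine temperatures and a simple rolling z-score (nothing to do with what Predix actually runs) to flag readings that drift away from recent behaviour, the kind of signal that might prompt preventative maintenance,

```python
# Toy predictive-maintenance sketch: flag sensor readings that deviate
# sharply from their recent history using a rolling z-score.
# Illustrative only -- not GE's Predix algorithms.
import numpy as np

def rolling_zscore_alerts(readings, window=20, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    readings = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    temps = 70 + rng.normal(0, 0.5, 200)   # made-up gas turbine temperatures
    temps[150:] += 5.0                     # simulated drift toward a fault
    print("suspect sample indices:", rolling_zscore_alerts(temps))
```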

Today [Nov. 15, 2016] GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.

The second purchase, Wise.io, is less obvious. Founded by astrophysics and AI experts using machine learning to study the heavens, the company reapplied the tech to streamlining a company’s customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.

Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.

One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.

“I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.

I recommend reading the article in its entirety if you have the time. And, for those of us in British Columbia, Canada, it was a surprise to see BC Hydro on the list of customers for one of GE’s new acquisitions. As well, that business about the pipelines hits home hard given the current debates (Enbridge Northern Gateway Pipelines) here. *ETA Dec. 27, 2016: This was originally edited just prior to publication to include information about the announcement by the Trudeau cabinet approving two pipelines for Trans Mountain and Enbridge respectively while rejecting the Northern Gateway pipeline (Canadian Broadcasting Corporation [CBC] online news Nov. 29, 2016). I trust this second edit will stick.*

It seems GE is splashing out in a big way. There’s a second piece on Fast Company, a Nov. 16, 2016 article by Sean Captain (again) this time featuring a chat between an engineer and a robotic power plant,

We are entering the era of talking machines—and it’s about more than just asking Amazon’s Alexa to turn down the music. General Electric has built a digital assistant into its cloud service for managing power plants, jet engines, locomotives, and the other heavy equipment it builds. Over the internet, an engineer can ask a machine—even one hundreds of miles away—how it’s doing and what it needs. …

Voice controls are built on top of GE’s Digital Twin program, which uses sensor readings from machinery to create virtual models in cyberspace. “That model is constantly getting a stream of data, both operational and environmental,” says Colin Parris, VP at GE Software Research. “So it’s adapting itself to that type of data.” The machines live virtual lives online, allowing engineers to see how efficiently each is running and if they are wearing down.
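GE doesn’t spell out how a Digital Twin is implemented, but the basic pattern — a virtual model that keeps updating itself from a machine’s sensor stream and can be queried about the machine’s health — can be illustrated with a toy. The Python class below is entirely my own sketch; the field names and the wear heuristic are invented, not GE’s,

```python
# Minimal "digital twin" sketch: a virtual stand-in for a machine that
# updates itself from each sensor reading and can be queried for status.
# Purely illustrative; the fields and wear heuristic are invented.
from dataclasses import dataclass, field

@dataclass
class TurbineTwin:
    rated_output_mw: float
    efficiency: float = 1.0          # running estimate, 1.0 = like new
    vibration_mm_s: float = 0.0      # latest smoothed vibration level
    smoothing: float = 0.1           # weight given to each new reading
    history: list = field(default_factory=list)

    def ingest(self, output_mw: float, vibration_mm_s: float) -> None:
        """Blend a new operational reading into the twin's state."""
        observed_eff = output_mw / self.rated_output_mw
        self.efficiency += self.smoothing * (observed_eff - self.efficiency)
        self.vibration_mm_s += self.smoothing * (vibration_mm_s - self.vibration_mm_s)
        self.history.append((output_mw, vibration_mm_s))

    def status(self) -> str:
        """Answer the engineer's question: how is the machine doing?"""
        if self.vibration_mm_s > 7.0 or self.efficiency < 0.85:
            return "degraded -- schedule inspection"
        return "healthy"

if __name__ == "__main__":
    twin = TurbineTwin(rated_output_mw=100.0)
    for reading in [(98, 2.1), (97, 2.4), (90, 6.8), (84, 8.2)]:
        twin.ingest(*reading)
    print(twin.status(), round(twin.efficiency, 3), round(twin.vibration_mm_s, 2))
```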

GE partnered with Microsoft on the interface, using the Bing Speech API (the same tech powering the Cortana digital assistant), with special training on key terms like “rotor.” The twin had little trouble understanding the Mandarin Chinese accent of Bo Yu, one of the researchers who built the system; nor did it stumble on Parris’s Trinidad accent. Digital Twin will also work with Microsoft’s HoloLens mixed reality goggles, allowing someone to step into a 3D image of the equipment.

I can’t help wondering if there are some jobs that were eliminated with this technology.

Morphing airplane wing

Long a science fiction trope, ‘morphing’, in this case, an airplane wing, is closer to reality with this work from the Massachusetts Institute of Technology (MIT). From a Nov. 3, 2016 MIT news release (also on EurekAlert),

When the Wright brothers accomplished their first powered flight more than a century ago, they controlled the motion of their Flyer 1 aircraft using wires and pulleys that bent and twisted the wood-and-canvas wings. This system was quite different than the separate, hinged flaps and ailerons that have performed those functions on most aircraft ever since. But now, thanks to some high-tech wizardry developed by engineers at MIT and NASA, some aircraft may be returning to their roots, with a new kind of bendable, “morphing” wing.

The new wing architecture, which could greatly simplify the manufacturing process and reduce fuel consumption by improving the wing’s aerodynamics, as well as improving its agility, is based on a system of tiny, lightweight subunits that could be assembled by a team of small specialized robots, and ultimately could be used to build the entire airframe. The wing would be covered by a “skin” made of overlapping pieces that might resemble scales or feathers.

The new concept is described in the journal Soft Robotics, in a paper by Neil Gershenfeld, director of MIT’s Center for Bits and Atoms (CBA); Benjamin Jenett, a CBA graduate student; Kenneth Cheung PhD ’12, a CBA alumnus and NASA research scientist; and four others.

Researchers have been trying for many years to achieve a reliable way of deforming wings as a substitute for the conventional, separate, moving surfaces, but all those efforts “have had little practical impact,” Gershenfeld says. The biggest problem was that most of these attempts relied on deforming the wing through the use of mechanical control structures within the wing, but these structures tended to be so heavy that they canceled out any efficiency advantages produced by the smoother aerodynamic surfaces. They also added complexity and reliability issues.

By contrast, Gershenfeld says, “We make the whole wing the mechanism. It’s not something we put into the wing.” In the team’s new approach, the whole shape of the wing can be changed, and twisted uniformly along its length, by activating two small motors that apply a twisting pressure to each wingtip.

Like building with blocks

The basic principle behind the new concept is the use of an array of tiny, lightweight structural pieces, which Gershenfeld calls “digital materials,” that can be assembled into a virtually infinite variety of shapes, much like assembling a structure from Lego blocks. The assembly, performed by hand for this initial experiment, could be done by simple miniature robots that would crawl along or inside the structure as it took shape. The team has already developed prototypes of such robots.

The individual pieces are strong and stiff, but the exact choice of the dimensions and materials used for the pieces, and the geometry of how they are assembled, allow for a precise tuning of the flexibility of the final shape. For the initial test structure, the goal was to allow the wing to twist in a precise way that would substitute for the motion of separate structural pieces (such as the small ailerons at the trailing edges of conventional wings), while providing a single, smooth aerodynamic surface.
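The ‘digital materials’ idea — a structure built from a small set of identical, discrete part types whose arrangement sets the overall stiffness — lends itself to a simple illustration. The sketch below is my own toy model (two hypothetical part types, nothing from the CBA team’s design tools); it treats a wing section as a grid of unit cells and varies the mix of types along the span so the outboard wing is more compliant,

```python
# Toy illustration of the "digital materials" idea: a structure described
# as a discrete lattice of identical unit cells, here with two part types
# whose mix along the wingspan sets local compliance.  Not the MIT/NASA
# design tools -- just a way to picture discrete assembly.

STIFF, FLEX = "S", "F"

def wing_lattice(span_cells=10, chord_cells=4, flex_fraction_at_tip=0.75):
    """Return a 2D list of cell types; cells near the tip are more often
    the flexible type so the outboard wing can twist more easily."""
    lattice = []
    for i in range(span_cells):
        flex_fraction = flex_fraction_at_tip * i / (span_cells - 1)
        row = [FLEX if j < flex_fraction * chord_cells else STIFF
               for j in range(chord_cells)]
        lattice.append(row)
    return lattice

if __name__ == "__main__":
    for row in wing_lattice():
        print(" ".join(row))
    # Identical parts, different arrangement: repairs mean swapping cells,
    # and disassembly returns the same reusable building blocks.
```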

Building up a large and complex structure from an array of small, identical building blocks, which have an exceptional combination of strength, light weight, and flexibility, greatly simplifies the manufacturing process, Gershenfeld explains. While the construction of light composite wings for today’s aircraft requires large, specialized equipment for layering and hardening the material, the new modular structures could be rapidly manufactured in mass quantities and then assembled robotically in place.

Gershenfeld and his team have been pursuing this approach to building complex structures for years, with many potential applications for robotic devices of various kinds. For example, this method could lead to robotic arms and legs whose shapes could bend continuously along their entire length, rather than just having a fixed number of joints.

This research, says Cheung, “presents a general strategy for increasing the performance of highly compliant — that is, ‘soft’ — robots and mechanisms,” by replacing conventional flexible materials with new cellular materials “that are much lower weight, more tunable, and can be made to dissipate energy at much lower rates” while having equivalent stiffness.

Saving fuel, cutting emissions

While exploring possible applications of this nascent technology, Gershenfeld and his team consulted with NASA engineers and others seeking ways to improve the efficiency of aircraft manufacturing and flight. They learned that “the idea that you could continuously deform a wing shape to do pure lift and roll has been a holy grail in the field, for both efficiency and agility,” he says. Given the importance of fuel costs in both the economics of the airline industry and that sector’s contribution to greenhouse gas emissions, even small improvements in fuel efficiency could have a significant impact.

Wind-tunnel tests of this structure showed that it at least matches the aerodynamic properties of a conventional wing, at about one-tenth the weight.

The “skin” of the wing also enhances the structure’s performance. It’s made from overlapping strips of flexible material, layered somewhat like feathers or fish scales, allowing for the pieces to move across each other as the wing flexes, while still providing a smooth outer surface.

The modular structure also provides greater ease of both assembly and disassembly: One of this system’s big advantages, in principle, Gershenfeld says, is that when it’s no longer needed, the whole structure can be taken apart into its component parts, which can then be reassembled into something completely different. Similarly, repairs could be made by simply replacing an area of damaged subunits.

“An inspection robot could just find where the broken part is and replace it, and keep the aircraft 100 percent healthy at all times,” says Jenett.

Following up on the successful wind tunnel tests, the team is now extending the work to tests of a flyable unpiloted aircraft, and initial tests have shown great promise, Jenett says. “The first tests were done by a certified test pilot, and he found it so responsive that he decided to do some aerobatics.”

Some of the first uses of the technology may be to make small, robotic aircraft — “super-efficient long-range drones,” Gershenfeld says, that could be used in developing countries as a way of delivering medicines to remote areas.

Here’s a link to and a citation for the paper,

Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures by Benjamin Jenett, Sam Calisch, Daniel Cellucci, Nick Cramer, Neil Gershenfeld, Sean Swei, and Kenneth C. Cheung. Soft Robotics. October 2016, ahead of print. doi:10.1089/soro.2016.0032. Published online: Oct. 26, 2016

This paper is open access.

‘Robomussels’ for climate change

These ‘robomussels’ are not voting but they are being used to monitor mussel bed habitats according to an Oct. 17, 2016 news item on ScienceDaily,

Tiny robots have been helping researchers study how climate change affects biodiversity. Developed by Northeastern University scientist Brian Helmuth, the “robomussels” have the shape, size, and color of actual mussels, with miniature built-in sensors that track temperatures inside the mussel beds.

Caption: This is a robomussel, seen among living mussels and other sea creatures. Credit: Allison Matzelle

An Oct. 12, 2016 Northeastern University news release (also on EurekAlert), which originated the news item, describes a project some 20 years in the making,

For the past 18 years, every 10 to 15 minutes, Helmuth and a global research team of 48 scientists have used robomussels to track internal body temperature, which is determined by the temperature of the surrounding air or water, and the amount of solar radiation the devices absorb. They place the robots inside mussel beds in oceans around the globe and record temperatures. The researchers have built a database of nearly two decades’ worth of data enabling scientists to pinpoint areas of unusual warming, intervene to help curb damage to vital marine ecosystems, and develop strategies that could prevent extinction of certain species.
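To get a sense of the scale of that dataset, a reading every 10 to 15 minutes for 18 years works out to several hundred thousand measurements per robomussel. The quick calculation below is my own back-of-the-envelope arithmetic based on the cadence quoted in the release, not figures from the paper,

```python
# Back-of-the-envelope estimate of readings per robomussel:
# one internal-temperature measurement every 10 to 15 minutes for 18 years.
MINUTES_PER_YEAR = 60 * 24 * 365
YEARS = 18

for interval_min in (10, 15):
    readings = YEARS * MINUTES_PER_YEAR // interval_min
    print(f"every {interval_min} min -> roughly {readings:,} readings per sensor")
# every 10 min -> roughly 946,080; every 15 min -> roughly 630,720
```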

Housed at Northeastern’s Marine Science Center in Nahant, Massachusetts, this largest-ever database is not only a remarkable way to track the effects of climate change, but the findings can also reveal emerging hotspots so policymakers and scientists can step in and relieve stressors such as erosion and water acidification before it’s too late.

“They look exactly like mussels but they have little green blinking lights in them,” says Helmuth. “You basically pluck out a mussel and then glue the device to the rock right inside the mussel bed. They enable us to link our field observations with the physiological impact of global climate change on these ecologically and economically important animals.”

For ecological forecasters such as Helmuth, mussels act as a barometer of climate change. That’s because they rely on external sources of heat such as air temperature and sun exposure for their body heat and thrive, or not, depending on those conditions. Using fieldwork along with mathematical and computational models, Helmuth forecasts the patterns of growth, reproduction, and survival of mussels in intertidal zones.

Over the years, he and his colleagues have found surprises: “Our expectations of where to look for the effects of climate change in nature are more complex than anticipated,” says Helmuth. For example, in an earlier paper in the journal Science, his team found that hotspots existed not only at the southern end of the species’ distribution, in this case, southern California; they also existed at sites up north, in Oregon and Washington state.

“These datasets tell us when and where to look for the effects of climate change,” he says. “Without them we could miss early warning signs of trouble.”

The robomussels’ near-continuous measurements serve as an early warning system. “If we start to see sites where the animals are regularly getting to temperatures that are right below what kills them, we know that any slight increase is likely to send them over the edge, and we can act,” says Helmuth.

It’s not only the mussels that may be pulled back from the brink. The advance notice could inform everything from maintaining the biodiversity of coastal systems to determining the best–and worst–places to locate mussel farms.

“Losing mussel beds is essentially like clearing a forest,” says Helmuth. “If they go, everything that’s living in them will go. They are a major food supply for many species, including lobsters and crabs. They also function as filters along near-shore waters, clearing huge amounts of particulates. So losing them can affect everything from the growth of species we care about because we want to eat them to water clarity to biodiversity of all the tiny animals that live on the insides of the beds.”

Here’s a link to and a citation for the paper,

Long-term, high frequency in situ measurements of intertidal mussel bed temperatures using biomimetic sensors by Brian Helmuth, Francis Choi, Gerardo Zardi.  Scientific Data 3, Article number: 160087 (2016) doi:10.1038/sdata.2016.87 Published online: 11 October 2016

This paper is open access.

A computer that intuitively predicts a molecule’s chemical properties

First, we have emotional artificial intelligence from MIT (Massachusetts Institute of Technology) with their Kismet [emotive AI] project and now we have intuitive computers according to an Oct. 14, 2016 news item on Nanowerk,

Scientists from Moscow Institute of Physics and Technology (MIPT)’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs.

An Oct. 14, 2016 Moscow Institute of Physics and Technology press release (also on EurekAlert), which originated the news item, expands on the theme,

Imagine that you were to develop a new drug. Designing a drug with predetermined properties is called drug-design. Once a drug has entered the human body, it needs to take effect on the cause of a disease. On a molecular level this is a malfunction of some proteins and their encoding genes. In drug-design these are called targets. If a drug is antiviral, it must somehow prevent the incorporation of viral DNA into human DNA. In this case the target is viral protein. The structure of the incorporating protein is known, and we also even know which area is the most important – the active site. If we insert a molecular “plug” then the viral protein will not be able to incorporate itself into the human genome and the virus will die. It boils down to this: you find the “plug” – you have your drug.

But how can we find the molecules required? Researchers use an enormous database of substances for this. There are special programs capable of finding a needle in a haystack; they use quantum chemistry approximations to predict the place and force of attraction between a molecular “plug” and a protein. However, databases only store the shape of a substance; information about atom and bond states is also needed for an accurate prediction. Determining these states is what Knodle does. With the help of the new technology, the search area can be reduced from hundreds of thousands to just a hundred. These one hundred can then be tested to find drugs such as Raltegravir – which has actively been used for HIV prevention since 2011.

From science lessons at school everyone is used to seeing organic substances as letters with sticks (substance structure), knowing that in actual fact there are no sticks. Every stick is a bond between electrons which obeys the laws of quantum chemistry. In the case of one simple molecule, like the one in the diagram [diagram follows], the experienced chemist intuitively knows the hybridizations of every atom (the number of neighboring atoms which it is connected to) and after a few hours looking at reference books, he or she can reestablish all the bonds. They can do this because they have seen hundreds and hundreds of similar substances and know that if oxygen is “sticking out like this”, it almost certainly has a double bond. In their research, Maria Kadukova, a MIPT student, and Sergei Grudinin, a researcher from Inria research center located in Grenoble, France, decided to pass on this intuition to a computer by using machine learning.

Compare “A solid hollow object with a handle, opening at the top and an elongation at the side, at the end of which there is another opening” and “A vessel for the preparation of tea”. Both of them describe a teapot rather well, but the latter is simpler and more believable. The same is true for machine learning: the best algorithm for learning is the simplest. This is why the researchers chose to use a nonlinear support vector machine (SVM), a method which has proven itself in recognizing handwritten text and images. On the input it was given the positions of neighboring atoms and on the output collected hybridization.

Good learning needs a lot of examples and the scientists did this using 7605 substances with known structures and atom states. “This is the key advantage of the program we have developed, learning from a larger database gives better predictions. Knodle is now one step ahead of similar programs: it has a margin of error of 3.9%, while for the closest competitor this figure is 4.7%”, explains Maria Kadukova. And that is not the only benefit. The software package can easily be modified for a specific problem. For example, Knodle does not currently work with substances containing metals, because those kind of substances are rather rare. But if it turns out that a drug for Alzheimer’s is much more effective if it has metal, the only thing needed to adapt the program is a database with metallic substances. We are now left to wonder what new drug will be found to treat a previously incurable disease.

Scientists from MIPT’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs. Credit: MIPT Press Office
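For anyone curious what ‘passing that intuition to a computer’ might look like in practice, here is a deliberately simplified Python sketch using scikit-learn’s support vector classifier. The features (a neighbour count and two neighbour distances) and the tiny hand-made training set are mine for illustration only; Knodle’s actual descriptors, training data, and code are described in the paper,

```python
# Toy version of the idea behind Knodle: train a nonlinear SVM to predict
# an atom's hybridization from simple geometric features of its neighbours.
# The features and data here are invented for illustration only.
from sklearn.svm import SVC

# feature vector per atom: [number of bonded neighbours,
#                           mean neighbour distance (angstrom),
#                           nearest neighbour distance (angstrom)]
X_train = [
    [4, 1.53, 1.09],   # e.g. a saturated carbon        -> sp3
    [4, 1.51, 1.09],
    [3, 1.40, 1.08],   # e.g. an aromatic/planar carbon -> sp2
    [3, 1.38, 1.07],
    [2, 1.21, 1.06],   # e.g. a carbon in a triple bond -> sp
    [2, 1.20, 1.06],
]
y_train = ["sp3", "sp3", "sp2", "sp2", "sp", "sp"]

clf = SVC(kernel="rbf", gamma="scale")   # nonlinear support vector machine
clf.fit(X_train, y_train)

unknown_atom = [[3, 1.39, 1.08]]
print(clf.predict(unknown_atom))         # should come back as ['sp2']
```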

Here’s a link to and a citation for the paper,

Knodle: A Support Vector Machines-Based Automatic Perception of Organic Molecules from 3D Coordinates by Maria Kadukova and Sergei Grudinin. J. Chem. Inf. Model., 2016, 56 (8), pp 1410–1419 DOI: 10.1021/acs.jcim.5b00512 Publication Date (Web): July 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who’s seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper: “Artificial Intelligence and Life in 2030” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Interactive chat with Amy Krouse Rosenthal’s memoir

It’s nice to see writers using technology in their literary work to create new forms although I do admit to a pang at the thought that this might have a deleterious effect on book clubs as the headline (Ditch Your Book Club: This AI-Powered Memoir Wants To Chat With You) for Claire Zulkey’s Sept. 1, 2016 article for Fast Company suggests,

Instead of attempting to write a book that would defeat the distractions of a smartphone, author Amy Krouse Rosenthal decided to make the two kiss and make up with her new memoir.

“I have this habit of doing interactive stuff,” says the Chicago writer and filmmaker, whose previous projects have enticed readers to communicate via email, website, or in person, and before all that, a P.O. box. As she pondered a logical follow-up to her 2005 memoir Encyclopedia of an Ordinary Life (which, among other prompts, offered readers a sample of her favorite perfume if they got in touch via her website), Rosenthal hit upon the concept of a textbook. The idea appealed to her, for its bibliographical elements and as a new way of conversing with her readers. And also, of course, because of the double meaning of the title. Textbook, which went on sale August 9 [2016], is a book readers can send texts to, and the book will text them back. “When I realized the wordplay opportunity, and that nobody had done that before, I loved it,” Rosenthal says. “Most people would probably be reading with a phone in their hands anyway.”

Rosenthal may be best known for the dozens of children’s books she’s published, but Encyclopedia was listed in Amazon’s top 10 memoirs of the decade for its alphabetized musings gathered together under the premise, “I have not survived against all odds. I have not lived to tell. I have not witnessed the extraordinary. This is my story.” Her writing often celebrates the serendipitous moment, the smallness of our world, the misheard sentence that was better than the real one—always in praise of the flashes of magic in our mundane lives. Textbook, Rosenthal says, is not a prequel or a sequel but “an equal” to Encyclopedia. It is organized by subject, and Rosenthal shares her favorite anagrams, admits a bias against people who sign emails with just their initials, and exhorts readers, next time they are at a party, to attempt to write a “group biography.” …

… when she sent the book out to publishers, Rosenthal explains, “Pretty much everybody got it. Nobody said, ‘We want to do this book but we don’t want to do that texting thing.’”

Zulkey also covers some of the nitty gritty elements of getting this book published and developed,

After she signed with Dutton, Rosenthal’s editors got in touch with OneReach, a Denver company that specializes in providing multichannel, conversational bot experiences. “This book is a great illustration of what we’re going to see a lot more of in the future,” says OneReach cofounder Robb Wilson. “It’s conversational and has some basic AI components in it.”

Textbook has nearly 20 interactive elements to it, some of which involve email or going to the book’s website, but many are purely text-message-based. One example is a prompt to send in good thoughts, which Rosenthal will then print and send out in a bottle to sea. Another asks readers to text photos of a rainbow they are witnessing in real time. The rainbow and its location are then posted on the book’s website in a live rainbow feed. And yet another puts out a call for suggestions for matching tattoos that at least one reader and Rosenthal will eventually get. Three weeks after its publication date, the book has received texts from over 600 readers.

Nearly anyone who has received a text from Walgreens saying a prescription is ready, gotten an appointment confirmation from a dentist, or even voted on American Idol has interacted with the type of technology OneReach handles. But behind the scenes of that technology were artistic quandaries that Rosenthal and the team had to solve or work around.

For instance, the reader has the option to pick and choose which prompts to engage with and in what order, which is not typically how text chains work. “Normally, with an automated text message you’re in kind of a lineal format,” says Justin Biel, who built Textbook’s system and made sure that if you skipped the best-wishes text, for instance, and went right to the rainbow, you wouldn’t get an error message. At one point Rosenthal and her assistant manually tried every possible permutation of text to confirm that there were no hitches jumping from one prompt to another.

Engineers also made lots of revisions so that the system felt like readers were having a realistic text conversation with a person, rather than a bot or someone who had obviously written out the messages ahead of time. “It’s a fine line between robotic and poetic,” Rosenthal says.

Unlike your Instacart shopper whom you hope doesn’t need to text to ask you about substitutions, Textbook readers will never receive a message alerting them to a new Rosenthal signing or a discount at Amazon. No promo or marketing messages, ever. “In a way, that’s a betrayal,” Wilson says. Texting, to him, is “a personal channel, and to try to use that channel for blatant reasons, I think, hurts you more than it helps you.”
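The article doesn’t show any of OneReach’s code, but the non-linear routing problem Biel describes — letting a reader jump to any prompt in any order without ever hitting an error message — can be illustrated with a small keyword-based dispatcher. Everything below (keywords, replies, handler names) is hypothetical,

```python
# Hypothetical sketch of non-linear text-message routing: each incoming
# message is matched to a prompt by keyword, so readers can engage with
# the book's prompts in any order (or skip them) without error replies.
# Not OneReach's system -- just an illustration of the routing idea.

def handle_good_thoughts(msg: str) -> str:
    return "Thank you -- your good thought will go to sea in a bottle."

def handle_rainbow(msg: str) -> str:
    return "Lovely! Your rainbow will appear in the live rainbow feed."

def handle_tattoo(msg: str) -> str:
    return "Noted -- your matching-tattoo suggestion is in the running."

ROUTES = {
    "thought": handle_good_thoughts,
    "rainbow": handle_rainbow,
    "tattoo": handle_tattoo,
}

def route(message: str) -> str:
    """Stateless dispatch: no fixed sequence, so skipping one prompt and
    jumping straight to another never produces an error message."""
    text = message.lower()
    for keyword, handler in ROUTES.items():
        if keyword in text:
            return handler(message)
    return "Hello from Textbook! Text 'rainbow', 'thought' or 'tattoo' to play."

if __name__ == "__main__":
    print(route("Here is a photo of a rainbow over the lake"))
    print(route("sending you a good thought"))
```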

Zulkey’s piece is a good read and includes images and an embedded video.

Robots built from living tissue

Biohybrid robots, as they are known, are built from living tissue but not in a Frankenstein kind of way as Victoria Webster, a PhD candidate at Case Western Reserve University (US), explains in her Aug. 9, 2016 essay on The Conversation (also on phys.org as an Aug. 10, 2016 news item; Note: Links have been removed),

Researchers are increasingly looking for solutions to make robots softer or more compliant – less like rigid machines, more like animals. With traditional actuators – such as motors – this can mean using air muscles or adding springs in parallel with motors. …

But there’s a growing area of research that’s taking a different approach. By combining robotics with tissue engineering, we’re starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They’re safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Webster explains how these biobots are built,

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot – a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells’ contraction forces will be applied in all directions, making the device inefficient at best.

So to better harness the cells’ power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.

Researchers sometimes mimic animals when creating their biobots (Note: Links have been removed),

Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light – one side of the ray had cells that would respond to one frequency, the other side’s cells responded to another.
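That frequency-selective scheme — one light frequency drives the cells on one side of the ray, a second frequency drives the other side — can be pictured as a very simple differential controller. The sketch below is a conceptual toy only; the frequencies, intensities, and turning rule are all invented, not the Harvard group’s published protocol,

```python
# Conceptual toy of frequency-selective steering: cells on the left side of
# the ray-shaped biobot respond to one light frequency, cells on the right
# to another, so steering comes from how strongly each frequency is applied.
# All numbers and the turning rule are invented; this is not the Harvard
# group's published control protocol.

LEFT_FREQ_HZ, RIGHT_FREQ_HZ = 1.5, 2.5   # hypothetical per-side frequencies

def light_command(desired_turn: str):
    """Return (left_intensity, right_intensity) in arbitrary units.
    Driving one side harder makes that fin beat more strongly, which
    (in this toy) turns the robot toward the opposite side."""
    if desired_turn == "left":
        return 0.4, 1.0    # right fin works harder -> veer left
    if desired_turn == "right":
        return 1.0, 0.4    # left fin works harder -> veer right
    return 1.0, 1.0        # balanced -> swim straight

if __name__ == "__main__":
    for turn in ("straight", "left", "right"):
        left, right = light_command(turn)
        print(f"{turn}: left light ({LEFT_FREQ_HZ} Hz) = {left}, "
              f"right light ({RIGHT_FREQ_HZ} Hz) = {right}")
```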

Amazing, eh? And, this is quite a recent video; it was published on YouTube on July 7, 2016.

Webster goes on to describe work designed to make these robots hardier and more durable so they can leave the laboratory,

… Here at Case Western Reserve University, we’ve recently begun to investigate … by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.

We’ve been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices are large enough to carry a small payload – approximately 1.5 inches long and one inch wide.

Webster has written a fascinating piece and, if you have time, I encourage you to read it in its entirety.