Category Archives: robots

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who has seen those film shorts from the 1950s and ’60s speculating exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Interactive chat with Amy Krouse Rosenthal’s memoir

It’s nice to see writers using technology in their literary work to create new forms, although I do admit to a pang at the thought that this might have a deleterious effect on book clubs, as the headline (Ditch Your Book Club: This AI-Powered Memoir Wants To Chat With You) of Claire Zulkey’s Sept. 1, 2016 article for Fast Company suggests,

Instead of attempting to write a book that would defeat the distractions of a smartphone, author Amy Krouse Rosenthal decided to make the two kiss and make up with her new memoir.

“I have this habit of doing interactive stuff,” says the Chicago writer and filmmaker, whose previous projects have enticed readers to communicate via email, website, or in person, and before all that, a P.O. box. As she pondered a logical follow-up to her 2005 memoir Encyclopedia of an Ordinary Life (which, among other prompts, offered readers a sample of her favorite perfume if they got in touch via her website), Rosenthal hit upon the concept of a textbook. The idea appealed to her, for its bibliographical elements and as a new way of conversing with her readers. And also, of course, because of the double meaning of the title. Textbook, which went on sale August 9 [2016], is a book readers can send texts to, and the book will text them back. “When I realized the wordplay opportunity, and that nobody had done that before, I loved it,” Rosenthal says. “Most people would probably be reading with a phone in their hands anyway.”

Rosenthal may be best known for the dozens of children’s books she’s published, but Encyclopedia was listed in Amazon’s top 10 memoirs of the decade for its alphabetized musings gathered together under the premise, “I have not survived against all odds. I have not lived to tell. I have not witnessed the extraordinary. This is my story.” Her writing often celebrates the serendipitous moment, the smallness of our world, the misheard sentence that was better than the real one—always in praise of the flashes of magic in our mundane lives. Textbook, Rosenthal says, is not a prequel or a sequel but “an equal” to Encyclopedia. It is organized by subject, and Rosenthal shares her favorite anagrams, admits a bias against people who sign emails with just their initials, and exhorts readers, next time they are at a party, to attempt to write a “group biography.” …

… when she sent the book out to publishers, Rosenthal explains, “Pretty much everybody got it. Nobody said, ‘We want to do this book but we don’t want to do that texting thing.’”

Zulkey also covers some of the nitty gritty elements of getting this book published and developed,

After she signed with Dutton, Rosenthal’s editors got in touch with OneReach, a Denver company that specializes in providing multichannel, conversational bot experiences. “This book is a great illustration of what we’re going to see a lot more of in the future,” says OneReach cofounder Robb Wilson. “It’s conversational and has some basic AI components in it.”

Textbook has nearly 20 interactive elements to it, some of which involve email or going to the book’s website, but many are purely text-message-based. One example is a prompt to send in good thoughts, which Rosenthal will then print and send out in a bottle to sea. Another asks readers to text photos of a rainbow they are witnessing in real time. The rainbow and its location are then posted on the book’s website in a live rainbow feed. And yet another puts out a call for suggestions for matching tattoos that at least one reader and Rosenthal will eventually get. Three weeks after its publication date, the book has received texts from over 600 readers.

Nearly anyone who has received a text from Walgreens saying a prescription is ready, gotten an appointment confirmation from a dentist, or even voted on American Idol has interacted with the type of technology OneReach handles. But behind the scenes of that technology were artistic quandaries that Rosenthal and the team had to solve or work around.

For instance, the reader has the option to pick and choose which prompts to engage with and in what order, which is not typically how text chains work. “Normally, with an automated text message you’re in kind of a lineal format,” says Justin Biel, who built Textbook’s system and made sure that if you skipped the best-wishes text, for instance, and went right to the rainbow, you wouldn’t get an error message. At one point Rosenthal and her assistant manually tried every possible permutation of text to confirm that there were no hitches jumping from one prompt to another.
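
For the technically curious, the trick Biel describes boils down to routing each incoming text by its content rather than by its position in a fixed sequence. Here is a minimal Python sketch of that idea; the prompt keywords and replies are my own invention for illustration, not OneReach’s actual system.

```python
# Minimal sketch of non-linear prompt routing for a text-message bot.
# Prompt keywords and replies are hypothetical, for illustration only.

PROMPTS = {
    "wishes": "Thanks! Your good thought will go out to sea in a bottle.",
    "rainbow": "Lovely! Your rainbow photo has been added to the live feed.",
    "tattoo": "Noted! Your matching-tattoo idea is in the running.",
}

def route_message(text: str) -> str:
    """Match an incoming text against known prompts, in any order."""
    keyword = text.strip().lower()
    if keyword in PROMPTS:
        return PROMPTS[keyword]
    # Unknown input falls through gracefully instead of raising an error.
    return "Hmm, I didn't catch that. Try 'wishes', 'rainbow' or 'tattoo'."

if __name__ == "__main__":
    # A reader can jump straight to 'rainbow' without doing 'wishes' first.
    for incoming in ["rainbow", "tattoo", "something else", "wishes"]:
        print(f"> {incoming}\n{route_message(incoming)}")
```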

Engineers also made lots of revisions so that the system felt like readers were having a realistic text conversation with a person, rather than a bot or someone who had obviously written out the messages ahead of time. “It’s a fine line between robotic and poetic,” Rosenthal says.

Unlike your Instacart shopper whom you hope doesn’t need to text to ask you about substitutions, Textbook readers will never receive a message alerting them to a new Rosenthal signing or a discount at Amazon. No promo or marketing messages, ever. “In a way, that’s a betrayal,” Wilson says. Texting, to him, is “a personal channel, and to try to use that channel for blatant reasons, I think, hurts you more than it helps you.”

Zulkey’s piece is a good read and includes images and an embedded video.

Robots built from living tissue

Biohybrid robots, as they are known, are built from living tissue but not in a Frankenstein kind of way, as Victoria Webster, a PhD candidate at Case Western Reserve University (US), explains in her Aug. 9, 2016 essay on The Conversation (also on phys.org as an Aug. 10, 2016 news item; Note: Links have been removed),

Researchers are increasingly looking for solutions to make robots softer or more compliant – less like rigid machines, more like animals. With traditional actuators – such as motors – this can mean using air muscles or adding springs in parallel with motors. …

But there’s a growing area of research that’s taking a different approach. By combining robotics with tissue engineering, we’re starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They’re safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Webster explains how these biobots are built,

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot – a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells’ contraction forces will be applied in all directions, making the device inefficient at best.

So to better harness the cells’ power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.
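
A quick way to see why the alignment matters is to add up the contraction forces as vectors: randomly oriented cells largely cancel one another out, while micropatterned (aligned) cells pull together. The short Python sketch below illustrates the principle only; the cell counts and forces are made up and are not taken from the research described.

```python
# Toy comparison of net contraction force: random vs. aligned cell orientations.
# Cell counts and per-cell forces are invented for illustration.
import math
import random

def net_force(angles_rad, force_per_cell=1.0):
    """Magnitude of the vector sum of contraction forces at given orientations."""
    fx = sum(force_per_cell * math.cos(a) for a in angles_rad)
    fy = sum(force_per_cell * math.sin(a) for a in angles_rad)
    return math.hypot(fx, fy)

random.seed(0)
n_cells = 1000

# Unguided cells: contraction directions scattered over 360 degrees.
random_angles = [random.uniform(0, 2 * math.pi) for _ in range(n_cells)]

# Micropatterned cells: all contracting along the same axis.
aligned_angles = [0.0] * n_cells

print("net force, random orientations:", round(net_force(random_angles), 1))
print("net force, aligned (patterned):", round(net_force(aligned_angles), 1))
# The aligned case scales with n_cells; the random case only with ~sqrt(n_cells).
```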

Researchers sometimes mimic animals when creating their biobots (Note: Links have been removed),

Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light – one side of the ray had cells that would respond to one frequency, the other side’s cells responded to another.

Amazing, eh? And, this is quite a recent video; it was published on YouTube on July 7, 2016.

Webster goes on to describe work designed to make these robots hardier and more durable so they can leave the laboratory,

… Here at Case Western Reserve University, we’ve recently begun to investigate … by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.

We’ve been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices are large enough to carry a small payload – approximately 1.5 inches long and one inch wide.

Webster has written a fascinating piece and, if you have time, I encourage you to read it in its entirety.

Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about machine learning and curbing police violence has appeared in New Scientist magazine (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which officers who are stressed behave aggressively, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.

According to Hodson, there are already systems, both human and algorithmic, in place but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”
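
For readers unfamiliar with how such systems are compared, the figures above come down to two pieces of classifier bookkeeping: how many real adverse incidents get flagged in advance, and what fraction of flagged officers never go on to an incident (the article’s ‘false positive rate’). Here is a toy Python sketch of that arithmetic; the counts are invented to roughly mirror the reported percentages and are not the Charlotte-Mecklenburg data.

```python
# Toy comparison of an early-intervention model against a baseline system.
# All counts are invented; they only roughly mirror the reported percentages.

def recall(true_positives, actual_incidents):
    """Fraction of real adverse incidents the system flagged in advance."""
    return true_positives / actual_incidents

def false_positive_rate(false_positives, flagged_total):
    """Fraction of flagged officers who did not go on to an adverse incident."""
    return false_positives / flagged_total

actual_incidents = 83
baseline = {"tp": 43, "fp": 172, "flagged": 215}   # hypothetical existing system
ml_model = {"tp": 48, "fp": 57, "flagged": 105}    # hypothetical ML system

for name, s in [("baseline", baseline), ("ML model", ml_model)]:
    print(name,
          "recall:", round(recall(s["tp"], actual_incidents), 2),
          "false-positive rate:", round(false_positive_rate(s["fp"], s["flagged"]), 2))
```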

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

h/t Terkko Navigator

I have written about police and violence here in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper, July 25, 2016 posting titled: Robots, Dallas (US), ethics, and killing.

Robots judge a beauty contest

I have a lot of respect for good PR gimmicks and a beauty contest judged by robots (or more accurately, artificial intelligence) is a provocative idea wrapped up in a good public relations (PR) gimmick. A July 12, 2016 In Silico Medicine press release on EurekAlert reveals more,

Beauty.AI 2.0, a platform where human beauty is evaluated by a jury of robots and algorithm developers compete on novel applications of machine intelligence to perception, is supported by Ernst and Young.

“We were very impressed by E&Y’s recent advertising campaign with a robot hand holding a beautiful butterfly and a slogan “How human is your algorithm?” and immediately invited them to participate. This slogan captures the very essence of our contest, which is constantly exploring new ideas in machine perception of humans”, said Anastasia Georgievskaya, Managing Scientist at Youth Laboratories, the organizer of Beauty.AI.

The Beauty.AI contest is supported by many innovative companies from the US, Europe, and Asia, with some of the top cosmetics companies participating in collaborative research projects. Imagene Labs, one of the leaders in linking facial and biological information, based in Singapore and operating across Asia, is a gold sponsor and research partner of the contest.

There are many approaches to evaluating human beauty. Features like symmetry, pigmentation, pimples, wrinkles may play a role and similarity to actors, models and celebrities may be used in the calculation of the overall score. However, other innovative approaches have been proposed. A robot developed by Insilico Medicine compares the chronological age with the age predicted by a deep neural network. Another team is training an artificially-intelligent system to identify features that contribute to the popularity of the people on dating sites.
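
As I understand it, the Insilico Medicine approach amounts to scoring an entrant by the gap between the age a model predicts from a photograph and the entrant’s chronological age. Here is a bare-bones Python sketch of that scoring idea, with a stand-in predictor where the real system runs a trained deep neural network.

```python
# Sketch of an 'age gap' score: model-predicted age vs. chronological age.
# predict_age_from_photo is a placeholder for a real deep-learning model.

def predict_age_from_photo(photo_features):
    """Stand-in predictor; a real system would run a trained neural network."""
    # Here we pretend the extracted features encode an apparent age directly.
    return photo_features["apparent_age"]

def age_gap_score(photo_features, chronological_age):
    """Negative values mean the model thinks you look younger than you are."""
    return predict_age_from_photo(photo_features) - chronological_age

entrant = {"apparent_age": 29.5}                       # hypothetical features
print(age_gap_score(entrant, chronological_age=34))    # -4.5: looks ~4.5 years younger
```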

“We look forward to collaborating with the Youth Laboratories team to create new AI algorithms. These will eventually allow consumers to objectively evaluate how well their wellness interventions – such as diet, exercise, skincare and supplements – are working. Based on the results they can then fine tune their approach to further improve their well-being and age better”, said Jia-Yi Har, Vice President of Imagene Labs.

The contest is open to anyone with a modern smartphone running either Android or iOS operating system, and Beauty.AI 2.0 app can be downloaded for free from either Google or Apple markets. Programmers and companies can participate by submitting their algorithm to the organizers through the Beauty.AI website.

“The beauty of Beauty.AI pageants is that algorithms are much more impartial than humans, and we are trying to prevent any racial bias and run the contest in multiple age categories. Most of the popular beauty contests discriminate by age, gender, marital status, body weight and race. Algorithms are much less partial”, said Alex Shevtsov, CEO of Youth Laboratories.

Very interesting take on beauty and bias. I wonder if they’re building change into their algorithms. After all, standards for beauty don’t remain static, they change over time.

Unfortunately, that question isn’t asked in Wency Leung’s July 4, 2016 article on the robot beauty contest for the Globe and Mail, but she does provide more details about the contest and insight into the world of international cosmetics companies and their use of technology,

Teaching computers about aesthetics involves designing sophisticated algorithms to recognize and measure features like wrinkles, face proportions, blemishes and skin colour. And the beauty industry is rapidly embracing these high-tech tools to respond to consumers’ demand for products that suit their individual tastes and attributes.

Companies like Sephora and Avon, for instance, are using face simulation technology to provide apps that allow customers to virtually try on and shop for lipsticks and eye shadows using their mobile devices. Skincare producers are using similar technologies to track and predict the effects of serums and creams on various skin types. And brands like L’Oréal’s Lancôme are using facial analysis to read consumers’ skin tones to create personalized foundations.

“The more we’re able to use these tools like augmented reality [and] artificial intelligence to provide new consumer experiences, the more we can move to customizing and personalizing products for every consumer around the world, no matter what their skin tone is, no matter where they live, no matter who they are,” says Guive Balooch, global vice-president of L’Oréal’s technology incubator.

Balooch was tasked with starting up the company’s tech research hub four years ago, with a mandate to predict and invent solutions to how consumers would choose and use products in the future. Among its innovations, his team has come up with the Makeup Genius app, a virtual mirror that allows customers to try on products on a mobile screen, and a device called My UV Patch, a sticker sensor that users wear on their skin, which informs them through an app how much UV exposure they get.

These tools may seem easy enough to use, but their simplicity belies the work that goes on behind the scenes. To create the Makeup Genius app, for example, Balooch says the developers sought expertise from the animation industry to enable users to see themselves move onscreen in real time. The developers also brought in hundreds of consumers with different skin tones to test real products in the lab, and they tested the app on some 100,000 images in more than 40 lighting conditions, to ensure the colours of makeup products appeared the same in real life as they did onscreen, Balooch says.

The article is well worth reading in its entirety.

For the seriously curious, you can find Beauty AI here, In Silico Medicine here, and Imagene Labs here. I cannot find a website for Youth Laboratories featuring Anastasia Georgievskaya.

I last wrote about In Silico Medicine in a May 31, 2016 post about deep learning, wrinkles, and aging.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped, as there was an additional shooting of police officers in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally: Robots, Dallas, ethics, and killing. In the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016, and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driven cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question about the ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading for the Dallas Police Department’s actions, as well as, asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles, if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

Korea Advanced Institute of Science and Technology (KAIST) at summer 2016 World Economic Forum in China

KAIST moves from the Ideas Lab at the 2016 World Economic Forum in Davos to offering expertise at the 2016 World Economic Forum in Tianjin, China, which is taking place from June 26 – 28, 2016.

Here’s more from a June 24, 2016 KAIST news release on EurekAlert,

Scientific and technological breakthroughs are more important than ever as a key agent to drive social, economic, and political changes and advancements in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms to address issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a., the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.

Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.

Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.

In the first session entitled “What If Drugs Are Printed from the Internet?” Professor Lee will discuss the future of medicine being impacted by advancements in biotechnology and 3D printing technology with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, the Director of Strategy at Wellcome Trust in the United Kingdom. The discussants will note recent developments made in the way patients receive their medicine, for example, downloading drugs directly from the internet and the production of yeast strains to make opioids for pain treatment through systems metabolic engineering, and predict how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.

In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.

During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying their various cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing control sticks and guiding an airplane from takeoff to landing.

In addition, the two professors will join Professor Lee, who is also a moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields from machine learning to autonomous robotics including unmanned vehicles and drones.

Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.

KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”

I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possibly other people’s) confusion as to what the Fourth Industrial Revolution might be in my Dec. 3, 2015 posting.

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, c) or wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.
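
In machine learning terms, the pharmacy example is reward shaping: actions that match what story protagonists do earn positive reward, while shortcuts like theft earn a penalty. Here is a deliberately tiny Python sketch of the idea; the action names and reward values are mine, and the real Quixote system derives its reward signal from crowdsourced stories rather than a hand-written table.

```python
# Toy reward-shaping example inspired by the pharmacy scenario.
# Action names and reward values are hypothetical, for illustration only.

STORY_REWARDS = {
    "wait_in_line": +1,
    "interact_politely": +1,
    "pay_for_prescription": +2,
    "grab_medicine_and_leave": -10,   # fast, but socially unacceptable
}

def episode_reward(actions):
    """Total reward for a sequence of chosen actions."""
    return sum(STORY_REWARDS[a] for a in actions)

antisocial_policy = ["grab_medicine_and_leave"]
story_aligned_policy = ["wait_in_line", "interact_politely", "pay_for_prescription"]

print("antisocial policy reward:   ", episode_reward(antisocial_policy))     # -10
print("story-aligned policy reward:", episode_reward(story_aligned_policy))  # +4
```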

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals; and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

A Victoria & Albert Museum installation integrates biomimicry, robotic fabrication and new materials research in architecture

The Victoria & Albert Museum (V&A) in London, UK, opened its Engineering Season show on May 18, 2016 (it runs until Nov. 6, 2016) featuring a robot installation and an exhibition putting the spotlight on Ove Arup, “the most significant engineer of the 20th century” according to the V&A’s May ??, 2016 press release,

The first major retrospective of the most influential engineer of the 20th century and a site specific installation inspired by nature and fabricated by robots will be the highlights of the V&A’s first ever Engineering Season, complemented by displays, events and digital initiatives dedicated to global engineering design. The V&A Engineering Season will highlight the importance of engineering in our daily lives and consider engineers as the ‘unsung heroes’ of design, who play a vital and creative role in the creation of our built environment.

Before launching into the robot/biomimicry part of this story, here’s a very brief description of why Ove Arup is considered so significant and influential,

Engineering the World: Ove Arup and the Philosophy of Total Design will explore the work and legacy of Ove Arup (1895-1988), … . Ove pioneered a multidisciplinary approach to design that has defined the way engineering is understood and practiced today. Spanning 100 years of engineering and architectural design, the exhibition will be guided by Ove’s writings about design and include his early projects, such as the Penguin Pool at London Zoo, as well as renowned projects by the firm including Sydney Opera House [Australia] and the Centre Pompidou in Paris. Arup’s collaborations with major architects of the 20th century pioneered new approaches to design and construction that remain influential today, with the firm’s legacy visible in many buildings across London and around the world. It will also showcase recent work by Arup, from major infrastructure projects like Crossrail and novel technologies for acoustics and crowd flow analysis, to engineering solutions for open source housing design.

Robots, biomimicry and the Elytra Filament Pavilion

A May 18, 2016 article by Tim Master for BBC (British Broadcasting Corporation) news online describes the pavilion installation,

A robot has taken up residence at the Victoria & Albert Museum to construct a new installation at its London gardens.

The robot – which resembles something from a car assembly line – will build new sections of the Elytra Filament Pavilion over the coming months.

The futuristic structure will grow and change shape using data based on how visitors interact with it.

Elytra’s canopy is made up of 40 hexagonal cells – made from strips of carbon and glass fibre – which have been tightly wound into shape by the computer-controlled Kuka robot.

Each cell takes about three hours to build. On certain days, visitors to the V&A will be able to watch the robot create new cells that will be added to the canopy.

Here are some images made available by V&A,

Elytra Filament Pavilion arriving at the V&A, 2016. © Victoria and Albert Museum, London

Kuka robot weaving Elytra Filament Pavilion cell fibres, 2016. © Victoria and Albert Museum, London

[downloaded from http://www.bbc.com/news/entertainment-arts-36322731]

Elytra Filament Pavilion at the V&A, 2016. © Victoria and Albert Museum, London

Here’s more detail from the V&A’s Elytra Filament Pavilion installation description,

Elytra Filament Pavilion has been created by experimental German architect Achim Menges with Moritz Dörstelmann, structural engineer Jan Knippers and climate engineer Thomas Auer.

Menges and Knippers are leaders of research institutes at the University of Stuttgart that are pioneering the integration of biomimicry, robotic fabrication and new materials research in architecture. This installation emerges from their ongoing research projects and is their first-ever major commission in the UK.

The pavilion explores the impact of emerging robotic technologies on architectural design, engineering and making.

Its design is inspired by lightweight construction principles found in nature, the filament structures of the forewing shells of flying beetles known as elytra. Made of glass and carbon fibre, each component of the undulating canopy is produced using an innovative robotic winding technique developed by the designers. Like beetle elytra, the pavilion’s filament structure is both very strong and very light – spanning over 200 m², it weighs less than 2.5 tonnes.

Elytra is a responsive shelter that will grow over the course of the V&A Engineering Season. Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion and monitor the structure’s behaviour, ultimately informing how and where the canopy grows. During a series of special events as part of the Engineering Season, visitors will have the opportunity to witness the pavilion’s construction live, as new components are fabricated on-site by a Kuka robot.
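
The growth rule itself could be quite simple. Here is a speculative Python sketch of one way the occupancy data might inform where the next cell goes; the cell names, visitor counts and ‘grow where people gather’ rule are all my assumptions, not the designers’ actual logic.

```python
# Speculative sketch: choose where to add the next canopy cell from sensor data.
# Cell identifiers, counts and the growth rule are assumptions for illustration.

occupancy_by_cell = {"cell_03": 412, "cell_17": 980, "cell_22": 151, "cell_31": 764}

def next_growth_site(occupancy):
    """Grow next to the most-occupied cell (an assumed, not actual, rule)."""
    return max(occupancy, key=occupancy.get)

print("add the next hexagonal cell adjacent to:", next_growth_site(occupancy_by_cell))
```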

Unfortunately, I haven’t been able to find more technical detail, particularly about the materials being used in the construction of the pavilion, on the V&A website.
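Still, the quoted figures give a rough sense of scale: under 2.5 tonnes spread over more than 200 square metres works out to an areal weight of no more than about 12.5 kilograms per square metre.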

One observation: I’m a little uncomfortable with how they’re gathering the data (“Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion … .”). It sounds like surveillance to me.

Nonetheless, the Engineering Season promises a very intriguing approach to fulfilling the V&A’s mandate as a museum dedicated to decorative arts and design.

Ingestible origami robot gets one step closer

Fiction has been exploring the idea of tiny, ingestible robots that enter the human body for decades, with varying degrees of seriousness (Fantastic Voyage and Innerspace are two movie examples). The concept has come a step closer to being realized, as per a May 12, 2016 news item on phys.org,

In experiments involving a simulation of the human esophagus and stomach, researchers at MIT [Massachusetts Institute of Technology], the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.

A May 12, 2016 MIT news release (also on EurekAlert), which originated the news item, provides some fascinating depth to this story (Note: Links have been removed),

The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.

“It’s really exciting to see our small origami robots doing something with potentially important applications to health care,” says Rus, who also directs MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “For applications inside the body, we need a small, controllable, untethered robot system. It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”

Although the new robot is a successor to one reported at the same conference last year, the design of its body is significantly different. Like its predecessor, it can propel itself using what’s called a “stick-slip” motion, in which its appendages stick to a surface through friction when it executes a move, but slip free again when its body flexes to change its weight distribution.

Also like its predecessor — and like several other origami robots from the Rus group — the new robot consists of two layers of structural material sandwiching a material that shrinks when heated. A pattern of slits in the outer layers determines how the robot will fold when the middle layer contracts.

Material difference

The robot’s envisioned use also dictated a host of structural modifications. “Stick-slip only works when, one, the robot is small enough and, two, the robot is stiff enough,” says Guitron [Steven Guitron, a graduate student in mechanical engineering]. “With the original Mylar design, it was much stiffer than the new design, which is based on a biocompatible material.”

To compensate for the biocompatible material’s relative malleability, the researchers had to come up with a design that required fewer slits. At the same time, the robot’s folds increase its stiffness along certain axes.

But because the stomach is filled with fluids, the robot doesn’t rely entirely on stick-slip motion. “In our calculation, 20 percent of forward motion is by propelling water — thrust — and 80 percent is by stick-slip motion,” says Miyashita [Shuhei Miyashita, who was a postdoc at CSAIL when the work was done and is now a lecturer in electronics at the University of York, England]. “In this regard, we actively introduced and applied the concept and characteristics of the fin to the body design, which you can see in the relatively flat design.”
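That 80/20 split lends itself to a back-of-the-envelope model of how far the robot gets per actuation cycle. Here is a minimal Python sketch of that decomposition; the step length and cycle count are numbers I’ve made up for illustration, and the real fluid and body dynamics are of course far more involved than a fixed ratio.

```python
# Illustrative decomposition of the origami robot's forward motion per cycle,
# using the 80% stick-slip / 20% water-thrust split quoted in the MIT release.
# The step length and cycle count below are invented numbers, for illustration only.

STICK_SLIP_FRACTION = 0.8   # share of forward motion from stick-slip (per the release)
THRUST_FRACTION = 0.2       # share of forward motion from propelling water

def displacement_per_cycle(total_step_mm: float = 1.0) -> dict[str, float]:
    """Split one cycle's forward displacement into its two contributions."""
    return {
        "stick_slip_mm": STICK_SLIP_FRACTION * total_step_mm,
        "thrust_mm": THRUST_FRACTION * total_step_mm,
    }

def distance_travelled(cycles: int, total_step_mm: float = 1.0) -> float:
    """Total forward travel, assuming every cycle contributes the same step."""
    return cycles * total_step_mm

if __name__ == "__main__":
    print(displacement_per_cycle())          # {'stick_slip_mm': 0.8, 'thrust_mm': 0.2}
    print(distance_travelled(cycles=300))    # 300 mm under these toy assumptions
```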

It also had to be possible to compress the robot enough that it could fit inside a capsule for swallowing; similarly, when the capsule dissolved, the forces acting on the robot had to be strong enough to cause it to fully unfold. Through a design process that Guitron describes as “mostly trial and error,” the researchers arrived at a rectangular robot with accordion folds perpendicular to its long axis and pinched corners that act as points of traction.

In the center of one of the forward accordion folds is a permanent magnet that responds to changing magnetic fields outside the body, which control the robot’s motion. The forces applied to the robot are principally rotational. A quick rotation will make it spin in place, but a slower rotation will cause it to pivot around one of its fixed feet. In the researchers’ experiments, the robot uses the same magnet to pick up the button battery.
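The release also spells out a neat control vocabulary: spin the external field quickly and the robot spins in place; rotate it slowly and the robot pivots around a fixed foot, which is how it steps along. Purely as an illustration of that idea, here is a hypothetical controller sketched in Python; the decision rule and the 15-degree tolerance are my own placeholders, not anything MIT has published.

```python
# Hypothetical external-field controller for the origami robot, based only on
# the behaviour described in the MIT release: fast field rotation -> spin in
# place (reorient), slow field rotation -> pivot about a fixed foot (step forward).
# The decision rule and tolerance are invented placeholders.

from enum import Enum

class FieldCommand(Enum):
    FAST_ROTATION = "fast"   # robot spins in place to change heading
    SLOW_ROTATION = "slow"   # robot pivots around a fixed foot, taking a step

def choose_command(heading_error_deg: float, tolerance_deg: float = 15.0) -> FieldCommand:
    """If the robot points well away from the target, reorient first;
    otherwise take a step toward it."""
    if abs(heading_error_deg) > tolerance_deg:
        return FieldCommand.FAST_ROTATION
    return FieldCommand.SLOW_ROTATION

if __name__ == "__main__":
    for error in (90.0, 40.0, 10.0, -5.0):
        print(error, "->", choose_command(error).value)
```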

Porcine precedents

The researchers tested about a dozen different possibilities for the structural material before settling on the type of dried pig intestine used in sausage casings. “We spent a lot of time at Asian markets and the Chinatown market looking for materials,” Li [Shuguang Li, a CSAIL postdoc] says. The shrinking layer is a biodegradable shrink wrap called Biolefin.

To design their synthetic stomach, the researchers bought a pig stomach and tested its mechanical properties. Their model is an open cross-section of the stomach and esophagus, molded from a silicone rubber with the same mechanical profile. A mixture of water and lemon juice simulates the acidic fluids in the stomach.

Every year, 3,500 swallowed button batteries are reported in the U.S. alone. Frequently, the batteries are digested normally, but if they come into prolonged contact with the tissue of the esophagus or stomach, they can cause an electric current that produces hydroxide, which burns the tissue. Miyashita employed a clever strategy to convince Rus that the removal of swallowed button batteries and the treatment of consequent wounds was a compelling application of their origami robot.

“Shuhei bought a piece of ham, and he put the battery on the ham,” Rus says. [emphasis mine] “Within half an hour, the battery was fully submerged in the ham. So that made me realize that, yes, this is important. If you have a battery in your body, you really want it out as soon as possible.”

“This concept is both highly creative and highly practical, and it addresses a clinical need in an elegant way,” says Bradley Nelson, a professor of robotics at the Swiss Federal Institute of Technology Zurich. “It is one of the most convincing applications of origami robots that I have seen.”

I wonder if they ate the ham afterwards.

Happily, MIT has produced a video featuring this ingestible origami robot,

Finally, this team has a couple more members than the previously mentioned Rus, Guitron, Miyashita, and Li,

…  Kazuhiro Yoshida of Tokyo Institute of Technology, who was visiting MIT on sabbatical when the work was done; and Dana Damian of the University of Sheffield, in England.

As Rus notes in the video, the next step will be in vivo (animal) studies.