Tag Archives: RoboEarth

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce that robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.
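The reward scheme the release describes (points for emulating story protagonists) can be illustrated with a toy example. Everything below is invented for illustration: the action names, reward values, and search method are assumptions, not Quixote's actual implementation.

```python
# Toy sketch of story-derived reward shaping (action names and reward
# values are invented for illustration; this is not Quixote's code).
import itertools

ACTIONS = ["grab_and_run", "wait_in_line", "pay", "leave"]

# Rewards keyed by action-sequence prefixes that match what the stories'
# protagonists do; the antisocial shortcut is penalized.
STORY_REWARDS = {
    ("wait_in_line",): 1.0,
    ("wait_in_line", "pay"): 2.0,
    ("wait_in_line", "pay", "leave"): 3.0,
    ("grab_and_run",): -10.0,
}

def episode_reward(sequence):
    """Score a plan prefix by prefix, as the agent would accumulate points."""
    return sum(STORY_REWARDS.get(tuple(sequence[:i + 1]), 0.0)
               for i in range(len(sequence)))

# Search every length-3 plan; the story-aligned plan comes out on top.
best = max(itertools.product(ACTIONS, repeat=3), key=episode_reward)
print(best, episode_reward(best))  # ('wait_in_line', 'pay', 'leave') 6.0
```

The point of the sketch is the shape of the incentive, not the search: robbery "works" but scores badly, so an agent optimizing the story-derived reward converges on the socially acceptable sequence.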

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals, and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

Robo Brain: a new robot learning project

Having covered the RoboEarth project (a European Union funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and most recently in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.
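The nodes-and-edges picture painted above can be made concrete with a toy example. The concepts, relations, and probabilities below are invented for illustration; Robo Brain's actual schema is not spelled out in the release.

```python
# Illustrative toy knowledge graph, not Robo Brain's actual schema.
# Each edge is (source node, relation, destination node, probability).
EDGES = [
    ("coffee_mug", "is_a", "container", 0.95),
    ("coffee_mug", "grasped_by", "handle", 0.90),
    ("coffee_mug", "carried", "upright_when_full", 0.85),
    ("easy_chair", "is_a", "chair", 0.97),
    ("chair", "is_a", "furniture", 0.99),
    ("chair", "affords", "sitting", 0.98),
]

def query(node, min_prob=0.8):
    """Return facts about `node` whose probability clears the threshold."""
    return [(rel, dst, p) for src, rel, dst, p in EDGES
            if src == node and p >= min_prob]

def ancestors(node):
    """Walk is_a edges upward, e.g. easy_chair -> chair -> furniture."""
    chain = []
    while True:
        parents = [dst for src, rel, dst, _ in EDGES
                   if src == node and rel == "is_a"]
        if not parents:
            return chain
        node = parents[0]
        chain.append(node)

print(query("coffee_mug"))       # mug facts above the probability limit
print(ancestors("easy_chair"))   # the levels of abstraction
```

The probability threshold stands in for the release's "how much you can vary it and still be correct": a query only returns facts whose confidence clears the limit.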

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language, but if I understand the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

RoboEarth (robot internet) gets examined in hospital

RoboEarth, sometimes referred to as a robot internet or a robot world wide web, is being tested this week by a team of researchers at Eindhoven University of Technology (Technische Universiteit Eindhoven, Netherlands) and their colleagues at Philips, ETH Zürich, TU München and the universities of Zaragoza and Stuttgart, according to a Jan. 14, 2014 news item on BBC (British Broadcasting Corporation) news online,

A world wide web for robots to learn from each other and share information is being shown off for the first time.

Scientists behind RoboEarth will put it through its paces at Eindhoven University in a mocked-up hospital room.

Four robots will use the system to complete a series of tasks, including serving drinks to patients.

It is the culmination of a four-year project, funded by the European Union.

The eventual aim is that both robots and humans will be able to upload information to the cloud-based database, which would act as a kind of common brain for machines.

There’s a bit more detail in Victoria Turk’s Jan. 13 (?), 2014 article for motherboard.vice.com (Note: A link has been removed),

A hospital-like setting is an ideal test for the project, because where RoboEarth could come in handy is in helping out humans with household tasks. A big problem for robots at the moment is that human environments tend to change a lot, whereas robots are limited to the very specific movements and tasks they’ve been programmed to do.

“To enable robots to successfully lend a mechanical helping hand, they need to be able to deal flexibly with new situations and conditions,” explains a post by the University of Eindhoven. “For example you can teach a robot to bring you a cup of coffee in the living room, but if some of the chairs have been moved the robot won’t be able to find you any longer. Or it may get confused if you’ve just bought a different set of coffee cups.”

And of course, it wouldn’t just be limited to robots working explicitly together. The Wikipedia-like knowledge base is more like an internet for machines, connecting lonely robots across the globe.

A Jan. 10, 2014 Eindhoven University of Technology news release provides some insight into what the researchers want to accomplish,

“The problem right now is that robots are often developed specifically for one task”, says René van de Molengraft, TU/e  [Eindhoven University of Technology] researcher and RoboEarth project leader. “Everyday changes that happen all the time in our environment make all the programmed actions unusable. But RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database. As well as that, computing and ‘thinking’ tasks can be carried out by the system’s ‘cloud engine’, so the robot doesn’t need to have as much computing or battery power on‑board.”

It means, for example, that a robot can image a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can use that map on RoboEarth to locate a glass of water immediately, without having to search for it endlessly. In the same way a task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box.
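The map-sharing workflow just described can be sketched in a few lines. The in-memory store, key names, and data layout below are hypothetical stand-ins for RoboEarth's actual cloud database.

```python
# Toy sketch of RoboEarth-style knowledge sharing; the store, keys, and
# map layout are hypothetical, not RoboEarth's real database interface.
class KnowledgeBase:
    """Minimal in-memory stand-in for the shared cloud database."""

    def __init__(self):
        self._store = {}

    def upload(self, key, data):
        self._store[key] = data

    def download(self, key):
        return self._store.get(key)

cloud = KnowledgeBase()

# Robot A images the hospital room and shares the resulting map.
cloud.upload("hospital/room_12/map",
             {"water_glass": (2.4, 0.8), "bed": (1.0, 1.5)})

# Robot B, which has never seen the room, reuses the map directly
# and locates the glass of water without searching for it.
room_map = cloud.download("hospital/room_12/map")
print(room_map["water_glass"])
```

The payoff is exactly the one the news release claims: the second robot skips the mapping step entirely because the first robot's experience is already in the shared store.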

There’s no word as to exactly when this test, a demonstration for a delegation from the European Commission (which financed the project) using four robots and two simulated hospital rooms, is being held.

I first wrote about* RoboEarth in a Feb. 14, 2011 posting (scroll down about 1/4 of the way) and again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta.

* ‘abut’ corrected to ‘about’ on Sept. 2, 2014.

RoboEarth’s Rapyuta, a cloud engine for the robot internet

Described in a 2011 BBC news item as an internet/wikipedia for robots only, RoboEarth was last mentioned here in a Feb. 14, 2011 posting (scroll down about 1/3 of the way) where I featured both the aforementioned BBC news item and a first person account of the project on the IEEE (Institute of Electrical and Electronics Engineers) Spectrum’s Automaton Robotics blog.

Today, Mar. 12, 2013, there’s a news release on EurekAlert about a new RoboEarth development,

Researchers of five European universities have developed a cloud-computing platform for robots. The platform allows robots connected to the Internet to directly access the powerful computational, storage, and communications infrastructure of modern data centers – the giant server farms behind the likes of Google, Facebook, and Amazon – for robotics tasks and robot learning.

With the development of the RoboEarth Cloud Engine the team continues their work towards creating an Internet for robots. The new platform extends earlier work on allowing robots to share knowledge with other robots via a WWW-style database, greatly speeding up robot learning and adaptation in complex tasks.

Here’s how the cloud engine is described,

The developed Platform as a Service (PaaS) for robots allows to perform complex functions like mapping, navigation, or processing of human voice commands in the cloud, at a fraction of the time required by robots’ on-board computers. By making enterprise-scale computing infrastructure available to any robot with a wireless connection, the researchers believe that the new computing platform will help pave the way towards lighter, cheaper, more intelligent robots.

“The RoboEarth Cloud Engine is particularly useful for mobile robots, such as drones or autonomous cars, which require lots of computation for navigation. It also offers significant benefits for robot co-workers, such as factory robots working alongside humans, which require large knowledge databases, and for the deployment of robot teams,” says Mohanarajah Gajamohan, researcher at the Swiss Federal Institute of Technology (ETH Zurich) and Technical Lead of the project.

“On-board computation reduces mobility and increases cost,” says Dr. Heico Sandee, RoboEarth’s Program Manager at Eindhoven University of Technology in the Netherlands. “With the rapid increase in wireless data rates caused by the booming demand of mobile communications devices, more and more of a robot’s computational tasks can be moved into the cloud.”

Oddly, there’s never any mention of the name for the cloud engine project in the news release. I found the name (Rapyuta) on the RoboEarth website, from the home page,

Update: Join (or remotely watch) the Cloud Robotics Workshop at the EU Robotics Forum on Wednesday 20. March, 4-6pm CET. Details: http://www.roboearth.org/eurobotics2013

It is our pleasure to announce the first public release of Rapyuta: The RoboEarth Cloud Engine. Rapyuta is an open source cloud robotics platform for robots. It implements a Platform-as-a-Service (PaaS) framework designed specifically for robotics applications.

Rapyuta helps robots to offload heavy computation by providing secured customizable computing environments in the cloud. Robots can start their own computational environment, launch any computational node uploaded by the developer, and communicate with the launched nodes using the WebSockets protocol.
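The "launch a node, then talk to it" workflow can be sketched as message framing. Rapyuta defines its own WebSocket protocol, so the JSON field names and the stand-in computational node below are purely illustrative assumptions.

```python
# Hypothetical JSON message framing for offloading a computation to a
# cloud node; Rapyuta's real WebSocket protocol differs, and these field
# names ("robot", "node", "args") are illustrative only.
import json

def make_request(robot_id, node, args):
    """Frame a computation request as a JSON text message."""
    return json.dumps({"robot": robot_id, "node": node, "args": args})

def handle_request(raw):
    """What the cloud side might do: dispatch to the named node."""
    req = json.loads(raw)
    # Stand-in computational node; a real one would do mapping, speech
    # recognition, navigation planning, and so on.
    nodes = {"add": lambda a: a["x"] + a["y"]}
    result = nodes[req["node"]](req["args"])
    return json.dumps({"robot": req["robot"], "result": result})

reply = handle_request(make_request("amigo-1", "add", {"x": 2, "y": 3}))
print(reply)
```

In the real system the two functions would sit on opposite ends of a WebSocket connection; the sketch only shows the request/response framing that lets a lightweight robot hand heavy work to the cloud.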

Interestingly, the final paragraph of today’s (Mar. 12, 2013) news release includes a statement about jobs,

While high-tech companies that heavily rely on data centers have been criticized for creating fewer jobs than traditional companies (e.g., Google or Facebook employ less than half the number of workers of General Electric or Hewlett-Packard per dollar in revenue), the researchers don’t believe that this new robotics platform should be cause for alarm. According to a recent study by the International Federation of Robotics and Metra Martech entitled “Positive Impact of Industrial Robots on Employment”, robots don’t kill jobs but rather tend to lead to an overall growth in jobs.

I’d like to see some more data about this business of robots creating jobs. In the meantime, there’s more information about RoboEarth and the Rapyuta cloud engine in the links the news release provides to materials such as this video,

Unexpectedly, the narrator sounds like she might have been educated in Canada or the US.

Robot ethics at Vancouver’s next Café Scientifique

AJung Moon, a mechanical engineering researcher at the University of British Columbia, will be giving a talk: Roboethics – A discussion on how robots are impacting our society on Tuesday, May 31, 2011, 7:30 pm at the Railway Club, 579 Dunsmuir St., Vancouver, BC. From the announcement,

From vacuuming houses to befriending older persons at care facilities, robots are starting to provide convenient and efficient solutions at homes, hospitals, and schools. For decades, numerous works in science fiction have imaginatively warned us that robots can bring catastrophic ethical, legal, and social issues into our society. But is today’s robotics technology advanced enough to the point that we should take these fictional speculations seriously? Roboticists, philosophers, and policymakers agree that we won’t see Terminator or Transformers type robots any time soon, but they also agree that the technology is bringing forth ethical issues needing serious discussions today. In this talk, we will highlight some of the ways robots are already impacting our society, and how the study of human-robot interaction can help put ethics into its design.

Moon has a blog called Roboethic info DataBase, where she posts the latest about robots and ethics.

Here’s a picture of her,

AJung Moon (downloaded from her Roboethics info DataBase blog)

I wonder what she makes of the RoboEarth project, where robots will be uploading information to something which is the equivalent of the internet and wikipedia (my Feb. 14, 2011 posting, scroll down a few paragraphs), or the lingodroids project, where robots are creating a language. From the May 17, 2011 article by Katie Gatto (originally written for the IEEE [Institute of Electrical and Electronics Engineers]) on physorg.com,

Communication is a vital part of any task that has to be done by more than one individual. That is why humans in every corner of the world have created their own complex languages that help us share the goal. As it turns out, we are not alone in that need, or in our ability to create a language of our own.

Researchers at the University of Queensland and Queensland University of Technology have created a pair of robots who are creating their own language. The bots, which are being taught how to speak but not given specific languages, are learning to create a lexicon of their own.

The researchers have named these bots lingodroids, and you can read the paper here,

Research paper: Schulz, R., Wyeth, G., & Wiles, J. (In Press) Are we there yet? Grounding temporal concepts in shared journeys, IEEE Transactions on Autonomous Mental Development [PDF]

I hope to get to the talk on Tuesday, May 31, 2011. Meanwhile, Happy Weekend (and for Canadians it’s a long weekend)!

Intelligence, computers, and robots

Starting tonight, Feb. 14, 2011, you’ll be able to watch a computer compete against two former champions on the US television quiz programme, Jeopardy. The match between the IBM computer, named Watson, and the most accomplished champions who have ever played on Jeopardy, Ken Jennings and Brad Rutter, has been four years in the making. From the article by Julie Beswald on physorg.com,

“Let’s finish, ‘Chicks Dig Me’,” intones the somewhat monotone, but not unpleasant, voice of Watson, IBM’s new supercomputer built to compete on the game show Jeopardy!

The audience chuckles in response to the machine-like voice and its all-too-human assertion. But fellow contestant Ken Jennings gets the last laugh as he buzzes in and garners $1,000.

This exchange is part of a January 13 practice round for the world’s first man vs. machine game show. Scheduled to air February 14-16, the match pits Watson against the two best Jeopardy! players of all time. Jennings holds the record for the most consecutive games won, at 74. The other contestant, Brad Rutter, has winnings totaling over $3.2 million.

On Feb. 9, 2011, PBS’s NOVA science program broadcast a documentary about Watson, whose name is derived from the company founder, Thomas J. Watson, and not Sherlock Holmes’s companion and biographer, Dr. Watson. Titled Smartest Machine on Earth, the show highlighted Watson’s learning process and some of the principles behind artificial intelligence. PBS’s website is featuring a live blogging event of tonight’s and the Feb. 15 and 16 matches. From the website,

On Monday [Feb. 14, 2011], our bloggers will be Nico Schlaefer and Hideki Shima, two Ph.D. students at Carnegie Mellon University’s Language Technologies Institute who worked on the Watson project.

At the same time that the ‘Watson’ event was being publicized last week, another news item on artificial intelligence and learning was making the rounds. From a Feb. 9, 2011 article by Mark Ward on BBC News,

Robots could soon have an equivalent of the internet and Wikipedia.

European scientists have embarked on a project to let robots share and store what they discover about the world.

Called RoboEarth it will be a place that robots can upload data to when they master a task, and ask for help in carrying out new ones.

Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters. [emphasis mine]

You can read a first person account of the RoboEarth project on the IEEE (Institute of Electrical and Electronics Engineers) Spectrum’s Automaton Robotics blog in a posting by Markus Waibel,

As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:

* RoboEarth greatly speeds up robot learning and adaptation in complex tasks.

* Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.

The vision behind RoboEarth is much larger: Allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely allowing robots to communicate via the Internet, outsourcing computation to the cloud, or linked data.

But before you yell “Skynet!,” think again. While the most similar things science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. [emphasis mine] RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.

That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.

Unfortunately, Markus Waibel doesn’t explain why he thinks the analogies are flawed but he does lay out the reasoning for why robots should share information. For a more approachable and much briefer account, you can check out Ariel Schwartz’s Feb. 10, 2011 article on the Fast Company website,

The EU-funded [European Union] RoboEarth project is bringing together European scientists to build a network and database repository for robots to share information about the world. They will, if all goes as planned, use the network to store and retrieve information about objects, locations (including maps), and instructions about completing activities. Robots will be both the contributors and the editors of the repository.

With RoboEarth, one robot’s learning experiences are never lost–the data is passed on for other robots to mine. As RedOrbit explains, that means one robot’s experiences with, say, setting a dining room table could be passed on to others, so the butler robot of the future might know how to prepare for dinner guests without any prior programming.

There is a RoboEarth website, so we humans can get more information and hopefully keep up with the robots.

Happily, and as is the case with increasing frequency, there’s a YouTube video. This one features a robot downloading information from RoboEarth and using that information in a quasi-hospital setting,

I find this use of popular entertainment to communicate scientific advances, particularly obvious with Watson, quite interesting. On this same theme of popular culture as a means of science communication, I featured a Lady Gaga parody by a lab working on Alzheimer’s in my Jan. 28, 2011 posting. I also find the reference to “human masters” in the BBC article, along with Waibel’s flat assertion that some science fiction analogies about artificial intelligence are flawed, indicative of some very old anxieties, as expressed in Mary Shelley’s Frankenstein.

ETA Feb. 14, 2011: The latest posting on the Pasco Phronesis blog, I, For One, Welcome Our Robot Game Show Overlords, features another opinion about the Watson appearances on Jeopardy. From the posting,

What will this mean? Given that a cursory search suggests opinion is divided on whether Watson will win this week, I have no idea. While it will likely be entertaining, and does represent a significant step forward in computing capabilities, I can’t help but think about the supercomputing race that makes waves only when a new computational record is made. It’s nice, and might prompt government action should they lose the number one standing. But what does it mean? What new outcomes do we have because of this? The conversation is rarely about what, to me, seems more important.