Category Archives: robots

Robo Brain; a new robot learning project

Having covered the RoboEarth project (a European Union-funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and, most recently, in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
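
For readers who like to see the shape of the idea, here’s a toy sketch (my own illustration, not Robo Brain’s actual code or schema) of knowledge stored as a graph of nodes joined by probability-weighted edges, which a robot could query within probability limits as the release describes,

```python
# A toy sketch (not Robo Brain's actual code or schema) of knowledge stored as
# a graph of nodes (objects, classes, actions) joined by edges that carry a
# probability, and queried within probability limits.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency map: node -> list of (relation, neighbour, probability)
        self.edges = defaultdict(list)

    def add(self, node, relation, neighbour, probability):
        self.edges[node].append((relation, neighbour, probability))

    def query(self, node, min_probability=0.5):
        """Return facts about `node` whose confidence clears the threshold."""
        return [(rel, nbr, p) for rel, nbr, p in self.edges[node]
                if p >= min_probability]

kb = KnowledgeGraph()
kb.add("coffee mug", "is_a", "container", 0.95)
kb.add("coffee mug", "grasp_by", "handle", 0.9)
kb.add("coffee mug", "carry", "upright when full", 0.85)
kb.add("coffee mug", "carry", "any orientation when empty", 0.6)

if __name__ == "__main__":
    for relation, target, p in kb.query("coffee mug", min_probability=0.7):
        print(f"coffee mug --{relation}--> {target}  (p={p})")
```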

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here’s a link to the Robo Brain and RoboEarth websites.

Mothbots (cyborg moths)

Apparently the big picture could involve search and rescue applications; meanwhile, the smaller picture shows attempts to create a cyborg moth (mothbot). From an Aug. 20, 2014 news item on ScienceDaily,

North Carolina State University [US] researchers have developed methods for electronically manipulating the flight muscles of moths and for monitoring the electrical signals moths use to control those muscles. The work opens the door to the development of remotely-controlled moths, or “biobots,” for use in emergency response.

“In the big picture, we want to know whether we can control the movement of moths for use in applications such as search and rescue operations,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “The idea would be to attach sensors to moths in order to create a flexible, aerial sensor network that can identify survivors or public health hazards in the wake of a disaster.”

An Aug. 20, 2014 North Carolina State University news release (also on EurekAlert), which originated the news item, provides more detail,

The paper presents a technique Bozkurt developed for attaching electrodes to a moth during its pupal stage, when the caterpillar is in a cocoon undergoing metamorphosis into its winged adult stage. This aspect of the work was done in conjunction with Dr. Amit Lal of Cornell University.

But the new findings in the paper involve methods developed by Bozkurt’s research team for improving our understanding of precisely how a moth coordinates its muscles during flight.

By attaching electrodes to the muscle groups responsible for a moth’s flight, Bozkurt’s team is able to monitor electromyographic signals – the electric signals the moth uses during flight to tell those muscles what to do.

The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets. A short video describing the work is available at http://www.youtube.com/watch?v=jR325RHPK8o.

“By watching how the moth uses its wings to steer while in flight, and matching those movements with their corresponding electromyographic signals, we’re getting a much better understanding of how moths maneuver through the air,” Bozkurt says.

“We’re optimistic that this information will help us develop technologies to remotely control the movements of moths in flight,” Bozkurt says. “That’s essential to the overarching goal of creating biobots that can be part of a cyberphysical sensor network.”

But Bozkurt stresses that there’s a lot of work yet to be done to make moth biobots a viable tool.

“We now have a platform for collecting data about flight coordination,” Bozkurt says. “Next steps include developing an automated system to explore and fine-tune parameters for controlling moth flight, further miniaturizing the technology, and testing the technology in free-flying moths.”
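
For the technically inclined, here’s a minimal sketch (my own, with made-up numbers; it is not the NC State team’s software) of how one might take a stream of electromyographic samples from a flight muscle, smooth it, and count the bursts that correspond to individual wingbeat commands,

```python
# A minimal sketch (assumptions only, not the NC State team's code): given a
# 1-D array of electromyographic (EMG) samples from a flight muscle, rectify
# the signal, smooth it, and count bursts that exceed a threshold -- a crude
# way of spotting individual wingbeat commands.

import numpy as np

def count_emg_bursts(emg, sample_rate_hz, window_ms=10.0, threshold=0.3):
    rectified = np.abs(emg - np.mean(emg))             # remove DC offset, rectify
    window = max(1, int(sample_rate_hz * window_ms / 1000.0))
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")  # smoothed envelope
    above = envelope > threshold * envelope.max()      # active muscle regions
    # a burst starts wherever the envelope crosses from inactive to active
    starts = np.flatnonzero(above[1:] & ~above[:-1])
    return len(starts)

if __name__ == "__main__":
    # synthetic stand-in for real data: 25 Hz 'wingbeat' bursts buried in noise
    fs = 2000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    emg = (np.sin(2 * np.pi * 25 * t) > 0.8) * np.random.randn(t.size) \
          + 0.05 * np.random.randn(t.size)
    print("bursts detected:", count_emg_bursts(emg, fs))
```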

Here’s an image illustrating the researchers’ work,

Caption: The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets. Credit: Alper Bozkurt

I was expecting to find this research had been funded by the US military but that doesn’t seem to be the case according to the university news release,

… The research was supported by the National Science Foundation, under grant CNS-1239243. The researchers also used transmitters and receivers developed by Triangle Biosystems International and thank them for their contribution to the work.

For the curious, here’s a link to and a citation for the text and the full video,

Early Metamorphic Insertion Technology for Insect Flight Behavior Monitoring by Alexander Verderber, Michael McKnight, and Alper Bozkurt. J. Vis. Exp. (89), e50901, doi:10.3791/50901 (2014)

This material is behind a paywall.

Hummingbirds and ‘nano’ spy cameras

Hummingbird-inspired spy cameras have come a long way since the research featured in this Aug. 12, 2011 posting which includes a video of a robot camera designed to look like a hummingbird and mimic some of its extraordinary flying abilities. These days (2014) the emphasis appears to be on mimicking the abilities to a finer degree if Margaret Munro’s July 29, 2014 article for Canada.com is to be believed,

Tiny, high-end military drones are catching up with one of nature’s great engineering masterpieces.

A side-by-side comparison has found a “remarkably similar” aerodynamic performance between hummingbirds and the Black Hornet, the most sophisticated nano spycam yet.

“(The) Average Joe hummingbird” is about on par with the tiny helicopter that is so small it can fit in a pocket, says engineering professor David Lentink, at Stanford University. He led a team from Canada [University of British Columbia], the U.S. and the Netherlands [Wageningen University and Eindhoven University of Technology] that compared the birds and the machine for a study released Tuesday [July 29, 2014].

For a visual comparison with the latest nano spycam (Black Hornet), here’s the ‘hummingbird’ featured in the 2011 posting,

The  Nano Hummingbird, a drone from AeroVironment designed for the US Pentagon, would fit into any or all of those categories.

And, here’s this 2013 image of a Black Hornet Nano Helicopter inspired by hummingbirds,

Black Hornet Nano Helicopter UAV. Richard Watt – Photo: http://www.defenceimagery.mod.uk/fotoweb/fwbin/download.dll/45153802.jpg Courtesy: Wikipedia

A July 30, 2014 Stanford University news release by Bjorn Carey provides more details about this latest research into hummingbirds and their flying ways,

More than 42 million years of natural selection have turned hummingbirds into some of the world’s most energetically efficient flyers, particularly when it comes to hovering in place.

Humans, however, are gaining ground quickly. A new study led by David Lentink, an assistant professor of mechanical engineering at Stanford, reveals that the spinning blades of micro-helicopters are about as efficient at hovering as the average hummingbird.

The experiment involved spinning hummingbird wings – sourced from a pre-existing museum collection – of 12 different species on an apparatus designed to test the aerodynamics of helicopter blades. The researchers used cameras to visualize airflow around the wings, and sensitive load cells to measure the drag and the lift force they exerted, at different speeds and angles.

Lentink and his colleagues then replicated the experiment using the blades from a ProxDynamics Black Hornet autonomous microhelicopter. The Black Hornet is the most sophisticated microcopter available – the United Kingdom’s army uses it in Afghanistan – and is itself about the size of a hummingbird.

Even spinning like a helicopter, rather than flapping, the hummingbird wings excelled: If hummingbirds were able to spin their wings to hover, it would cost them roughly half as much energy as flapping. The microcopter’s wings kept pace with the middle-of-the-pack hummingbird wings, but the topflight wings – those of Anna’s hummingbird, a species common throughout the West Coast – were still about 27 percent more efficient than engineered blades.

Hummingbirds acing the test didn’t particularly surprise Lentink – previous studies had indicated hummingbirds were incredibly efficient – but he was impressed with the helicopter.

“The technology is at the level of an average Joe hummingbird,” Lentink said. “A helicopter is really the most efficient hovering device that we can build. The best hummingbirds are still better, but I think it’s amazing that we’re getting closer. It’s not easy to match their performance, but if we build better wings with better shapes, we might approximate hummingbirds.”

Based on the measurements of Anna’s hummingbirds, Lentink said there is potential to improve microcopter rotor power by up to 27 percent.

The high-fidelity experiment also provided an opportunity to refine previous rough estimates of muscle power. Lentink’s team learned that hummingbirds’ muscles produce a surprising 130 watts of power per kilogram; the average for other birds, and across most vertebrates, is roughly 100 watts/kg.

Although the current study revealed several details of how a hummingbird hovers in one place, the birds still hold many secrets. For instance, Lentink said, we don’t know how hummingbirds maintain their flight in a strong gust, how they navigate through branches and other clutter, or how they change direction so quickly during aerial “dogfights.”

He also thinks great strides could be made by studying wing aspect ratios, the ratio of wing length to wing width. The aspect ratios of all the hummingbirds’ wings remarkably converged around 3.9. The aspect ratios of most wings used in aviation measure much higher; the Black Hornet’s aspect ratio was 4.7.

“I want to understand if aspect ratio is special, and whether the amount of variation has an effect on performance,” Lentink said. Understanding and replicating these abilities and characteristics could be a boon for robotics and will be the focus of future experiments.

“Those are the things we don’t know right now, and they could be incredibly useful. But I don’t mind it, actually,” Lentink said. “I think it’s nice that there are still a few things about hummingbirds that we don’t know.”

Agreed, it’s nice to know there are still a few mysteries left. You can watch the ‘mysterious’ hummingbird in this video courtesy of the Rivers Ingersoll Lentink Lab at Stanford University,

High speed video of Anna’s hummingbird at Stanford Arizona Cactus Garden.
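
Before the citation, here’s a back-of-the-envelope sketch (hypothetical numbers, not the study’s data or code) of the comparison the researchers ran: spin a wing at a known rate, measure the lift it produces and the torque needed to overcome drag, and rank wings by how much lift they deliver per watt of aerodynamic power,

```python
# A back-of-the-envelope sketch (made-up illustrative measurements, not the
# study's data) of comparing a spinning wing and a rotor blade by lift
# delivered per watt of aerodynamic power.

import math

def power_loading(lift_newtons, drag_torque_newton_metres, revs_per_second):
    omega = 2.0 * math.pi * revs_per_second           # angular velocity, rad/s
    aero_power_watts = drag_torque_newton_metres * omega
    return lift_newtons / aero_power_watts            # N of lift per W spent

# hypothetical numbers for illustration only
hummingbird_wing = power_loading(lift_newtons=0.045,
                                 drag_torque_newton_metres=2.0e-4,
                                 revs_per_second=40.0)
rotor_blade = power_loading(lift_newtons=0.045,
                            drag_torque_newton_metres=2.5e-4,
                            revs_per_second=40.0)

print(f"hummingbird wing: {hummingbird_wing:.2f} N/W")
print(f"microcopter blade: {rotor_blade:.2f} N/W")
print(f"wing advantage: {100 * (hummingbird_wing / rotor_blade - 1):.0f}%")
```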

Here’s a link to and a citation for the paper, H/T to Nancy Owano’s article on phys.org for alerting me to this story.

Hummingbird wing efficacy depends on aspect ratio and compares with helicopter rotors by Jan W. Kruyt, Elsa M. Quicazán-Rubio, GertJan F. van Heijst, Douglas L. Altshuler, and David Lentink. J. R. Soc. Interface 6 October 2014 vol. 11 no. 99 20140585 doi: 10.1098/rsif.2014.0585 Published [online] 30 July 2014

This is an open access paper.

Despite Munro’s reference to the Black Hornet as a ‘nano’ spycam, the ‘microhelicopter’ description in the news release places the device at the microscale (one millionth of a metre, as opposed to the nanoscale’s one billionth). Still, I don’t understand what makes it microscale since it’s visible to the naked eye. In any case, it is small.

Squishy but rigid robots from MIT (Massachusetts Institute of Technology)

A July 14, 2014 news item on ScienceDaily features MIT (Massachusetts Institute of Technology) robots that mimic octopuses and other biological constructs or, if you prefer, movie robots,

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

A July 14, 2014 MIT news release (also on EurekAlert), which originated the news item, describes the research further by referencing both octopuses and jello,

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”
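
Before the citation, here’s a minimal sketch (an illustration of the idea only, not the MIT group’s control code; the class and all numbers are assumptions) of the heat-to-soften, cool-to-stiffen scheme described in the release: drive current through the wire in a wax-coated strut until the wax passes its melting point, then cut the current and let the strut re-stiffen,

```python
# A hypothetical sketch of the heat-to-soften / cool-to-stiffen idea described
# above. The thermal model, melting point and rates are illustrative
# assumptions, not measured values.

WAX_MELT_POINT_C = 60.0   # assumed paraffin-like melting point

class WaxFoamStrut:
    def __init__(self, ambient_c=20.0):
        self.temperature_c = ambient_c
        self.heater_on = False

    @property
    def state(self):
        return "soft" if self.temperature_c >= WAX_MELT_POINT_C else "rigid"

    def step(self, dt_s=1.0):
        # crude first-order thermal model: heat while the wire carries current,
        # relax toward ambient otherwise
        if self.heater_on:
            self.temperature_c += 8.0 * dt_s
        else:
            self.temperature_c += (20.0 - self.temperature_c) * 0.2 * dt_s

def soften(strut):
    strut.heater_on = True
    while strut.state != "soft":
        strut.step()
    strut.heater_on = False          # cutting the current lets it re-stiffen

if __name__ == "__main__":
    strut = WaxFoamStrut()
    soften(strut)
    print("after heating:", strut.state, f"({strut.temperature_c:.0f} C)")
    while strut.state != "rigid":
        strut.step()
    print("after cooling:", strut.state, f"({strut.temperature_c:.0f} C)")
```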

Here’s a link to and a citation for the paper,

Thermally Tunable, Self-Healing Composites for Soft Robotic Applications by Nadia G. Cheng, Arvind Gopinath, Lifeng Wang, Karl Iagnemma, and Anette E. Hosoi. Macromolecular Materials and Engineering DOI: 10.1002/mame.201400017 Article first published online: 30 JUN 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Writing and AI or is a robot writing this blog?

In an interview almost 10 years ago for an article I was writing for a digital publishing magazine, I had a conversation with a very technically oriented individual that went roughly this way,

Him: (enthused and excited) We’re developing algorithms that will let us automatically create brochures, written reports, that will always have the right data and can be instantly updated.

Me: (pause)

Him: (no reaction)

Me: (breaking long pause) You realize you’re talking to a writer, eh? You’ve just told me that at some point in the future nobody will need writers.

Him: (pause) No. (then with more certainty) No. You don’t understand. We’re making things better for you. In the future, you won’t need to do the boring stuff.

It seems the future is now and in the hands of a company known as Automated Insights. You can find this description at the base of one of the company’s news releases,

ABOUT AUTOMATED INSIGHTS, INC.

Automated Insights (Ai) transforms Big Data into written reports with the depth of analysis, personality and variability of a human writer. In 2014, Ai and its patented Wordsmith platform will produce over 1 billion personalized reports for clients like Yahoo!, The Associated Press, the NFL, and Edmunds.com. [emphasis mine] The Wordsmith platform uses artificial intelligence to dynamically spot patterns and trends in raw data and then describe those findings in plain English. Wordsmith authors insightful, personalized reports around individual user data at unprecedented scale and in real-time. Automated Insights also offers applications that run on its Wordsmith platform, including the recently launched Wordsmith for Marketing, which enables marketing agencies to automate reporting for clients. Learn more at http://automatedinsights.com.

In the wake of the June 30, 2014 deal with Associated Press, there has been a flurry of media interest especially from writers who seem to have largely concluded that the robots will do the boring stuff and free human writers to do creative, innovative work. A July 2, 2014 news item on FoxNews.com provides more details about the deal,

The Associated Press, the largest American-based news agency in the world, will now use story-writing software to produce U.S. corporate earnings stories.

In a recent blog post, AP Managing Editor Lou Ferrara explained that the software is capable of producing these stories, which are largely technical financial reports that range from 150 to 300 words, in “roughly the same time that it takes our reporters.” [emphasis mine]

AP staff members will initially edit the software-produced reports, but the agency hopes the process will soon be fully automated.

The Wordsmith software constructs narratives in plain English by using algorithms to analyze trends and patterns in a set of data and place them in an appropriate context depending on the nature of the story.

Representatives for the Associated Press have assured anyone who fears robots are making journalists obsolete that Wordsmith will not be taking the jobs of staffers. “We are going to use our brains and time in more enterprising ways during earnings season,” Ferrara wrote in the blog post. “This is about using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.” [emphasis mine]

Russell Brandon’s July 11, 2014 article for The Verge provides more technical detail and context for this emerging field,

Last week, the Associated Press announced it would be automating its articles on quarterly earnings reports. Instead of 300 articles written by humans, the company’s new software will write 4,400 of them, each formatted for AP style, in mere seconds. It’s not the first time a company has tried out automatic writing: last year, a reporter at The LA Times wrote an automated earthquake-reporting program that combined prewritten sentences with automatic seismograph reports to report quakes just seconds after they happen. The natural language-generation company Narrative Science has been churning out automated sports reporting for years.

It appears that AP Managing Editor Lou Ferrara doesn’t know how long it takes to write 150 to 300 words (“roughly the same time that it takes our reporters”) or perhaps he wanted to ‘soften’ the news’s possible impact. Getting back to the technical aspects in Brandon’s article,

… So how do you make a robot that writes sentences?

In the case of AP style, a lot of the work has already been done. Every Associated Press article already comes with a clear, direct opening and a structure that spirals out from there. All the algorithm needs to do is code in the same reasoning a reporter might employ. Algorithms detect the most volatile or newsworthy shift in a given earnings report and slot that in as the lede. Circling outward, the program might sense that a certain topic has already been covered recently and decide it’s better to talk about something else. …

The staffers who keep the copy fresh are scribes and coders in equal measure. (Allen [Automated Insights CEO Robbie Allen] says he looks for “stats majors who worked on the school paper.”) They’re not writers in the traditional sense — most of the language work is done beforehand, long before the data is available — but each job requires close attention. For sports articles, the Automated Insights team does all its work during the off-season and then watches the articles write themselves from the sidelines, as soon as each game’s results are available. “I’m often quite surprised by the result,” says Joe Procopio, the company’s head of product engineering. “There might be four or five variables that determine what that lead sentence looks like.” …
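
Brandon’s description boils down to template-driven natural language generation over structured data. Here’s a toy sketch of that general approach (mine, not Automated Insights’ Wordsmith): find the most newsworthy change in an earnings record, slot it into the lede, and fill in the rest from templates,

```python
# A toy sketch of template-driven natural language generation over structured
# data (not Automated Insights' Wordsmith): the biggest year-over-year swing
# becomes the lede, and the remaining figures are slotted into templates.

def pct_change(new, old):
    return 100.0 * (new - old) / old

def earnings_story(company, quarter, revenue, revenue_prior, eps, eps_prior):
    moves = {
        "revenue": pct_change(revenue, revenue_prior),
        "earnings per share": pct_change(eps, eps_prior),
    }
    # the biggest swing, up or down, becomes the lede
    lead_metric = max(moves, key=lambda k: abs(moves[k]))
    direction = "rose" if moves[lead_metric] > 0 else "fell"
    lede = (f"{company} said {quarter} {lead_metric} {direction} "
            f"{abs(moves[lead_metric]):.1f} percent from a year earlier.")
    body = (f"The company reported revenue of ${revenue:,.0f} and earnings of "
            f"${eps:.2f} per share, compared with ${revenue_prior:,.0f} and "
            f"${eps_prior:.2f} in the same quarter last year.")
    return f"{lede} {body}"

if __name__ == "__main__":
    # hypothetical company and figures for illustration
    print(earnings_story("Acme Corp.", "second-quarter",
                         revenue=1_250_000_000, revenue_prior=1_100_000_000,
                         eps=1.42, eps_prior=1.10))
```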

A July 11, 2014 article by Catherine Taibi for Huffington Post offers a summary of the current ‘robot/writer’ situation (Automated Insights is not the only company offering this service) along with many links including one to this July 11, 2014 article by Kevin Roose for New York Magazine where he shares what appears to be a widely held opinion and which echoes my interviewee of 10 years ago (Note: A link has been removed),

By this point, we’re no longer surprised when machines replace human workers in auto factories or electronics-manufacturing plants. That’s the norm. But we hoity-toity journalists had long assumed that our jobs were safe from automation. (We’re knowledge workers, after all.) So when the AP announced its new automated workforce, you could hear the panic spread to old-line news desks across the nation. Unplug the printers, Bob! The robots are coming!

I’m not an alarmist, though. In fact, I welcome our new robot colleagues. Not only am I not scared of losing my job to a piece of software, I think the introduction of automated reporting is the best thing to happen to journalists in a long time.

For one thing, humans still have the talent edge. At the moment, the software created by Automated Insights is only capable of generating certain types of news stories — namely, short stories that use structured data as an input, and whose output follows a regular pattern. …

Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. …

But these robots, as sophisticated as they are, can’t approach the full creativity of a human writer. They can’t contextualize Emmy snubs like Matt Zoller Seitz, assail opponents of Obamacare like Jonathan Chait, or collect summer-camp sex stories like Maureen O’Connor. My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence to handle; they require human skills like picking up the phone, piecing together data points from multiple sources, and drawing original, evidence-based conclusions. [emphasis mine]

The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway. … [emphasis mine]

Despite his blithe assurances, there is a little anxiety expressed in this piece: “My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence … .”

I too am feeling a little uncertain. For example, there’s this April 29, 2014 posting by Adam Long on the Automated Insights blog and I can’t help wondering how much was actually written by Long and how much by the company’s robots. After all the company proudly proclaims the blog is powered by Wordsmith Marketing. For that matter, I’m not that sure about the FoxNews.com piece, which has no byline.

For anyone interested in still more links and information, Automated Insights offers a listing of their press coverage here. Although it’s a bit dated now, there is an exhaustive May 22, 2013 posting by Tony Hirst on the OUseful.info blog which, despite the title: ‘Notes on Narrative Science and Automated Insights’, provides additional context for the work being done to automate the writing process since 2009.

For the record, this blog is not written by a robot. As for getting rid of the boring stuff, I can’t help but remember that part of how one learns any craft is by doing the boring, repetitive work needed to build skills.

One final and unrelated note, Automated Insights has done a nice piece of marketing with its name which abbreviates to Ai. One can’t help but be reminded of AI, a term connoting the field of artificial intelligence.

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 23, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry – androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at a 2011 TEDxAmsterdam,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.


It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible but I suspect before that day occurs those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.

Lunar spelunking with robots at Vancouver’s (Canada) June 24, 2014 Café Scientifique

Vancouver’s next Café Scientifique is being held in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.], Vancouver, Canada), on Tuesday, June 24, 2014 at 7:30 pm. Here’s the meeting description (from the June 18, 2014 announcement),

Our speaker for the evening will be John Walker, Rover Development Lead of the Hakuto Google Lunar X-Prize Team. The title and abstract of his talk are:

Lunar Spelunking

Lava tubes, or caves, likely exist on the surface of the moon. Based on recent images and laser distance measurements from the surface of the moon, scientists have selected candidates for further study.

Governmental space agencies and private institutions now have plans to visit these potential caves and investigate them as potential lunar habitat sites, as early as 2015.

I will present some of these candidates and my PhD research, which is supporting a Google Lunar X-Prize team’s attempt to survey one of these caves using robots.

I wasn’t able to find much about John Walker but there is this Facebook entry noting a talk he gave at TEDxBudapest.

As for the Google Lunar XPRIZE, running a Google search yielded this on June 22, 2014 at 0945 hours PDT. It was the top finding on the search page. Links to the site were provided below this definition:

The Google Lunar XPRIZE is a $30 million competition for the first privately funded team to send a robot to the moon, travel 500 meters and transmit video,…

You can find the Google Lunar XPRIZE website here. The Hakuto team, the only one based in Japan (I believe), has a website here. There is some English language material but the bulk would appear to be Japanese language.

Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (part four of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Brazil’s World Cup for soccer/football, which opens on June 12, 2014, will feature the first public demonstration of a mind-controlled exoskeleton (or a robotic suit, as it’s sometimes called): a person with paraplegia will deliver the first kick-off to open the 2014 games.

I’ve been covering this story since 2011 and, even so, was late to the party as per this May 7, 2014 article by Alejandra Martins for BBC World news online,

The World Cup curtain-raiser will see the first public demonstration of a mind-controlled exoskeleton that will enable a person with paralysis to walk.

If all goes as planned, the robotic suit will spring to life in front of almost 70,000 spectators and a global audience of billions of people.

The exoskeleton was developed by an international team of scientists as part of the Walk Again Project and is the culmination of more than a decade of work for Dr Miguel Nicolelis, a Brazilian neuroscientist based at Duke University in North Carolina. [emphasis mine]

Since November [2013], Dr Nicolelis has been training eight patients at a lab in Sao Paulo, in the midst of huge media speculation that one of them will stand up from his or her wheelchair and deliver the first kick of this year’s World Cup.

“That was the original plan,” the Duke University researcher told the BBC. “But not even I could tell you the specifics of how the demonstration will take place. This is being discussed at the moment.”

Speaking in Portuguese from Sao Paulo, Miguel Nicolelis explained that all the patients are over 20 years of age, with the oldest about 35.

“We started the training in a virtual environment with a simulator. In the last few days, four patients have donned the exoskeleton to take their first steps and one of them has used mental control to kick a ball,” he explained.

The history of Nicolelis’ work is covered here in a series of posts starting with an Oct. 5, 2011 post (Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control; scroll down 2/3 of the way for a reference to Ed Yong’s article where I first learned of Nicolelis).

The work was explored in more depth in a March 16, 2012 posting (Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football)) and then followed up a year later by two posts which link Nicolelis’ work with the Brain Activity Map (now called the BRAIN [Brain Research through Advancing Innovative Neurotechnologies] initiative): a March 4, 2013 post (Brain-to-brain communication, organic computers, and BAM [brain activity map], the connectome) and a March 8, 2013 post (Prosthetics and the human brain) directly linking exoskeleton work in Holland and the project at Duke with current brain research and the dawning of a new relationship to one’s prosthetics,

On the heels of research which suggests that humans tend to view their prostheses, including wheel chairs, as part of their bodies, researchers in Europe  have announced the development of a working exoskeleton powered by the wearer’s thoughts.

Getting back to Brazil and Nicolelis’ technology, Ian Sample offers an excellent description in an April 1, 2014 article for the Guardian (Note: Links have been removed),

The technology in question is a mind-controlled robotic exoskeleton. The complex and conspicuous robotic suit, built from lightweight alloys and powered by hydraulics, has a simple enough function. When a paraplegic person straps themselves in, the machine does the job that their leg muscles no longer can.

The exoskeleton is the culmination of years of work by an international team of scientists and engineers on the Walk Again project. The robotics work was coordinated by Gordon Cheng at the Technical University in Munich, and French researchers built the exoskeleton. Nicolelis’s team focused on ways to read people’s brain waves, and use those signals to control robotic limbs.

To operate the exoskeleton, the person is helped into the suit and given a cap to wear that is fitted with electrodes to pick up their brain waves. These signals are passed to a computer worn in a backpack, where they are decoded and used to move hydraulic drivers on the suit.

The exoskeleton is powered by a battery – also carried in the backpack – that allows for two hours of continuous use.

“The movements are very smooth,” Nicolelis told the Guardian. “They are human movements, not robotic movements.”

Nicolelis says that in trials so far, his patients seem to have taken to the exoskeleton. “This thing was made for me,” one patient told him after being strapped into the suit.

The operator’s feet rest on plates which have sensors to detect when contact is made with the ground. With each footfall, a signal shoots up to a vibrating device sewn into the forearm of the wearer’s shirt. The device seems to fool the brain into thinking that the sensation came from their foot. In virtual reality simulations, patients felt that their legs were moving and touching something.

Sample’s article includes a good schematic of the ‘suit’ which I have not been able to find elsewhere (meaning the Guardian likely has a copyright for the schematic and is why you won’t see it here) and speculation about robotics and prosthetics in the future.
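
Since I can’t reproduce the Guardian’s schematic, here’s a schematic of my own in code (all function names and signals are hypothetical placeholders, not the Walk Again Project’s software) showing the loop Sample describes: decode the wearer’s brain waves into a command, drive the hydraulics, and route each foot-plate contact back to the vibrating patch on the forearm,

```python
# A schematic sketch of the control and feedback loop described above. Every
# function here is a hypothetical placeholder, not the Walk Again Project's
# actual decoder, hardware driver or sensor interface.

import random
import time

def read_eeg_window():
    # placeholder for a window of samples from the electrode cap
    return [random.gauss(0.0, 1.0) for _ in range(256)]

def decode_intent(eeg_window):
    # placeholder classifier: real systems train a decoder on the wearer's data
    return "step" if sum(eeg_window) > 0 else "stand"

def drive_hydraulics(command):
    print(f"hydraulics: executing '{command}'")

def foot_contact_detected():
    return random.random() < 0.5     # stand-in for the foot-plate sensor

def vibrate_forearm():
    print("haptic feedback: forearm vibrator pulsed")

def control_loop(cycles=5, period_s=0.5):
    for _ in range(cycles):
        command = decode_intent(read_eeg_window())
        drive_hydraulics(command)
        if command == "step" and foot_contact_detected():
            vibrate_forearm()        # the brain reads this as the foot landing
        time.sleep(period_s)

if __name__ == "__main__":
    control_loop()
```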

Nicolelis and his team have a Facebook page for the Walk Again Project where you can get some of the latest information with  both English and Portuguese language entries as they prepare for the June 12, 2014 kickoff.

One final thought, this kickoff project represents an unlikely confluence of events. After all, what are the odds

    • that a Brazil-born researcher (Nicolelis) would be working on a project to give paraplegics the ability to walk again? and
    • that Brazil would host the World Cup in 2014 (the first time since 1950)? and
    • that the timing would coincide so a public demonstration at one of the world’s largest athletic events (of a sport particularly loved in Brazil) could be planned?

It becomes even more extraordinary when one considers that Brazil had isolated itself somewhat in the 1980s with a policy of nationalism vis à vis the computer industry (from the Brazil Science and Technology webpage on the ITA website),

In the early 1980s, the policy of technological nationalism and self-sufficiency had narrowed to the computer sector, where protective legislation tried to shield the Brazilian mini- and microcomputer industries from foreign competition. Here again, the policy allowed for the growth of local industry and a few well-qualified firms, but the effect on the productive capabilities of the economy as a whole was negative; and the inability to follow the international market in price and quality forced the policy to be discontinued.

For those who may have forgotten, the growth of the computer industry (specifically personal computers) in the 1980s figured hugely in a country’s economic health and, in this case, had a big negative impact in Brazil.

Returning to 2014, the kickoff in Brazil (if successful) symbolizes more than an international athletic competition or a technical/medical achievement; this kick-off symbolizes a technological future for Brazil and its place on the world stage (despite the protests and social unrest).

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

ETA June 16, 2014: The kickoff seems to have been a disappointment (June 15, 2014 news item on phys.org) and, for those who might be interested in some of the reasons for the World Cup unrest and protests in Brazil, John Oliver provides an excoriating overview of the organization that runs the World Cup while professing his great love of the game, http://www.youtube.com/watch?v=DlJEt2KU33I

UK’s National Physical Laboratory reaches out to ‘BioTouch’ MIT and UCL

This March 27, 2014 news item on Azonano is an announcement for a new project featuring haptics and self-assembly,

NPL (UK’s National Physical Laboratory) has started a new strategic research partnership with UCL (University College of London) and MIT (Massachusetts Institute of Technology) focused on haptic-enabled sensing and micromanipulation of biological self-assembly – BioTouch.

The NPL March 27, 2014 news release, which originated the news item, is accompanied by a rather interesting image,

A computer-operated dexterous robotic hand holding a microscope slide with a fluorescent human cell (not to scale) embedded into a synthetic extracellular matrix. Courtesy: NPL

The news release goes on to describe the BioTouch project in more detail (Note: A link has been removed),

The project will probe sensing and application of force and related vectors specific to biological self-assembly as a means of synthetic biology and nanoscale construction. The overarching objective is to enable the re-programming of self-assembled patterns and objects by directed micro-to-nano manipulation with compliant robotic haptic control.

This joint venture, funded by the European Research Council, EPSRC and NPL’s Strategic Research Programme, is a rare blend of interdisciplinary research bringing together expertise in robotics, haptics and machine vision with synthetic and cell biology, protein design, and super- and high-resolution microscopy. The research builds on the NPL’s pioneering developments in bioengineering and imaging and world-leading haptics technologies from UCL and MIT.

Haptics is an emerging enabling tool for sensing and manipulation through touch, which holds particular promise for the development of autonomous robots that need to perform human-like functions in unstructured environments. However, the path to all such applications is hampered by the lack of a compliant interface between a predictably assembled biological system and a human user. This research will enable human directed micro-manipulation of experimental biological systems using cutting-edge robotic systems and haptic feedback.

Recently the UK government has announced ‘eight great technologies’ in which Britain is to become a world leader. Robotics, synthetic biology, regenerative medicine and advanced materials are four of these technologies for which this project serves as a merging point providing thus an excellent example of how multidisciplinary collaborative research can shape our future.

If I read this rightly, it means they’re trying to design systems where robots will work directly with materials in the labs while humans direct the robots’ actions from a remote location. My best example of this (it’s not a laboratory example) would be a surgery where a robot actually performs the work while a human directs the robot’s actions based on haptic (touch) information the human receives from the robot. Surgeons don’t necessarily see what they’re dealing with; they may be feeling it with their fingers (haptic information). In effect, the robot’s hands become an extension of the surgeon’s hands. I imagine using a robot’s ‘hands’ would allow for less invasive procedures to be performed.
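
To make the idea concrete, here’s a bare-bones sketch (my own illustration under assumed scaling factors, not the BioTouch project’s code) of bilateral haptic teleoperation: hand motion is scaled down for the micromanipulator, and the tiny forces the tool meets are scaled back up and ‘rendered’ to the operator’s hand,

```python
# A bare-bones sketch of bilateral haptic teleoperation (not NPL/UCL/MIT's
# BioTouch code). Scaling factors and the contact model are assumptions chosen
# purely for illustration.

MOTION_SCALE = 1e-4      # 1 cm of hand motion -> 1 micrometre at the tool
FORCE_SCALE = 1e4        # newtons at the tool -> amplified newtons at the hand

def to_tool_position(hand_position_m):
    return hand_position_m * MOTION_SCALE

def to_hand_force(tool_force_n):
    return tool_force_n * FORCE_SCALE

def teleoperation_step(hand_position_m, sample_stiffness_n_per_m, contact_at_m):
    tool_position = to_tool_position(hand_position_m)
    # simple contact model: force grows once the tool presses into the sample
    penetration = max(0.0, tool_position - contact_at_m)
    tool_force = sample_stiffness_n_per_m * penetration
    return tool_position, to_hand_force(tool_force)

if __name__ == "__main__":
    for hand_cm in (0.5, 1.0, 1.5, 2.0):
        pos, feedback = teleoperation_step(hand_cm / 100.0,
                                           sample_stiffness_n_per_m=50.0,
                                           contact_at_m=1.0e-6)
        print(f"hand at {hand_cm:.1f} cm -> tool at {pos * 1e6:.1f} um, "
              f"feel {feedback:.3f} N")
```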

Should we love our robots or are robots going to be smarter than we are? TED’s 2014 All Stars Session 5: The Future is Ours (maybe)

Rodney Brooks seems to be a man who loves robots, from his TED biography,

Rodney Brooks builds robots based on biological principles of movement and reasoning. The goal: a robot who can figure things out.

MIT professor Rodney Brooks studies and engineers robot intelligence, looking for the holy grail of robotics: the AGI, or artificial general intelligence. For decades, we’ve been building robots to do highly specific tasks — welding, riveting, delivering interoffice mail — but what we all want, really, is a robot that can figure things out on its own, the way we humans do.

Brooks makes a plea for easy-to-use (and easy-to-program) robots and mentions his Baxter robot as an example that should be improved; Brooks issues a challenge to make robots better. (Baxter was used as the base for EDI, introduced earlier in TED’s 2014 Session 8 this morning [March 20, 2014].)

By contrast, Sir Martin Rees, astrophysicist, has some concerns about robots and artificial intelligence as per my Nov. 26, 2012 posting about his (and others’) proposal to create the Cambridge Project for Existential Risk. From his TED biography,

Martin Rees, one of the world’s most eminent astronomers, is a professor of cosmology and astrophysics at the University of Cambridge and the UK’s Astronomer Royal. He is one of our key thinkers on the future of humanity in the cosmos.

Sir Martin Rees has issued a clarion call for humanity. His 2004 book, ominously titled Our Final Hour, catalogues the threats facing the human race in a 21st century dominated by unprecedented and accelerating scientific change. He calls on scientists and nonscientists alike to take steps that will ensure our survival as a species.

Rees states that the worst threats to planetary survival come from humans and not, as in the past, from nature. While science offers great possibilities, it has an equally dark side. Rees suggests robots going rogue, activists hijacking synthetic biology to winnow out the population, and more. He suggests that there is a 50% chance that we could suffer a devastating setback. Rees then mentions the proposed Cambridge Centre for Existential Risk and the importance of studying the possibility of human extinction and ways to mitigate risk.

Steven Johnson, writer, was introduced next (from his TED biography),

Steven Berlin Johnson examines the intersection of science, technology and personal experience.

A dynamic writer and speaker, Johnson crafts captivating theories that draw on a dizzying array of disciplines, without ever leaving his audience behind. Author Kurt Anderson described Johnson’s book Emergence as “thoughtful and lucid and charming and staggeringly smart.” The same could be said for Johnson himself. His big-brained, multi-disciplinary theories make him one of his generation’s more intriguing thinkers. His books take the reader on a journey — following the twists and turns his own mind makes as he connects seemingly disparate ideas: ants and cities, interface design and Victorian novels.

He will be hosting a new PBS (Public Broadcasting Service) series, ‘How We Got to Now’ (mentioned in Hector Tobar’s Aug. 7, 2013 article about the PBS series in the Los Angeles Times) and this talk sounds like it might be a preview of sorts. Johnson plays a recording made 20 years before Thomas Edison ‘first’ recorded sound. The story he shares is about an inventor who didn’t think to include a playback feature for his recordings. He simply didn’t think about it as he was interested in doing something else (I can’t quite remember what that was now) and, consequently, his invention and work got lost for decades. Despite that, it forms part of the sound recording story. Thankfully, modern sound recording engineers have developed a technique which allows us to hear those ‘lost’ sounds today.