Category Archives: robots

Squishy but rigid robots from MIT (Massachusetts Institute of Technology)

A July 14, 2014 news item on ScienceDaily features MIT (Massachusetts Institute of Technology) robots that mimic octopuses and other biological constructs or, if you prefer, movie robots,

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

A July 14, 2014 MIT news release (also on EurekAlert), which originated the news item, describes the research further by referencing both octopuses and jello,

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”
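The switching scheme described above (run a current through the embedded wire until the wax melts, cut the current and let it cool back to rigid) amounts to a very simple control loop. Here's a minimal Python sketch of what such a loop might look like; the melting point, heating and cooling rates, and function names are my own illustrative assumptions, not values from the paper.

```python
# Hypothetical bang-bang control for a wax-coated foam strut.
# Numbers (melting point, rates) are illustrative assumptions, not from the paper.

WAX_MELT_C = 55.0        # assumed paraffin-like melting point
HEAT_RATE_C_PER_S = 2.0  # assumed heating rate with the current on
COOL_RATE_C_PER_S = 0.5  # assumed passive cooling rate

def update_strut(temp_c, want_soft, dt=1.0):
    """Advance the strut temperature one time step.

    Current is switched on only when we want the strut soft and the wax
    is still near or below its melting point; otherwise it cools toward rigid.
    """
    current_on = want_soft and temp_c < WAX_MELT_C + 5.0   # small overshoot margin
    rate = HEAT_RATE_C_PER_S if current_on else -COOL_RATE_C_PER_S
    temp_c = max(20.0, temp_c + rate * dt)                 # don't cool below room temp
    state = "soft" if temp_c >= WAX_MELT_C else "rigid"
    return temp_c, state

# Example: soften the strut for 30 seconds, then let it re-harden as it cools.
temp = 20.0
for t in range(60):
    temp, state = update_strut(temp, want_soft=(t < 30))
print(temp, state)
```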

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”

Here’s a link to and a citation for the paper,

Thermally Tunable, Self-Healing Composites for Soft Robotic Applications by Nadia G. Cheng, Arvind Gopinath, Lifeng Wang, Karl Iagnemma, and Anette E. Hosoi. Macromolecular Materials and Engineering DOI: 10.1002/mame.201400017 Article first published online: 30 JUN 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Writing and AI or is a robot writing this blog?

In an interview almost 10 years ago for an article I was writing for a digital publishing magazine, I had a conversation with a very technically oriented individual that went roughly this way,

Him: (enthused and excited) We’re developing algorithms that will let us automatically create brochures, written reports, that will always have the right data and can be instantly updated.

Me: (pause)

Him: (no reaction)

Me: (breaking long pause) You realize you’re talking to a writer, eh? You’ve just told me that at some point in the future nobody will need writers.

Him: (pause) No. (then with more certainty) No. You don’t understand. We’re making things better for you. In the future, you won’t need to do the boring stuff.

It seems the future is now and in the hands of a company known as Automated Insights. You can find this at the base of one of the company’s news releases,

ABOUT AUTOMATED INSIGHTS, INC.

Automated Insights (Ai) transforms Big Data into written reports with the depth of analysis, personality and variability of a human writer. In 2014, Ai and its patented Wordsmith platform will produce over 1 billion personalized reports for clients like Yahoo!, The Associated Press, the NFL, and Edmunds.com. [emphasis mine] The Wordsmith platform uses artificial intelligence to dynamically spot patterns and trends in raw data and then describe those findings in plain English. Wordsmith authors insightful, personalized reports around individual user data at unprecedented scale and in real-time. Automated Insights also offers applications that run on its Wordsmith platform, including the recently launched Wordsmith for Marketing, which enables marketing agencies to automate reporting for clients. Learn more at http://automatedinsights.com.

In the wake of the June 30, 2014 deal with Associated Press, there has been a flurry of media interest especially from writers who seem to have largely concluded that the robots will do the boring stuff and free human writers to do creative, innovative work. A July 2, 2014 news item on FoxNews.com provides more details about the deal,

The Associated Press, the largest American-based news agency in the world, will now use story-writing software to produce U.S. corporate earnings stories.

In a recent blog post, AP Managing Editor Lou Ferrara explained that the software is capable of producing these stories, which are largely technical financial reports that range from 150 to 300 words, in “roughly the same time that it takes our reporters.” [emphasis mine]

AP staff members will initially edit the software-produced reports, but the agency hopes the process will soon be fully automated.

The Wordsmith software constructs narratives in plain English by using algorithms to analyze trends and patterns in a set of data and place them in an appropriate context depending on the nature of the story.

Representatives for the Associated Press have assured anyone who fears robots are making journalists obsolete that Wordsmith will not be taking the jobs of staffers. “We are going to use our brains and time in more enterprising ways during earnings season,” Ferrara wrote in the blog post. “This is about using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.” [emphasis mine]

Russell Brandon’s July 11, 2014 article for The Verge provides more technical detail and context for this emerging field,

Last week, the Associated Press announced it would be automating its articles on quarterly earnings reports. Instead of 300 articles written by humans, the company’s new software will write 4,400 of them, each formatted for AP style, in mere seconds. It’s not the first time a company has tried out automatic writing: last year, a reporter at The LA Times wrote an automated earthquake-reporting program that combined prewritten sentences with automatic seismograph reports to report quakes just seconds after they happen. The natural language-generation company Narrative Science has been churning out automated sports reporting for years.

It appears that AP Managing Editor Lou Ferrara doesn’t know how long it takes to write 150 to 300 words (“roughly the same time that it takes our reporters”) or perhaps he or she wanted to ‘soften’ the news’s possible impact. Getting back to the technical aspects in Brandon’s article,

… So how do you make a robot that writes sentences?

In the case of AP style, a lot of the work has already been done. Every Associated Press article already comes with a clear, direct opening and a structure that spirals out from there. All the algorithm needs to do is code in the same reasoning a reporter might employ. Algorithms detect the most volatile or newsworthy shift in a given earnings report and slot that in as the lede. Circling outward, the program might sense that a certain topic has already been covered recently and decide it’s better to talk about something else. …

The staffers who keep the copy fresh are scribes and coders in equal measure. (Allen [Automated Insights CEO Robbie Allen] says he looks for “stats majors who worked on the school paper.”) They’re not writers in the traditional sense — most of the language work is done beforehand, long before the data is available — but each job requires close attention. For sports articles, the Automated Insights team does all its work during the off-season and then watches the articles write themselves from the sidelines, as soon as each game’s results are available. “I’m often quite surprised by the result,” says Joe Procopio, the company’s head of product engineering. “There might be four or five variables that determine what that lead sentence looks like.” …
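The workflow in Brandon’s excerpt (pick the most newsworthy shift in the data as the lede, then pour the quarter’s numbers into language written long before the data arrive) maps onto a fairly simple template-filling program. Here’s a hypothetical sketch of that pattern; it is emphatically not Wordsmith’s actual code, and the field names and template wording are invented for illustration.

```python
# Hypothetical template-filling sketch of automated earnings copy.
# Not Automated Insights' Wordsmith; field names and templates are invented.

def pick_lede(report):
    """Choose the most 'newsworthy' metric: the largest year-over-year change."""
    changes = {
        "revenue": (report["revenue"] - report["revenue_prior"]) / report["revenue_prior"],
        "earnings per share": (report["eps"] - report["eps_prior"]) / abs(report["eps_prior"]),
    }
    metric = max(changes, key=lambda k: abs(changes[k]))
    return metric, changes[metric]

def write_story(report):
    metric, change = pick_lede(report)
    direction = "rose" if change > 0 else "fell"
    lede = (f"{report['company']} said {metric} {direction} "
            f"{abs(change):.0%} in the {report['quarter']} quarter.")
    body = (f"The company reported earnings of ${report['eps']:.2f} per share "
            f"on revenue of ${report['revenue'] / 1e6:.0f} million.")
    return " ".join([lede, body])

sample = {
    "company": "Acme Corp.", "quarter": "second",
    "revenue": 120e6, "revenue_prior": 100e6,
    "eps": 0.45, "eps_prior": 0.50,
}
print(write_story(sample))
```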

A July 11, 2014 article by Catherine Taibi for Huffington Post offers a summary of the current ‘robot/writer’ situation (Automated Insights is not the only company offering this service) along with many links including one to this July 11, 2014 article by Kevin Roose for New York Magazine where he shares what appears to be a widely held opinion and which echoes my interviewee of 10 years ago (Note: A link has been removed),

By this point, we’re no longer surprised when machines replace human workers in auto factories or electronics-manufacturing plants. That’s the norm. But we hoity-toity journalists had long assumed that our jobs were safe from automation. (We’re knowledge workers, after all.) So when the AP announced its new automated workforce, you could hear the panic spread to old-line news desks across the nation. Unplug the printers, Bob! The robots are coming!

I’m not an alarmist, though. In fact, I welcome our new robot colleagues. Not only am I not scared of losing my job to a piece of software, I think the introduction of automated reporting is the best thing to happen to journalists in a long time.

For one thing, humans still have the talent edge. At the moment, the software created by Automated Insights is only capable of generating certain types of news stories — namely, short stories that use structured data as an input, and whose output follows a regular pattern. …

Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. …

But these robots, as sophisticated as they are, can’t approach the full creativity of a human writer. They can’t contextualize Emmy snubs like Matt Zoller Seitz, assail opponents of Obamacare like Jonathan Chait, or collect summer-camp sex stories like Maureen O’Connor. My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence to handle; they require human skills like picking up the phone, piecing together data points from multiple sources, and drawing original, evidence-based conclusions. [emphasis mine]

The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway. … [emphasis mine]

Despite his blithe assurances, there is a little anxiety expressed in this piece “My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence … .”

I too am feeling a little uncertain. For example, there’s this April 29, 2014 posting by Adam Long on the Automated Insights blog and I can’t help wondering how much was actually written by Long and how much by the company’s robots. After all, the company proudly proclaims the blog is powered by Wordsmith Marketing. For that matter, I’m not that sure about the FoxNews.com piece, which has no byline.

For anyone interested in still more links and information, Automated Insights offers a listing of their press coverage here. Although it’s a bit dated now, there is an exhaustive May 22, 2013 posting by Tony Hirst on the OUseful.info blog which, despite the title: ‘Notes on Narrative Science and Automated Insights’, provides additional context for the work being done to automate the writing process since 2009.

For the record, this blog is not written by a robot. As for getting rid of the boring stuff, I can’t help but remember that part of how one learns any craft is by doing the boring, repetitive work needed to build skills.

One final and unrelated note, Automated Insights has done a nice piece of marketing with its name which abbreviates to Ai. One can’t help but be reminded of AI, a term connoting the field of artificial intelligence.

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 24, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at a 2011 TEDxAmsterdam,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.
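Since Mori’s Figure 1 isn’t reproduced here, a rough sketch of the curve he describes may help: familiarity climbs with human likeness, plunges into negative territory (the valley) just short of full likeness, then recovers for a healthy human. The plot below is purely notional, my own illustration rather than Mori’s figure or data.

```python
# Notional uncanny valley curve (illustrative only; not Mori's actual figure).
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 400)   # 0 = industrial robot, 1 = healthy human
# Rising familiarity with a sharp dip just short of full human likeness:
familiarity = np.sin(likeness * np.pi / 2) - 1.8 * np.exp(-((likeness - 0.85) / 0.06) ** 2)

plt.plot(likeness, familiarity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("human likeness (appearance)")
plt.ylabel("familiarity (negative = uncanny)")
plt.title("Notional uncanny valley")
plt.annotate("prosthetic hand / android", xy=(0.85, familiarity.min()),
             xytext=(0.4, -0.6), arrowprops=dict(arrowstyle="->"))
plt.show()
```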

It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible but I suspect before that day occurs those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.

Lunar spelunking with robots at Vancouver’s (Canada) June 24, 2014 Café Scientifique

Vancouver’s next Café Scientifique is being held in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.], Vancouver, Canada), on Tuesday, June 24, 2014 at 7:30 pm. Here’s the meeting description (from the June 18, 2014 announcement),

Our speaker for the evening will be John Walker, Rover Development Lead of the Hakuto Google Lunar X-Prize Team. The title and abstract of his talk are:

Lunar Spelunking

Lava tubes, or caves, likely exist on the surface of the moon. Based on recent images and laser distance measurements from the surface of the moon, scientists have selected candidates for further study.

Governmental space agencies and private institutions now have plans to visit these potential caves and investigate them as potential lunar habitat sites, as early as 2015.

I will present some of these candidates and my PhD research, which is supporting a Google Lunar X-Prize team’s attempt to survey one of these caves using robots.

I wasn’t able to find much about John Walker but there is this Facebook entry noting a talk he gave at TEDxBudapest.

As for the Google Lunar XPRIZE, running a Google search yielded this on June 22, 2014 at 0945 hours PDT. It was the top finding on the search page. Links to the site were provided below this definition:

The Google Lunar XPRIZE is a $30 million competition for the first privately funded team to send a robot to the moon, travel 500 meters and transmit video,…

You can find the Google Lunar XPRIZE website here. The Hakuto team, the only one based in Japan (I believe), has a website here. There is some English language material but the bulk would appear to be Japanese language.

Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (part four of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Brazil’s World Cup for soccer/football, which opens on June 12, 2014, will feature the first public demonstration of a mind-controlled exoskeleton (or robotic suit, as it’s sometimes called): a person with paraplegia will open the 2014 games by delivering the first kick-off.

I’ve been covering this story since 2011 and, even so, was late to the party as per this May 7, 2014 article by Alejandra Martins for BBC World news online,

The World Cup curtain-raiser will see the first public demonstration of a mind-controlled exoskeleton that will enable a person with paralysis to walk.

If all goes as planned, the robotic suit will spring to life in front of almost 70,000 spectators and a global audience of billions of people.

The exoskeleton was developed by an international team of scientists as part of the Walk Again Project and is the culmination of more than a decade of work for Dr Miguel Nicolelis, a Brazilian neuroscientist based at Duke University in North Carolina. [emphasis mine]

Since November [2013], Dr Nicolelis has been training eight patients at a lab in Sao Paulo, in the midst of huge media speculation that one of them will stand up from his or her wheelchair and deliver the first kick of this year’s World Cup.

“That was the original plan,” the Duke University researcher told the BBC. “But not even I could tell you the specifics of how the demonstration will take place. This is being discussed at the moment.”

Speaking in Portuguese from Sao Paulo, Miguel Nicolelis explained that all the patients are over 20 years of age, with the oldest about 35.

“We started the training in a virtual environment with a simulator. In the last few days, four patients have donned the exoskeleton to take their first steps and one of them has used mental control to kick a ball,” he explained.

The history of Nicolelis’ work is covered here in a series of posts starting with an Oct. 5, 2011 post (Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control; scroll down 2/3 of the way for a reference to Ed Yong’s article where I first learned of Nicolelis).

The work was explored in more depth in a March 16, 2012 posting (Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football) and then followed up a year later by two posts which link Nicolelis’ work with the Brain Activity Map (now called the BRAIN [Brain Research through Advancing Innovative Neurotechnologies] initiative): a March 4, 2013 post (Brain-to-brain communication, organic computers, and BAM [brain activity map], the connectome) and a March 8, 2013 post (Prosthetics and the human brain) directly linking exoskeleton work in Holland and the project at Duke with current brain research and the dawning of a new relationship to one’s prosthetics,

On the heels of research which suggests that humans tend to view their prostheses, including wheel chairs, as part of their bodies, researchers in Europe  have announced the development of a working exoskeleton powered by the wearer’s thoughts.

Getting back to Brazil and Nicolelis’ technology, Ian Sample offers an excellent description in an April 1, 2014 article for the Guardian (Note: Links have been removed),

The technology in question is a mind-controlled robotic exoskeleton. The complex and conspicuous robotic suit, built from lightweight alloys and powered by hydraulics, has a simple enough function. When a paraplegic person straps themselves in, the machine does the job that their leg muscles no longer can.

The exoskeleton is the culmination of years of work by an international team of scientists and engineers on the Walk Again project. The robotics work was coordinated by Gordon Cheng at the Technical University in Munich, and French researchers built the exoskeleton. Nicolelis’s team focused on ways to read people’s brain waves, and use those signals to control robotic limbs.

To operate the exoskeleton, the person is helped into the suit and given a cap to wear that is fitted with electrodes to pick up their brain waves. These signals are passed to a computer worn in a backpack, where they are decoded and used to move hydraulic drivers on the suit.

The exoskeleton is powered by a battery – also carried in the backpack – that allows for two hours of continuous use.

“The movements are very smooth,” Nicolelis told the Guardian. “They are human movements, not robotic movements.”

Nicolelis says that in trials so far, his patients seem to have taken to the exoskeleton. “This thing was made for me,” one patient told him after being strapped into the suit.

The operator’s feet rest on plates which have sensors to detect when contact is made with the ground. With each footfall, a signal shoots up to a vibrating device sewn into the forearm of the wearer’s shirt. The device seems to fool the brain into thinking that the sensation came from their foot. In virtual reality simulations, patients felt that their legs were moving and touching something.
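Stripped down, the signal chain Sample describes runs: read the electrode cap, decode an intended movement, drive the hydraulics, and return ground contact to the wearer through the vibrating forearm device. Here’s a toy sketch of that loop; every threshold, channel count, and function name is invented for illustration, and this is in no way the Walk Again Project’s software.

```python
# Toy sketch of the exoskeleton control loop described above.
# All names, thresholds, and signal shapes are invented for illustration.
import numpy as np

def decode_intent(eeg_window):
    """Crudely map a window of electrode readings to a walking command."""
    # Pretend the decoder is a simple power threshold over all channels.
    power = np.mean(eeg_window ** 2)
    return "step" if power > 0.5 else "stand"

def drive_hydraulics(command):
    """Send the decoded command to the leg actuators (stubbed out here)."""
    return {"step": "advance swing leg", "stand": "hold position"}[command]

def haptic_feedback(foot_plate_contact):
    """Relay ground contact to the vibrating device on the wearer's forearm."""
    return "vibrate forearm" if foot_plate_contact else "idle"

# One pass through the loop with simulated data:
eeg_window = np.random.randn(64, 256) * 0.9   # 64 channels x 256 samples (made up)
command = decode_intent(eeg_window)
actuator_action = drive_hydraulics(command)
feedback = haptic_feedback(foot_plate_contact=True)
print(command, "->", actuator_action, "|", feedback)
```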

Sample’s article includes a good schematic of the ‘suit’ which I have not been able to find elsewhere (meaning the Guardian likely has a copyright for the schematic and is why you won’t see it here) and speculation about robotics and prosthetics in the future.

Nicolelis and his team have a Facebook page for the Walk Again Project where you can get some of the latest information with  both English and Portuguese language entries as they prepare for the June 12, 2014 kickoff.

One final thought, this kickoff project represents an unlikely confluence of events. After all, what are the odds

    • that a Brazil-born researcher (Nicolelis) would be working on a project to give paraplegics the ability to walk again? and
    • that Brazil would host the World Cup in 2014 (the first time since 1950)? and
    • that the timing would coincide so a public demonstration at one of the world’s largest athletic events (of a sport particularly loved in Brazil) could be planned?

It becomes even more extraordinary when one considers that Brazil had isolated itself somewhat in the 1980s with a policy of nationalism vis à vis the computer industry (from the Brazil Science and Technology webpage on the ITA website),

In the early 1980s, the policy of technological nationalism and self-sufficiency had narrowed to the computer sector, where protective legislation tried to shield the Brazilian mini- and microcomputer industries from foreign competition. Here again, the policy allowed for the growth of local industry and a few well-qualified firms, but the effect on the productive capabilities of the economy as a whole was negative; and the inability to follow the international market in price and quality forced the policy to be discontinued.

For those who may have forgotten, the growth of the computer industry (specifically personal computers) in the 1980s figured hugely in a country’s economic health and, in this case, with a big negative impact in Brazil.

Returning to 2014, the kickoff in Brazil (if successful) symbolizes more than an international athletic competition or a technical/medical achievement; this kick-off symbolizes a technological future for Brazil and its place on the world stage (despite the protests and social unrest).

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

ETA June 16, 2014: The kickoff seems to have been a disappointment (June 15, 2014 news item on phys.org) and for those who might be interested in some of the reasons for the World Cup unrest and protests in Brazil, John Oliver provides an excoriating overview of the organization which organizes the World Cup games while professing his great love of the games, http://www.youtube.com/watch?v=DlJEt2KU33I

UK’s National Physical Laboratory reaches out to ‘BioTouch’ MIT and UCL

This March 27, 2014 news item on Azonano is an announcement for a new project featuring haptics and self-assembly,

NPL (UK’s National Physical Laboratory) has started a new strategic research partnership with UCL (University College of London) and MIT (Massachusetts Institute of Technology) focused on haptic-enabled sensing and micromanipulation of biological self-assembly – BioTouch.

The NPL March 27, 2014 news release, which originated the news item, is accompanied by a rather interesting image,

A computer operated dexterous robotic hand holding a microscope slide with a fluorescent human cell (not to scale) embedded into a synthetic extracellular matrix. Courtesy: NPL

The news release goes on to describe the BioTouch project in more detail (Note: A link has been removed),

The project will probe sensing and application of force and related vectors specific to biological self-assembly as a means of synthetic biology and nanoscale construction. The overarching objective is to enable the re-programming of self-assembled patterns and objects by directed micro-to-nano manipulation with compliant robotic haptic control.

This joint venture, funded by the European Research Council, EPSRC and NPL’s Strategic Research Programme, is a rare blend of interdisciplinary research bringing together expertise in robotics, haptics and machine vision with synthetic and cell biology, protein design, and super- and high-resolution microscopy. The research builds on the NPL’s pioneering developments in bioengineering and imaging and world-leading haptics technologies from UCL and MIT.

Haptics is an emerging enabling tool for sensing and manipulation through touch, which holds particular promise for the development of autonomous robots that need to perform human-like functions in unstructured environments. However, the path to all such applications is hampered by the lack of a compliant interface between a predictably assembled biological system and a human user. This research will enable human directed micro-manipulation of experimental biological systems using cutting-edge robotic systems and haptic feedback.

Recently the UK government has announced ‘eight great technologies’ in which Britain is to become a world leader. Robotics, synthetic biology, regenerative medicine and advanced materials are four of these technologies for which this project serves as a merging point providing thus an excellent example of how multidisciplinary collaborative research can shape our future.

If I read this rightly, it means they’re trying to design systems where robots will work directly with materials in the labs while humans direct the robots’ actions from a remote location. My best example of this (it’s not a laboratory example) would be a surgery where a robot actually performs the work while a human directs the robot’s actions based on haptic (touch) information the human receives from the robot. Surgeons don’t necessarily see what they’re dealing with; they may be feeling it with their fingers (haptic information). In effect, the robot’s hands become an extension of the surgeon’s hands. I imagine using a robot’s ‘hands’ would allow for less invasive procedures to be performed.
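My surgical example boils down to a bilateral loop: the human moves a master device, the robot follows, and the forces the robot senses are reflected back to the human’s hand. Here’s a minimal, hypothetical sketch of that kind of position-forward/force-back loop; the gains, names, and one-dimensional setup are my assumptions, not anything from the BioTouch project. In a real system the force reflection would run at much higher rates and worry about stability, but the structure is the same.

```python
# Minimal sketch of a bilateral (position-forward, force-back) teleoperation loop.
# Gains, names, and the 1-D setup are illustrative assumptions, not BioTouch code.

STIFFNESS = 200.0   # N/m, assumed stiffness of the tissue being touched
FORCE_GAIN = 0.1    # how strongly sensed force is reflected to the operator's hand

def robot_follow(master_pos, tissue_pos=0.05):
    """Robot tracks the operator's hand; force appears once it presses into tissue."""
    penetration = max(0.0, master_pos - tissue_pos)
    sensed_force = STIFFNESS * penetration          # Hooke's-law contact model
    return sensed_force

def haptic_loop(master_positions):
    """For each operator hand position, return the force displayed back to the hand."""
    return [FORCE_GAIN * robot_follow(p) for p in master_positions]

# Operator slowly advances 0 -> 10 cm; felt force rises after contact at 5 cm.
positions = [i / 100 for i in range(0, 11)]
print(haptic_loop(positions))
```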

Should we love our robots or are robots going to be smarter than we are? TED’s 2014 All Stars Session 5: The Future is Ours (maybe)

Rodney Brooks seems to be a man who loves robots, from his TED biography,

Rodney Brooks builds robots based on biological principles of movement and reasoning. The goal: a robot who can figure things out.

MIT professor Rodney Brooks studies and engineers robot intelligence, looking for the holy grail of robotics: the AGI, or artificial general intelligence. For decades, we’ve been building robots to do highly specific tasks — welding, riveting, delivering interoffice mail — but what we all want, really, is a robot that can figure things out on its own, the way we humans do.

Brooks makes a plea for easy-to-use (and easy-to-programme) robots and mentions his Baxter robot as an example that should be improved; Brooks issues a challenge to make robots better. (Baxter was used as the base for EDI, introduced earlier in TED’s 2014 Session 8 this morning [March 20, 2014].)

By contrast, Sir Martin Rees, astrophysicist, has some concerns about robots and artificial intelligence, as per my Nov. 26, 2012 posting about his (and others’) proposal to create the Cambridge Project for Existential Risk. From his TED biography,

Martin Rees, one of the world’s most eminent astronomers, is a professor of cosmology and astrophysics at the University of Cambridge and the UK’s Astronomer Royal. He is one of our key thinkers on the future of humanity in the cosmos.

Sir Martin Rees has issued a clarion call for humanity. His 2004 book, ominously titled Our Final Hour, catalogues the threats facing the human race in a 21st century dominated by unprecedented and accelerating scientific change. He calls on scientists and nonscientists alike to take steps that will ensure our survival as a species.

Rees states that the worst threats to planetary survival now come from humans, not, as in the past, from nature. While science offers great possibilities, it has an equally dark side. Rees suggests robots going rogue, activists hijacking synthetic biology to winnow out the population, and more. He suggests that there is a 50% chance that we could suffer a devastating setback. Rees then mentions the proposed Cambridge Centre for Existential Risk and the importance of studying the possibility of human extinction and ways to mitigate risk.

Steven Johnson, writer, was introduced next (from his TED biography),

Steven Berlin Johnson examines the intersection of science, technology and personal experience.

A dynamic writer and speaker, Johnson crafts captivating theories that draw on a dizzying array of disciplines, without ever leaving his audience behind. Author Kurt Anderson described Johnson’s book Emergence as “thoughtful and lucid and charming and staggeringly smart.” The same could be said for Johnson himself. His big-brained, multi-disciplinary theories make him one of his generation’s more intriguing thinkers. His books take the reader on a journey — following the twists and turns his own mind makes as he connects seemingly disparate ideas: ants and cities, interface design and Victorian novels.

He will be hosting a new PBS (Public Broadcasting Service) series, ‘How We Got to Now’ (mentioned in Hector Tobar’s Aug. 7, 2013 article about the PBS series in the Los Angeles Times) and this talk sounds like it might be a preview of sorts. Johnson plays a recording made 20 years before Alexander Graham Bell ‘first’ recorded sound. The story he shares is about an inventor who didn’t think to include a playback feature for his recordings. He simply didn’t think about it as he was interested in doing something else (I can’t quite remember what that was now) and, consequently, his invention and work got lost for decades. Despite that, it forms part of the sound recording story. Thankfully, modern sound recording engineers have developed a technique which allows us to hear those ‘lost’ sounds today.

Traffic robots in Kinshasa (Democratic Republic of the Congo) developed by an all women team of engineers

Kinshasa, the capital of the Democratic Republic of the Congo (DRC), now hosts two traffic cop robots with hopes for more of these solar-powered traffic regulators on the way. Before plunging into the story, here’s a video of these ‘gendarmes automates’ (or robot roulage intelligent [RRI] as the inventors prefer) in action,

This story has been making the English language news rounds since late last year when Voxafrica carried a news item, dated Dec. 27, 2013, about the robot traffic cops,

Kinshasa has adopted an innovative way of managing traffic along its city streets, by installing robot cops to direct and monitor traffic along roads instead of using normal policemen to reduce congestion. … They may not have real eyes, but the new traffic policemen still sport Kinshasa’s usual signature cop sunglasses. The prototypes are equipped with four cameras that allow them to record traffic flow … . The team behind the new robots are a group of Congolese engineers based at the Kinshasa Higher Institute of Applied Technique, known by its French acronym, ISTA.

A Jan. 30, 2014 article by Matt McFarland for the Washington Post provides additional detail (Note: A link has been removed),

The solar-powered robot is equipped with multiple cameras, opening the potential for monitoring traffic and issuing tickets. “If a driver says that it is not going to respect the robot because it’s just a machine, the robot is going to take that and there will be a ticket for him,” said Isaie Therese, the engineer behind the project, in an interview with CCTV Africa. “We are a poor country and our government is looking for money. And I will tell you that with the roads the government has built, it needs to recover its money.”

A Feb. 24, 2014 CNN article by Teo Kermeliotis describes the casings for the robots,

Standing eight feet tall, the robot traffic wardens are on duty 24 hours a day, their towering — even scarecrow-like — mass visible from afar. …

The humanoids, which are installed on Kinshasa’s busy Triomphal and Lumumba intersections, are built of aluminum and stainless steel to endure the city’s year-round hot climate.

The French language press, as might be expected since the DRC is a francophone country, was the first to tell the story. From a June 28, 2013 news item on Radio Okapi’s website,

Les ingénieurs formés à l’Institut supérieur des techniques appliquées (Ista) ont mis au point un robot intelligent servant à réguler la circulation routière. …

Ce robot qui fonctionne grâce à l’énergie solaire, assurera aussi la sécurité routière grâce à la vidéo surveillance. Il est doté de la capacité de stocker les données pendant 6 mois.

Le “robot roulage intelligent” est une invention totalement congolaise. Il a été mis au point par les inventeurs congolais avec l’appui financier de l’association Women technologies, une association des femmes ingénieurs de la RDC.

Ce spécimen coûte près de 20 000 $ US. L’association Women technologies attend le financement du gouvernement pour reproduire ce robot afin de le mettre à la disposition des usagers et même, de l’exporter.

Here’s my very rough translation of the French: an engineering team from the Kinshasa Higher Institute of Applied Technique (ISTA) developed an intelligent automated traffic cop. This intelligent traffic cop is a Congolese invention from design to development to funding. The prototype, which cost $20,000 US, was funded by the ‘Association Women Technologies’, a DRC (RDC is the abbreviation in French) association of women engineers, who were in June 2013 hoping for additional government funds to implement their traffic solution. Clearly, they received the money.

A January 30, 2014 news item on AfricaNouvelles focussed on the lead engineer and the team’s hopes for future exports of their technology,

Maman Thérèse Inza est ingénieure et responsable des robots régulateurs de la circulation routière à Kinshasa.

L’association Women technologies attend l’accompagnement du gouvernement pour pouvoir exporter des robots à l’international.

Bruno Bonnell’s Feb. 11, 2014 (?) article for Les Echos delves more deeply into the project and the team’s hopes of exporting their technology,

Depuis octobre 2013, le « roulage » au carrefour du Parlement, sur le boulevard Lumumba à Kinshsa, n’est plus assuré par un policier. Un robot en aluminium de 2,50 mètre de haut régule la circulation d’une des artères principales de la capitale congolaise. …

« Un robot qui fait la sécurité et la régulation routières, c’est vraiment made in Congo », assure Thérèse Inza, la présidente de l’association Women Technology, qui a construit ces machines conçues pour résister aux rigueurs du climat équatorial et dont l’autonomie est assurée par des panneaux solaires, dans des quartiers qui ne sont pas reliés au réseau électrique. La fondatrice de l’association voulait à l’origine proposer des débouchés aux femmes congolaises titulaires d’un diplôme d’ingénieur. Grâce aux robots, elle projette désormais de créer des emplois dans tout le pays. … Ces RRI prouvent que la robotique se développe aussi en Afrique. Audacieuse, Thérèse Inza affirme : « Nous devons vendre notre intelligence dans d’autres pays, de l’Afrique centrale comme d’ailleurs. Pourquoi pas aux Etats-Unis, en Europe ou en Asie ? » Entre 2008 et 2012, la demande de bande passante a été multipliée par 20 en Afrique, continent où sont nés le système de services bancaires mobiles M-Pesa et la plate-forme de gestion de catastrophe naturelle Ushahidi, utilisés aujourd’hui dans le monde entier. Et si la robotique, dont aucun pays n’a le monopole, était pour l’Afrique l’opportunité industrielle à ne pas rater ?

Here’s my rough translation: the first implementation was a single robot in October 2013 (the other details have already been mentioned here). The second paragraph describes how and why Thérèse Inza developed the project in the first place. The robot was designed specifically for the equatorial climate and for areas where access to electricity is either nonexistent or difficult. She recruited women engineers from ISTA for her team. I think she was initially trying to create jobs for women engineers. Now that the robots have been successful, she’s hoping to create more jobs for everyone throughout the DRC and to export the technology to the US, Europe, and Asia.

The last sentence notes that Africa (Kenya) was the birthplace of mobile banking service, M-Pesa, “the most developed mobile payment system in the world” according to Wikipedia and Ushahidi, a platform which enables crowdsourced reporting and information about natural and other disasters.

Ushahidi, like M-Pesa, was also developed in Kenya. I found this Feb. 27, 2014 article  by Herman Manson on MarkLives.com about Ushahidi and one of its co-founders, Juliana Rotich (Note: A link has been removed),

Rotich [Juliana Rotich] is the co-founder of Ushahidi, the open-source software developed in Kenya which came to the fore and caught global attention for collecting, visualising and mapping information in the aftermath of the disputed 2008 elections.

Rotich challenges the legacies that have stymied the development of Africa’s material and cultural resources — be that broadband cables connecting coastal territories and ignoring the continent’s interior — or the political classes continuing to exploit its citizens.

Ushahidi means “witness” or “testimony”, and allows ordinary people to crowd source and map information, turning them into everything from election monitors reporting electoral misconduct to helpers assisting with the direction of emergency response resources during natural disasters.

The open source software is now available in 30 languages and across the globe.

The Rotich article is a preview of sorts for Design Indaba 2014 being held in Cape Town, South Africa, from Feb. 24, 2014 to March 2, 2014.

Getting back to the robot traffic cops, perhaps one day the inventors will come up with a design that runs on rain and an implementation that can function in rainy Vancouver.

Making nanoelectronic devices last longer in the body could lead to ‘cyborg’ tissue

An American Chemical Society (ACS) Feb. 19, 2014 news release (also on EurekAlert), describes some research devoted to extending a nanoelectronic device’s ‘life’ when implanted in the body,

The debut of cyborgs who are part human and part machine may be a long way off, but researchers say they now may be getting closer. In a study published in ACS’ journal Nano Letters, they report development of a coating that makes nanoelectronics much more stable in conditions mimicking those in the human body. [emphases mine] The advance could also aid in the development of very small implanted medical devices for monitoring health and disease.

Charles Lieber and colleagues note that nanoelectronic devices with nanowire components have unique abilities to probe and interface with living cells. They are much smaller than most implanted medical devices used today. For example, a pacemaker that regulates the heart is the size of a U.S. 50-cent coin, but nanoelectronics are so small that several hundred such devices would fit in the period at the end of this sentence. Laboratory versions made of silicon nanowires can detect disease biomarkers and even single virus cells, or record heart cells as they beat. Lieber’s team also has integrated nanoelectronics into living tissues in three dimensions — creating a “cyborg tissue.” One obstacle to the practical, long-term use of these devices is that they typically fall apart within weeks or days when implanted. In the current study, the researchers set out to make them much more stable.

They found that coating silicon nanowires with a metal oxide shell allowed nanowire devices to last for several months. This was in conditions that mimicked the temperature and composition of the inside of the human body. In preliminary studies, one shell material appears to extend the lifespan of nanoelectronics to about two years.

Depending on how you define the term cyborg, it could be said there are already cyborgs amongst us as I noted in an April 20, 2012 posting titled: My mother is a cyborg. Personally I’m fascinated by the news release’s mention of ‘cyborg tissue’ although there’s no further explanation of what the term might mean.

For the curious, here’s a link to and a citation for the paper,

Long Term Stability of Nanowire Nanoelectronics in Physiological Environments by Wei Zhou, Xiaochuan Dai, Tian-Ming Fu, Chong Xie, Jia Liu, and Charles M. Lieber. Nano Lett., Article ASAP DOI: 10.1021/nl500070h Publication Date (Web): January 30, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Beer drinkers weep into their pints on hearing news of electronic tongue

First, it was the wine drinkers (my July 28, 2011 posting titled: Bio-inspired electronic tongue replaces sommelier? about research performed by Spanish researchers at the UAB [Universitat Autònoma de Barcelona]); now, these researchers have turned their attention to beer. From a Jan. 30, 2014 news release on EurekAlert,

Beer is the oldest and most widely consumed alcoholic drink in the world. Now, scientists at the Autonomous University of Barcelona have led a study which analysed several brands of beer by applying a new concept in analysis systems, known as an electronic tongue, the idea for which is based on the human sense of taste.

As Manel del Valle, the main author of the study, explains to SINC [Spain's state public agency specialising in science, technology and innovation information]: “The concept of the electronic tongue consists in using a generic array of sensors, in other words with generic response to the various chemical compounds involved, which generate a varied spectrum of information with advanced tools for processing, pattern recognition and even artificial neural networks.”

In this case, the array of sensors was formed of 21 ion-selective electrodes, including some with response to cations (ammonium, sodium), others with response to anions (nitrate, chloride, etc.), as well as electrodes with generic (unspecified) response to the varieties considered.

The authors recorded the multidimensional response generated by the array of sensors and how this was influenced by the type of beer considered. An initial analysis enabled them to change coordinates to view the grouping better, although it was not effective for classifying the beers.

“Using more powerful tools – supervised learning – and linear discriminant analysis did enable us to distinguish between the main categories of beer we studied: Schwarzbier, lager, double malt, Pilsen, Alsatian and low-alcohol,” Del Valle continues, “and with a success rate of 81.9%.”
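The pipeline described in the news release (a 21-electrode response vector per beer sample, followed by supervised learning with linear discriminant analysis) is easy to mock up with scikit-learn. The sketch below uses made-up sensor readings and labels purely to show the shape of the workflow; it is not the UAB group’s code or data.

```python
# Mock-up of the electronic-tongue workflow: 21-sensor readings -> LDA classifier.
# Data here are synthetic; this is not the UAB study's code or dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
styles = ["Schwarzbier", "lager", "double malt", "Pilsen", "Alsatian", "low-alcohol"]

# Pretend each beer style has a characteristic mean response on the 21 electrodes.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(30, 21)) for i in range(len(styles))])
y = np.repeat(styles, 30)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)       # cross-validated classification rate
print(f"mean accuracy: {scores.mean():.1%}")    # the paper reports 81.9% on real data
```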

It seems the electronic tongue does have one drawback,

Furthermore, it is worth noting that varieties of beers that the tongue is not trained to recognise, such as beer/soft drink mixes or foreign makes, were not identified (discrepant samples), which, according to the experts, validates the system as it does not recognise brands for which it was not trained.

Future plans, according to the news release, include,

In view of the ordering of the varieties, which followed their declared alcohol content, the scientists estimated this content with a numerical model developed with an artificial neural network.

“This application could be considered a sensor by software, as the ethanol present does not respond directly to the sensors used, which only respond to the ions present in the solution,” outlines the researcher.

The study concludes that these tools could one day give robots a sense of taste, and even supplant panels of tasters in the food industry to improve the quality and reliability of products for consumption.
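The ‘sensor by software’ idea, where a neural network trained on the same 21-electrode readings estimates the declared alcohol content even though no electrode responds to ethanol directly, can be sketched the same way. Again, the data and network settings below are invented for illustration.

```python
# Sketch of the 'sensor by software' idea: regress alcohol content from the
# 21-electrode response with a small neural network. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 21))                      # fake electrode responses
abv = 4.0 + X[:, :3].sum(axis=1) * 0.5              # pretend %ABV correlates with 3 ions

X_train, X_test, y_train, y_test = train_test_split(X, abv, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")
```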

Here’s a link to and a citation for the paper,

Beer classification by means of a potentiometric electronic tongue by Xavier Cetó, Manuel Gutiérrez-Capitán, Daniel Calvo, and Manel del Valle. Food Chemistry Volume 141, Issue 3, 1 December 2013, Pages 2533–2540 DOI: 10.1016/j.foodchem.2013.05.091

I’d imagine that anyone who has dreams of becoming a beer taster might want to consider some future alternatives. As for folks like Canadian Kevin Brauch, “host of The Thirsty Traveler [on the Cooking Channel], … about the world’s greatest beer, wine and cocktails,” he will no doubt claim that a robot is not likely to express likes/dislikes or more nuanced opinions, should he become aware of his competitor. Besides, Brauch does have the cocktail to rely on; there’s no word of cocktails being tested on an electronic tongue, not yet.

Historically, Canada has been a beer drinkers’ nation. According to data collected in 2010, we rank fifth in the world (following the Czech Republic, Germany, Austria, and Ireland, in that order), as found in the Wikipedia essay: List of countries by beer consumption per capita. For anyone who’s curious about Canadian beer drinkers’ perspectives, I found this blog, The Great Canadian Beer Snob (as of 2012 the blog owner, Matt Williams, lived in Victoria, BC), which I suspect was a name chosen tongue-in-cheek.