Category Archives: robots

UK’s National Physical Laboratory reaches out to ‘BioTouch’ MIT and UCL

This March 27, 2014 news item on Azonano is an announcement for a new project featuring haptics and self-assembly,

NPL (UK’s National Physical Laboratory) has started a new strategic research partnership with UCL (University College London) and MIT (Massachusetts Institute of Technology) focused on haptic-enabled sensing and micromanipulation of biological self-assembly – BioTouch.

The NPL March 27, 2014 news release, which originated the news item, is accompanied by a rather interesting image,

A computer operated dexterous robotic hand holding a microscope slide with a fluorescent human cell (not to scale) embedded into a synthetic extracellular matrix. Courtesy: NPL

The news release goes on to describe the BioTouch project in more detail (Note: A link has been removed),

The project will probe sensing and application of force and related vectors specific to biological self-assembly as a means of synthetic biology and nanoscale construction. The overarching objective is to enable the re-programming of self-assembled patterns and objects by directed micro-to-nano manipulation with compliant robotic haptic control.

This joint venture, funded by the European Research Council, EPSRC and NPL’s Strategic Research Programme, is a rare blend of interdisciplinary research bringing together expertise in robotics, haptics and machine vision with synthetic and cell biology, protein design, and super- and high-resolution microscopy. The research builds on the NPL’s pioneering developments in bioengineering and imaging and world-leading haptics technologies from UCL and MIT.

Haptics is an emerging enabling tool for sensing and manipulation through touch, which holds particular promise for the development of autonomous robots that need to perform human-like functions in unstructured environments. However, the path to all such applications is hampered by the lack of a compliant interface between a predictably assembled biological system and a human user. This research will enable human directed micro-manipulation of experimental biological systems using cutting-edge robotic systems and haptic feedback.

Recently the UK government has announced ‘eight great technologies’ in which Britain is to become a world leader. Robotics, synthetic biology, regenerative medicine and advanced materials are four of these technologies for which this project serves as a merging point providing thus an excellent example of how multidisciplinary collaborative research can shape our future.

If I read this rightly, it means they’re trying to design systems where robots work directly with materials in the lab while humans direct the robots’ actions from a remote location. My best example of this (it’s not a laboratory example) is a surgery where a robot actually performs the work while a human directs the robot’s actions based on haptic (touch) information the human receives from the robot. Surgeons don’t necessarily see what they’re dealing with; they may be feeling it with their fingers (haptic information). In effect, the robot’s hands become an extension of the surgeon’s hands. I imagine using a robot’s ‘hands’ would also allow for less invasive procedures to be performed.
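For the technically inclined, the basic control idea behind this kind of haptic teleoperation can be sketched in a few lines of Python. This is a toy illustration of the general position-forward/force-back pattern, not the BioTouch system itself; the scale factors and the stub device class are invented for the example.

    import numpy as np

    MOTION_SCALE = 1e-3   # scale hand motion down: 1 cm at the hand -> 10 micrometres at the tool
    FORCE_SCALE = 1e3     # scale tool-tip forces up so the human hand can feel them

    class StubDevice:
        """Stands in for a real haptic-device or robot driver API."""
        def __init__(self):
            self.position = np.zeros(3)   # metres
            self.force = np.zeros(3)      # newtons

        def read_position(self):
            return self.position

        def command_position(self, p):
            self.position = p

        def read_force(self):
            return self.force

        def render_force(self, f):
            self.force = f

    def teleoperation_step(master, slave):
        # Position forward: the operator's hand motion, scaled down, drives the tool.
        slave.command_position(master.read_position() * MOTION_SCALE)
        # Force back: the contact force at the tool, scaled up, is felt by the hand.
        master.render_force(slave.read_force() * FORCE_SCALE)

    master, slave = StubDevice(), StubDevice()
    master.position = np.array([0.01, 0.0, 0.0])   # operator moves 1 cm
    teleoperation_step(master, slave)
    print(slave.position)   # tool has moved 10 micrometres

A real system would run an exchange like this on the order of a thousand times per second so that the reflected forces feel continuous to the operator’s hand.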

Should we love our robots or are robots going to be smarter than we are? TED’s 2014 All Stars Session 5: The Future is Ours (maybe)

Rodney Brooks seems to be a man who loves robots, from his TED biography,

Rodney Brooks builds robots based on biological principles of movement and reasoning. The goal: a robot who can figure things out.

MIT professor Rodney Brooks studies and engineers robot intelligence, looking for the holy grail of robotics: the AGI, or artificial general intelligence. For decades, we’ve been building robots to do highly specific tasks — welding, riveting, delivering interoffice mail — but what we all want, really, is a robot that can figure things out on its own, the way we humans do.

Brooks makes a plea for easy-to-use (and easy-to-program) robots and mentions his Baxter robot as an example that should be improved; Brooks issues a challenge to make robots better. (Baxter was used as the base for EDI, introduced earlier in TED’s 2014 Session 8 this morning [March 20, 2014].)

By contrast, Sir Martin Rees, astrophysicist, has some concerns about robots and artificial intelligence, as per my Nov. 26, 2012 posting about his (and others’) proposal to create the Cambridge Project for Existential Risk. From his TED biography,

Martin Rees, one of the world’s most eminent astronomers, is a professor of cosmology and astrophysics at the University of Cambridge and the UK’s Astronomer Royal. He is one of our key thinkers on the future of humanity in the cosmos.

Sir Martin Rees has issued a clarion call for humanity. His 2004 book, ominously titled Our Final Hour, catalogues the threats facing the human race in a 21st century dominated by unprecedented and accelerating scientific change. He calls on scientists and nonscientists alike to take steps that will ensure our survival as a species.

Rees states that the worst threats to planetary survival now come from humans, not, as in the past, from nature. While science offers great possibilities, it has an equally dark side. Rees suggests robots going rogue, activists hijacking synthetic biology to winnow out the population, and more. He suggests there is a 50% chance that we could suffer a devastating setback. Rees then mentions the proposed Cambridge Centre for Existential Risk and the importance of studying the possibility of human extinction and ways to mitigate the risk.

Steven Johnson, writer, was introduced next (from his TED biography),

Steven Berlin Johnson examines the intersection of science, technology and personal experience.

A dynamic writer and speaker, Johnson crafts captivating theories that draw on a dizzying array of disciplines, without ever leaving his audience behind. Author Kurt Anderson described Johnson’s book Emergence as “thoughtful and lucid and charming and staggeringly smart.” The same could be said for Johnson himself. His big-brained, multi-disciplinary theories make him one of his generation’s more intriguing thinkers. His books take the reader on a journey — following the twists and turns his own mind makes as he connects seemingly disparate ideas: ants and cities, interface design and Victorian novels.

He will be hosting a new PBS (Public Broadcasting Service) series, ‘How We Got to Now’ (mentioned in Hector Tobar’s Aug. 7, 2013 article about the series in the Los Angeles Times), and this talk sounds like it might be a preview of sorts. Johnson plays a recording made some 20 years before Thomas Edison ‘first’ recorded sound. The story he shares is about an inventor who didn’t think to include a playback feature for his recordings. He simply didn’t think about it, as he was interested in doing something else (I can’t quite remember what that was now), and, consequently, his invention and work were lost for decades. Despite that, they form part of the sound recording story. Thankfully, modern sound recording engineers have developed a technique which allows us to hear those ‘lost’ sounds today.

Traffic robots in Kinshasa (Democratic Republic of the Congo) developed by an all-women team of engineers

Kinshasa, the capital of the Democratic Republic of the Congo (DRC), now hosts two traffic cop robots, with hopes for more of these solar-powered traffic regulators on the way. Before plunging into the story, here’s a video of these ‘gendarmes automates’ (or ‘robot roulage intelligent’ [RRI], as the inventors prefer) in action,

This story has been making the English language news rounds since late last year when Voxafrica carried a news item, dated Dec. 27, 2013, about the robot traffic cops,

Kinshasa has adopted an innovative way of managing traffic along its city streets, by installing robot cops to direct and monitor traffic along roads instead of using normal policemen to reduce congestion. … They may not have real eyes, but the new traffic policemen still sport Kinshasa’s usual signature cop sunglasses. The prototypes are equipped with four cameras that allow them to record traffic flow … . The team behind the new robots are a group of Congolese engineers based at the Kinshasa Higher Institute of Applied Technique, known by its French acronym, ISTA.

A Jan. 30, 2014 article by Matt McFarland for the Washington Post provides additional detail (Note: A link has been removed),

The solar-powered robot is equipped with multiple cameras, opening the potential for monitoring traffic and issuing tickets. “If a driver says that it is not going to respect the robot because it’s just a machine the robot is going to take that and there will be a ticket for him,” said Isaie Therese, the engineer behind the project, in an interview with CCTV Africa. “We are a poor country and our government is looking for money. And I will tell you that with the roads the government has built, it needs to recover its money.”

A Feb. 24, 2014 CNN article by Teo Kermeliotis describes the casings for the robots,

Standing eight feet tall, the robot traffic wardens are on duty 24 hours a day, their towering — even scarecrow-like — mass visible from afar. …

The humanoids, which are installed on Kinshasa’s busy Triomphal and Lumumba intersections, are built of aluminum and stainless steel to endure the city’s year-round hot climate.

The French-language press, as might be expected since the DRC is a francophone country, was the first to tell the story. From a June 28, 2013 news item on Radio Okapi’s website (quoted here in my rough translation from the French),

Engineers trained at the Higher Institute of Applied Techniques (Ista) have developed an intelligent robot that regulates road traffic. …

The robot, which runs on solar power, will also provide road safety through video surveillance. It can store the data it collects for six months.

The ‘robot roulage intelligent’ is an entirely Congolese invention. It was developed by Congolese inventors with financial support from the association Women Technologies, an association of women engineers of the DRC.

The prototype costs nearly US$20,000. The association Women Technologies is waiting on government funding to reproduce the robot, put it at the disposal of road users and even export it.

As of June 2013, then, this all-Congolese invention, from design through development to funding, was waiting on government money to put more robots on the road. Clearly, the money came through.

A January 30, 2014 news item on AfricaNouvelles focused on the lead engineer and the team’s hopes for future exports of their technology (again, my rough translation),

Maman Thérèse Inza is an engineer and is responsible for the traffic-regulating robots in Kinshasa.

The association Women Technologies is awaiting government support in order to export the robots internationally.

Bruno Bonnell’s Feb. 11, 2014 (?) article for Les Echos delves more deeply into the project and the team’s hopes of exporting their technology (my rough translation),

Since October 2013, traffic at the Parliament intersection on Boulevard Lumumba in Kinshasa is no longer directed by a police officer. An aluminum robot 2.5 metres tall regulates the flow along one of the main arteries of the Congolese capital. …

“A robot that handles road safety and traffic regulation: that’s truly made in Congo,” says Thérèse Inza, president of the Women Technology association, which built these machines, designed to withstand the rigours of the equatorial climate and powered by solar panels so they can operate in neighbourhoods that are not connected to the electrical grid. The association’s founder originally wanted to create opportunities for Congolese women holding engineering degrees. Thanks to the robots, she now plans to create jobs across the whole country. … These RRIs prove that robotics is developing in Africa too. Inza is bold: “We must sell our intelligence to other countries, in central Africa and elsewhere. Why not the United States, Europe or Asia?” Between 2008 and 2012, demand for bandwidth in Africa grew twentyfold, on a continent that gave birth to the M-Pesa mobile banking system and the Ushahidi disaster-management platform, both used today around the world. What if robotics, over which no country holds a monopoly, is an industrial opportunity Africa must not miss?

A few details worth highlighting: the first robot went into service in October 2013; the robots were designed specifically for the equatorial climate and for areas where access to electricity is nonexistent or difficult; and Thérèse Inza recruited her team of women engineers from ISTA. I think she was initially trying to create jobs for women engineers; now that the robots have been successful, she hopes to create jobs for everyone throughout the DRC and to export the technology to the US, Europe, and Asia.

The last sentence notes that Africa (Kenya, specifically) was the birthplace of the mobile banking service M-Pesa, “the most developed mobile payment system in the world” according to Wikipedia, and of Ushahidi, a platform which enables crowdsourced reporting and mapping of information about natural and other disasters.

Ushahidi, like M-Pesa, was developed in Kenya. I found this Feb. 27, 2014 article by Herman Manson on MarkLives.com about Ushahidi and one of its co-founders, Juliana Rotich (Note: A link has been removed),

Rotich [Juliana Rotich] is the co-founder of Ushahidi, the open-source software developed in Kenya which came to the fore and caught global attention for collecting, visualising and mapping information in the aftermath of the disputed 2008 elections.

Rotich challenges the legacies that have stymied the development of Africa’s material and cultural resources — be that broadband cables connecting coastal territories and ignoring the continent’s interior — or the political classes continuing to exploit its citizens.

Ushahidi means “witness” or “testimony”, and allows ordinary people to crowd source and map information, turning them into everything from election monitors reporting electoral misconduct to helpers assisting with the direction of emergency response resources during natural disasters.

The open source software is now available in 30 languages and across the globe.

The Rotich article is a preview of sorts for Design Indaba 2014, held in Cape Town, South Africa, from Feb. 24 to March 2, 2014.

Getting back to the robot traffic cops, perhaps one day the inventors will come up with a design that runs on rain and an implementation that can function in Vancouver.

Making nanoelectronic devices last longer in the body could lead to ‘cyborg’ tissue

An American Chemical Society (ACS) Feb. 19, 2014 news release (also on EurekAlert) describes some research devoted to extending a nanoelectronic device’s ‘life’ when implanted in the body,

The debut of cyborgs who are part human and part machine may be a long way off, but researchers say they now may be getting closer. In a study published in ACS’ journal Nano Letters, they report development of a coating that makes nanoelectronics much more stable in conditions mimicking those in the human body. [emphases mine] The advance could also aid in the development of very small implanted medical devices for monitoring health and disease.

Charles Lieber and colleagues note that nanoelectronic devices with nanowire components have unique abilities to probe and interface with living cells. They are much smaller than most implanted medical devices used today. For example, a pacemaker that regulates the heart is the size of a U.S. 50-cent coin, but nanoelectronics are so small that several hundred such devices would fit in the period at the end of this sentence. Laboratory versions made of silicon nanowires can detect disease biomarkers and even single virus cells, or record heart cells as they beat. Lieber’s team also has integrated nanoelectronics into living tissues in three dimensions — creating a “cyborg tissue.” One obstacle to the practical, long-term use of these devices is that they typically fall apart within weeks or days when implanted. In the current study, the researchers set out to make them much more stable.

They found that coating silicon nanowires with a metal oxide shell allowed nanowire devices to last for several months. This was in conditions that mimicked the temperature and composition of the inside of the human body. In preliminary studies, one shell material appears to extend the lifespan of nanoelectronics to about two years.

Depending on how you define the term cyborg, it could be said there are already cyborgs amongst us, as I noted in an April 20, 2012 posting titled ‘My mother is a cyborg’. Personally, I’m fascinated by the news release’s mention of ‘cyborg tissue’, although there’s no further explanation of what the term might mean.

For the curious, here’s a link to and a citation for the paper,

Long Term Stability of Nanowire Nanoelectronics in Physiological Environments by Wei Zhou, Xiaochuan Dai, Tian-Ming Fu, Chong Xie, Jia Liu, and Charles M. Lieber. Nano Lett., Article ASAP. DOI: 10.1021/nl500070h. Publication Date (Web): January 30, 2014. Copyright © 2014 American Chemical Society.

This paper is behind a paywall.

Beer drinkers weep into their pints on hearing news of electronic tongue

First, it was the wine drinkers (see my July 28, 2011 posting, titled ‘Bio-inspired electronic tongue replaces sommelier?’, about research performed by Spanish researchers at the UAB [Universitat Autònoma de Barcelona]); now, these researchers have turned their attention to beer. From a Jan. 30, 2014 news release on EurekAlert,

Beer is the oldest and most widely consumed alcoholic drink in the world. Now, scientists at the Autonomous University of Barcelona have led a study which analysed several brands of beer by applying a new concept in analysis systems, known as an electronic tongue, the idea for which is based on the human sense of taste.

As Manel del Valle, the main author of the study, explains to SINC [Spain's state public agency specialising in science, technology and innovation information]: “The concept of the electronic tongue consists in using a generic array of sensors, in other words with generic response to the various chemical compounds involved, which generate a varied spectrum of information with advanced tools for processing, pattern recognition and even artificial neural networks.”

In this case, the array of sensors was formed of 21 ion-selective electrodes, including some with response to cations (ammonium, sodium), others with response to anions (nitrate, chloride, etc.), as well as electrodes with generic (unspecified) response to the varieties considered.

The authors recorded the multidimensional response generated by the array of sensors and how this was influenced by the type of beer considered. An initial analysis enabled them to change coordinates to view the grouping better, although it was not effective for classifying the beers.

“Using more powerful tools – supervised learning – and linear discriminant analysis did enable us to distinguish between the main categories of beer we studied: Schwarzbier, lager, double malt, Pilsen, Alsatian and low-alcohol,” Del Valle continues, “and with a success rate of 81.9%.”
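To give a concrete sense of what ‘supervised learning’ and ‘linear discriminant analysis’ mean here, the following is a minimal Python sketch (using scikit-learn) of this kind of pipeline: each beer sample becomes a 21-number vector, one reading per electrode, and a discriminant classifier learns to separate the declared categories. The data below are randomly generated stand-ins, not the study’s measurements.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    styles = ["Schwarzbier", "lager", "double malt", "Pilsen", "Alsatian", "low-alcohol"]

    # Stand-in data: 40 samples per style, 21 electrode readings per sample,
    # with each style given a slightly different mean response profile.
    X = np.vstack([rng.normal(loc=i, scale=2.0, size=(40, 21)) for i in range(len(styles))])
    y = np.repeat(styles, 40)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    print(f"success rate: {lda.score(X_test, y_test):.1%}")

The real study’s 81.9% success rate reflects the fact that genuine beers overlap far more than these tidy synthetic clusters do.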

It seems the electronic tongue does have one drawback,

Furthermore, it is worth noting that varieties of beers that the tongue is not trained to recognise, such as beer/soft drink mixes or foreign makes, were not identified (discrepant samples), which, according to the experts, validates the system as it does not recognise brands for which it was not trained.

Future plans, according to the news release, include,

In view of the ordering of the varieties, which followed their declared alcohol content, the scientists estimated this content with a numerical model developed with an artificial neural network.

“This application could be considered a sensor by software, as the ethanol present does not respond directly to the sensors used, which only respond to the ions present in the solution,” outlines the researcher.
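In code terms, a ‘sensor by software’ is simply a regression model: it infers a quantity no electrode measures directly (ethanol) from the ionic pattern the electrodes do measure. Here is a minimal sketch along the same lines as the classifier above, again with invented data standing in for real measurements.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(240, 21))            # 240 samples x 21 electrode readings
    # Pretend the declared alcohol content co-varies with a few of the ions.
    abv = 4.0 + 2.0 * X[:, :5].mean(axis=1)   # invented targets, % alcohol by volume

    # A small artificial neural network maps the ionic fingerprint to alcohol
    # content, even though no electrode responds to ethanol itself.
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    net.fit(X, abv)
    print(net.predict(X[:3]))                 # estimated alcohol content, first three samples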

The study concludes that these tools could one day give robots a sense of taste, and even supplant panels of tasters in the food industry to improve the quality and reliability of products for consumption.

Here’s a link to and a citation for the paper,

Beer classification by means of a potentiometric electronic tongue by Xavier Cetó, Manuel Gutiérrez-Capitán, Daniel Calvo, and Manel del Valle. Food Chemistry, Volume 141, Issue 3, 1 December 2013, Pages 2533–2540. DOI: 10.1016/j.foodchem.2013.05.091

I’d imagine that anyone who dreams of becoming a beer taster might want to consider some future alternatives. As for folks like Canadian Kevin Brauch, “host of The Thirsty Traveler [on the Cooking Channel], … about the world’s greatest beer, wine and cocktails,” he will no doubt claim, should he become aware of his competitor, that a robot is not likely to express likes/dislikes or more nuanced opinions. Besides, Brauch still has the cocktail to rely on; there’s no word of cocktails being tested on an electronic tongue, not yet.

Historically, Canada has been a beer drinkers’ nation. According to data collected in 2010 and found in the Wikipedia essay ‘List of countries by beer consumption per capita’, we rank fifth in the world (following the Czech Republic, Germany, Austria, and Ireland, in that order). For anyone who’s curious about Canadian beer drinkers’ perspectives, I found this blog, The Great Canadian Beer Snob (as of 2012 the blog owner, Matt Williams, lived in Victoria, BC), which I suspect was a name chosen tongue-in-cheek.

Get yourself some e-whiskers for improved tactile sensing

E-whiskers are highly responsive tactile sensor networks made from carbon nanotubes and silver nanoparticles that resemble the whiskers of cats and other mammals. Courtesy: Berkeley Labs [downloaded from http://newscenter.lbl.gov/science-shorts/2014/01/20/e-whiskers/]

A Jan. 21, 2014 news item on Azonano features work from researchers who have simulated the sensitivity of cats’ and rats’ whiskers by creating e-whiskers,

Researchers with Berkeley Lab and the University of California (UC) Berkeley have created tactile sensors from composite films of carbon nanotubes and silver nanoparticles similar to the highly sensitive whiskers of cats and rats. These new e-whiskers respond to pressure as slight as a single Pascal, about the pressure exerted on a table surface by a dollar bill. Among their many potential applications is giving robots new abilities to “see” and “feel” their surrounding environment.
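As a rough back-of-the-envelope check on that comparison (my arithmetic, not the lab’s): a US dollar bill has a mass of about one gram and an area of roughly 15.6 cm x 6.6 cm, or 0.0103 square metres, so it presses on the table with (0.001 kg x 9.8 m/s²) / 0.0103 m² ≈ 0.95 pascals, which is indeed about a single pascal.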

The Jan. 20, 2014 Lawrence Berkeley National Laboratory (Berkeley Lab) ‘science short’ by Lynn Yarris, which originated the news item, provides more information about the research,

“Whiskers are hair-like tactile sensors used by certain mammals and insects to monitor wind and navigate around obstacles in tight spaces,” says the leader of this research Ali Javey, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of electrical engineering and computer science.  “Our electronic whiskers consist of high-aspect-ratio elastic fibers coated with conductive composite films of nanotubes and nanoparticles. In tests, these whiskers were 10 times more sensitive to pressure than all previously reported capacitive or resistive pressure sensors.”

Javey and his research group have been leaders in the development of e-skin and other flexible electronic devices that can interface with the environment. In this latest effort, they used a carbon nanotube paste to form an electrically conductive network matrix with excellent bendability. To this carbon nanotube matrix they loaded a thin film of silver nanoparticles that endowed the matrix with high sensitivity to mechanical strain.

“The strain sensitivity and electrical resistivity of our composite film is readily tuned by changing the composition ratio of the carbon nanotubes and the silver nanoparticles,” Javey says. “The composite can then be painted or printed onto high-aspect-ratio elastic fibers to form e-whiskers that can be integrated with different user-interactive systems.”

Javey notes that the use of elastic fibers with a small spring constant as the structural component of the whiskers provides large deflection and therefore high strain in response to the smallest applied pressures. As proof-of-concept, he and his research group successfully used their e-whiskers to demonstrate highly accurate 2D and 3D mapping of wind flow. In the future, e-whiskers could be used to mediate tactile sensing for the spatial mapping of nearby objects, and could also lead to wearable sensors for measuring heartbeat and pulse rate.
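As a rough illustration of how an array of whiskers yields a wind map, here’s a toy Python reconstruction. It assumes each whisker’s fractional resistance change is proportional to its bending strain, and that each whisker responds mainly to the wind component along its own orientation; the gauge factor and the cosine response model are assumptions invented for the example, not values from the paper.

    import numpy as np

    # Seven whiskers fanned out at different orientations, as in the pictured array.
    angles = np.deg2rad(np.arange(7) * 360 / 7)
    GAUGE = 5.0   # assumed sensitivity: fractional resistance change per unit strain

    def wind_estimate(dR_over_R):
        """Least-squares fit of a 2D wind vector from seven whisker signals."""
        strains = np.asarray(dR_over_R) / GAUGE
        A = np.column_stack([np.cos(angles), np.sin(angles)])
        wind, *_ = np.linalg.lstsq(A, strains, rcond=None)
        return wind   # (x, y) components, arbitrary units

    # Sanity check: synthesize the signals a wind along +x would produce.
    true_wind = np.array([1.0, 0.0])
    signals = GAUGE * (np.column_stack([np.cos(angles), np.sin(angles)]) @ true_wind)
    print(wind_estimate(signals))   # recovers approximately [1. 0.]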

“Our e-whiskers represent a new type of highly responsive tactile sensor networks for real time monitoring of environmental effects,” Javey says. “The ease of fabrication, light weight and excellent performance of our e-whiskers should have a wide range of applications for advanced robotics, human-machine user interfaces, and biological applications.”

The researchers’ paper has been published in the Proceedings of the National Academy of Sciences and is titled: “Highly sensitive electronic whiskers based on patterned carbon nanotube and silver nanoparticle composite films.”

Here’s what the e-whiskers look like,

An array of seven vertically placed e-whiskers was used for 3D mapping of the wind by Ali Javey and his group [Kuniharu Takei, Zhibin Yu, Maxwell Zheng, Hiroki Ota and Toshitake Takahashi]. Courtesy: Berkeley Lab

RoboEarth (robot internet) gets examined in hospital

RoboEarth, sometimes referred to as a robot internet or a robot world wide web, is being tested this week by a team of researchers at Eindhoven University of Technology (Technische Universiteit Eindhoven, Netherlands) and their colleagues at Philips, ETH Zürich, TU München, and the universities of Zaragoza and Stuttgart, according to a Jan. 14, 2014 news item on BBC (British Broadcasting Corporation) News online,

A world wide web for robots to learn from each other and share information is being shown off for the first time.

Scientists behind RoboEarth will put it through its paces at Eindhoven University in a mocked-up hospital room.

Four robots will use the system to complete a series of tasks, including serving drinks to patients.

It is the culmination of a four-year project, funded by the European Union.

The eventual aim is that both robots and humans will be able to upload information to the cloud-based database, which would act as a kind of common brain for machines.

There’s a bit more detail in Victoria Turk’s Jan. 13 (?), 2014 article for motherboard.vice.com (Note: A link has been removed),

A hospital-like setting is an ideal test for the project, because where RoboEarth could come in handy is in helping out humans with household tasks. A big problem for robots at the moment is that human environments tend to change a lot, whereas robots are limited to the very specific movements and tasks they’ve been programmed to do.

“To enable robots to successfully lend a mechanical helping hand, they need to be able to deal flexibly with new situations and conditions,” explains a post by the University of Eindhoven. “For example you can teach a robot to bring you a cup of coffee in the living room, but if some of the chairs have been moved the robot won’t be able to find you any longer. Or it may get confused if you’ve just bought a different set of coffee cups.”

And of course, it wouldn’t just be limited to robots working explicitly together. The Wikipedia-like knowledge base is more like an internet for machines, connecting lonely robots across the globe.

A Jan. 10, 2014 Eindhoven University of Technology news release provides some insight into what the researchers want to accomplish,

“The problem right now is that robots are often developed specifically for one task”, says René van de Molengraft, TU/e  [Eindhoven University of Technology] researcher and RoboEarth project leader. “Everyday changes that happen all the time in our environment make all the programmed actions unusable. But RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database. As well as that, computing and ‘thinking’ tasks can be carried out by the system’s ‘cloud engine’, so the robot doesn’t need to have as much computing or battery power on‑board.”

It means, for example, that a robot can image a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can use that map on RoboEarth to locate a glass of water immediately, without having to search for it endlessly. In the same way a task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box.
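In software terms, the sharing pattern being described amounts to a common networked knowledge base that robots read and write. Here is a deliberately generic Python sketch of the idea; the endpoint URL and record format are hypothetical illustrations, not RoboEarth’s actual interface.

    import json
    import urllib.request

    BASE = "http://knowledge-base.example.org"   # hypothetical RoboEarth-style service

    def upload_map(room_id, occupancy_grid, resolution_m):
        """Robot A shares the map it has built of a hospital room."""
        record = {"type": "2d_map", "room": room_id,
                  "resolution_m": resolution_m, "grid": occupancy_grid}
        req = urllib.request.Request(f"{BASE}/maps/{room_id}",
                                     data=json.dumps(record).encode(),
                                     headers={"Content-Type": "application/json"},
                                     method="PUT")
        urllib.request.urlopen(req)

    def fetch_map(room_id):
        """Robot B, which has never seen the room, reuses A's map."""
        with urllib.request.urlopen(f"{BASE}/maps/{room_id}") as resp:
            return json.loads(resp.read())

The same pattern extends to shared ‘action recipes’ (say, how to open a particular type of pill box), and offloading heavy computation to the project’s cloud engine is what lets an individual robot get by with less on-board processing and battery power.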

There’s no word as to exactly when this test, which is being demonstrated to a delegation from the European Commission (the project’s funder) with four robots and two simulated hospital rooms, is being held.

I first wrote about RoboEarth in a Feb. 14, 2011 posting (scroll down about 1/4 of the way) and again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta.

Almost Human (tv series), smartphones, and anxieties about life/nonlife

The US-based Fox Broadcasting Company is set to premiere a new futuristic television series, Almost Human, over two nights, Nov. 17 and 18, 2013, for US and Canadian viewers. Here’s a description of the premise from its Wikipedia essay (Note: Links have been removed),

The series is set thirty-five years in the future when humans in the Los Angeles Police Department are paired up with lifelike androids; a detective who has a dislike for robots partners with an android capable of emotion.

One of the showrunners, Naren Shankar, seems to have been functioning both as a science consultant and as a crime-writing consultant, in addition to his other duties. From a Sept. 4, 2013 article by Lisa Tsering for Indiawest.com,

FOX is the latest television network to utilize the formidable talents of Naren Shankar, an Indian American writer and producer best known to fans for his work on “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and “Star Trek: The Next Generation” as well as “Farscape,” the recently cancelled ABC series “Zero Hour” and “The Outer Limits.”

Set 35 years in the future, “Almost Human” stars Karl Urban and Michael Ealy as a crimefighting duo of a cop who is part-machine and a robot who is part-human. [emphasis mine]

“We are extrapolating the things we see today into the near future,” he explained. For example, the show will comment on the pervasiveness of location software, he said. “There will also be issues of technology such as medical ethics, or privacy; or how technology enables the rich but not the poor, who can’t afford it.”

Speaking at Comic-Con July 20 [2013], Shankar told media there, “Joel [J.H. Wyman] was looking for a collaboration with someone who had come from the crime world, and I had worked on ‘CSI’ for eight years.

“This is like coming back to my first love, since for many years I had done science fiction. It’s a great opportunity to get away from dismembered corpses and autopsy scenes.”

There’s plenty of drama — in the new series, the year is 2048, and police officer John Kennex (Karl Urban, “Dr. Bones” from the new “Star Trek” films) is trying to bounce back from one of the most catastrophic attacks ever made against the police department. Kennex wakes up from a 17-month coma and can’t remember much, except that his partner was killed; his girlfriend left him and one of his legs has been amputated and is now outfitted with a high-tech synthetic appendage. According to police department policy, every cop must partner with a robot, so Kennex is paired with Dorian (Ealy), an android with an unusual glitch that makes it have human emotions.

Shankar took an unusual path into television. He started college at age 16 and attended Cornell University, where he earned a B.Sc., an M.S., and a Ph.D. in engineering physics and electrical engineering, and was a member of the elite Kappa Alpha Society. Deciding he didn’t want to work as a scientist, he moved to Los Angeles to try to become a writer.

Shankar is eager to move in a new direction with “Almost Human,” which he says comes at the right time. “People are so technologically sophisticated now that maybe the audience is ready for a show like this,” he told India-West.

I am particularly intrigued by the ‘man who’s part machine and machine that’s part human’ concept (something I’ve called machine/flesh in previous postings, such as my May 9, 2012 posting titled ‘Everything becomes part machine’), and, given they had an engineer on the team (albeit one with lots of crime-writing experience), I was looking forward to seeing how they would integrate this concept, along with some of the more recent scientific work on prosthetics and robots, into the stories. Sadly, only days after Tsering’s article was published, Shankar parted ways with Almost Human, according to the Sept. 10, 2013 posting on the Almost Human blog,

So this was supposed to be the week that I posted a profile of Naren Shankar, for whom I have developed a full-on crush–I mean, he has a PhD in Electrical Engineering from Cornell, he was hired by Gene Roddenberry to be science consultant on TNG, he was saying all sorts of great things about how he wanted to present the future in AH…aaaand he quit as co-showrunner yesterday, citing “creative differences.” That leaves Wyman as sole showrunner, with no plans to replace Shankar.

I’d like to base some of my comments on the previews; unfortunately, Fox Broadcasting, in its infinite wisdom, has decided to block Canadians from watching Almost Human previews online. (Could someone please explain why? Canadians will be tuning in to watch, or to record for future viewing, the series premiere on the 17th and 18th of November 2013 just like our US neighbours, so why can’t we watch the previews online?)

Getting back to machine/flesh (humans with prosthetics) and life/nonlife (androids with feelings), it seems that Almost Human (like the latest version of Battlestar Galactica, from 2004-2009) may be giving a popular culture voice to some contemporary anxieties about the boundary, or lack thereof, between humans and machines and between life and nonlife. I’ve touched on this topic many times, both within and without the popular culture context. Probably one of my more comprehensive essays on machine/flesh is ‘Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism’ from August 30, 2011, which includes this quote from a still earlier posting on this topic,

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor [Gregor Wolbring, University of Calgary] is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman's Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You'll Envy for Fast Company]

Here’s something else from the Hochman article,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

Bailey isn’t ‘almost human’, he’s ‘above human’. As Hochman points out repeatedly throughout his article, this sentiment is not confined to Bailey. My guess is that Kennex (Karl Urban’s character) in Almost Human doesn’t echo Bailey’s sentiments and, instead, feels he’s not quite human, while the android, Dorian (Michael Ealy’s character), struggles with his feelings in a human way that clashes with Kennex’s perspective on what is human and what is not (or what might be called the boundary between life and nonlife).

Into this mix one could add the rising anxiety around ‘intelligent’ machines, present in real life as well as in fiction, as per this November 12 (?), 2013 article by Ian Barker for Beta News,

The rise of intelligent machines has long been fertile ground for science fiction writers, but a new report by technology research specialists Gartner suggests that the future is closer than we think.

“Smartphones are becoming smarter, and will be smarter than you by 2017,” says Carolina Milanesi, research vice president at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data”.

Your smartphone will be able to predict your next move or your next purchase based on what it knows about you. This will be made possible by gathering data using a technique called “cognizant computing”.
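Stripped of the futurism, the behaviour Milanesi describes is rule-based reasoning over contextual data the phone already holds (calendar, location, traffic). A toy Python sketch of the traffic example, with all names, thresholds, and data invented:

    from datetime import datetime, timedelta

    def plan_morning(meeting_time, meeting_with, usual_commute, traffic_delay):
        """Wake the user earlier if there is heavy traffic before a meeting
        with the boss; for anyone else, just send an apology."""
        if traffic_delay > timedelta(minutes=15):
            if meeting_with == "boss":
                wake_time = meeting_time - usual_commute - traffic_delay
                return f"alarm moved to {wake_time:%H:%M}"
            return f"sending apology to {meeting_with} for running late"
        return "no action needed"

    print(plan_morning(datetime(2013, 11, 14, 9, 0), "boss",
                       timedelta(minutes=30), timedelta(minutes=25)))
    # -> alarm moved to 08:05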

Gartner analysts will be discussing the future of smart devices at the Gartner Symposium/ITxpo 2013 in Barcelona from November 10-14 [2013].

The Gartner Symposium/ITxpo in Barcelona is ending today (Nov. 14, 2013), but should you be curious about it, you can go here to learn more.

This notion that machines might (or will) get smarter or more powerful than humans (or wizards) is explored by Will.i.am (of the Black Eyed Peas) and futurist Brian David Johnson in their upcoming comic book, Wizards and Robots (mentioned in my Oct. 6, 2013 posting). This notion of machines or technology overtaking human life is also being discussed at the University of Cambridge, where there’s talk of founding a Centre for the Study of Existential Risk (from my Nov. 26, 2012 posting).

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for the Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Our emerging technologies give rise to questions about what constitutes life and where humans might fit in. For example,

  • are sufficiently advanced machines a new form of life?
  • what does it mean when human bodies are partially integrated at the neural level with machinery?
  • what happens when machines have feelings?
  • etc.

While this doesn’t exactly fit into my theme of life/nonlife or machine/flesh, this does highlight how some popular culture efforts are attempting to integrate real science into the storytelling. Here’s an excerpt from an interview with Cosima Herter, the science consultant and namesake/model for one of the characters on Orphan Black (from the March 29, 2013 posting on the space.ca blog),

Cosima Herter is Orphan Black’s Science Consultant, and the inspiration for her namesake character in the series. In real-life, Real Cosima is a PhD. student in the History of Science, Technology, and Medicine Program at the University of Minnesota, working on the History and Philosophy of Biology. Hive interns Billi Knight & Peter Rowley spoke with her about her role on the show and the science behind it…

Q: Describe your role in the making of Orphan Black.

A: I’m a resource for the biology, particularly insofar as evolutionary biology is concerned. I study the history and the philosophy of biology, so I do offer some suggestions and some creative ideas, but also help correct some of the misconceptions about science.  I offer different angles and alternatives to look at the way biological science is represented, so (it’s) not reduced to your stereotypical tropes about evolutionary biology and cloning, but also to provide some accuracy for the scripts.


For anyone not familiar with the series, from the Wikipedia essay (Note: Links have been removed),

Orphan Black is a Canadian science fiction television series starring Tatiana Maslany as several identical women who are revealed to be clones.

Science events in Vancouver (Canada) for June 7 and June 13, 2013

There’s a University of British Columbia CIHR (Canadian Institutes for Health Research) Café Scientifique event taking place tonight, June 7, 2013. From the event webpage,

June 7, 2013

Blusson Spinal Cord Centre [this is one of the buildings that form the Vancouver General Hospital complex]
7:00 pm

Map & Directions

Join ICORD engineer Dr. Peter Cripton and physician Dr. Peter Wing for refreshments and informal discussion about strategies and devices to prevent spinal cord injuries.
Moderated by Dr. Chris McBride, Executive Director, SCI-BC

No charge • Everyone welcome • Registration required.

You can register here, but there is currently a waitlist. I think the reason for the event’s popularity can be intuited by reading this event description,

Join ICORD engineer and UBC mechanical engineering prof. Peter Cripton and spine surgeon Dr. Peter Wing at the next Canadian Institutes of Health Research (CIHR) Café Scientifique for an informal discussion about strategies and devices to prevent spinal cord injuries.

The Café provides a forum for health researchers to connect directly with the public and broad local research communities in an informal setting. Cripton and Wing will be joined by film and animation producer and injury prevention speaker Kirsten Sharp. [emphasis mine]

The words ‘film and animation’ attracted my attention, and I’m assuming the same would be true of others who might not usually attend a talk about spinal cord injuries.

For those who require a little more notice, there’s a Thursday, June 13, 2013 Women in Science event at the HR MacMillan Space Centre. From the event page,

Thursday, June 13, 7:00 pm
Transforming Human-Robot Interaction – Dr. Elizabeth Croft
Depictions of robots vary from the helpful humanoid to destructive, evil entities. In reality, most robots are used in lab or industrial settings.  These robots are fast, strong and accurate, but not ideal co-workers. They don’t communicate well with humans, and are not always designed for safety when in close proximity to people.  Dr. Croft is finding ways to help humans and robots to work together.
Dr. Elizabeth Croft, B.A.Sc. (88, Mech, UBC), M.A.Sc (92, Mech, Waterloo), Ph.D. (95, Mech, Toronto), PEng, FEC, FASME
Dr. Croft is a professor at UBC; NSERC Chair for Women in Science and Engineering, BC-Yukon at UBC; and leader of the WWEST program for women in engineering, science and technology.  The focus of this initiative is to promote science and engineering as a career choice for women and other under-represented groups, and to identify and eliminate barriers that result in attrition from these career paths. She is the founding faculty advisor for the UBC Engineering Tri-Mentoring Program, and is director of the Collaborative Advanced Robotics and Intelligent Systems Laboratory at UBC.

I found some additional information on the event page (at the bottom),

7:00 pm (doors open at 6:30 pm)
Admission by donation

As for the location, you really need to check out the map and the directions. The HR MacMillan Space Centre is one of two tenants (the other is the Museum of Vancouver) in a facility located in a park near Kitsilano Beach. The Bard on the Beach Shakespeare Festival, which takes place beside the facility, starts June 12, 2013. This is a very popular festival, and June 13, 2013 is opening night for its production of Hamlet. Taking the bus means a 10-15 minute hike, as well as the festival hubbub, and parking in that area is likely to be at a premium.