Tag Archives: robots

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost It at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960s sexual revolution in the US, along with mention of a prior sexual revolution in the 1920s, in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black-and-white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics, which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis, and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what are called ‘emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technologies.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good idea of where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,


In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well-preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book where he uses each film as a launching pad for a clear, readable description of relevant bits of science so you understand why the premise was likely, unlikely, or pure fantasy while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?
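For readers curious about the “billion-piece jigsaw puzzle” Maynard mentions, the fragment-stitching problem is, at its core, sequence assembly. Here’s a deliberately tiny, illustrative sketch of one classic approach (greedy overlap merging) with made-up fragments; real assemblers cope with sequencing errors, repeats, and billions of pieces using far more sophisticated graph methods:

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        # Score every ordered pair and pick the best overlap.
        n, i, j = max(
            (overlap(a, b), i, j)
            for i, a in enumerate(frags)
            for j, b in enumerate(frags)
            if i != j
        )
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)]
        frags.append(merged)
    return frags[0]

# Toy "sequenced" fragments of a made-up stretch of DNA.
print(greedy_assemble(["GATTAC", "TTACAG", "ACAGTT"]))  # GATTACAGTT
```

Even this toy version hints at why the real task is hard: with short overlaps, noisy reads, and repeated sequences, the greedy choice can easily be wrong.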

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional ‘memoirish’ anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media, as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality and power and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs. the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the film’s future environment, deployed both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time (I’m hopeful there’ll be a next time), Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animated film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in the White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour, but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ in Chapter Three was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a simple corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences, and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been internet acquaintances/friends since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard, and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, “Sci-fi movies are the secret weapon that could help Silicon Valley grow up,” and a Nov. 21, 2018 article on slate.com, “The True Cost of Stain-Resistant Pants: The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology.” Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company, featuring a predictive interactive tool designed by NPR (US National Public Radio) based on data from Oxford University researchers, which tells you how likely it is that your job could be automated, though no one knows for sure (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger if you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
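For readers wondering what “evolutionary computation” means in practice, here’s a minimal, purely illustrative sketch of the general idea (emphatically not Lobo and Levin’s actual model): candidate “networks,” here reduced to simple weight vectors, are randomly mutated and selected by how well they reproduce a set of known experimental outcomes.

```python
import random

random.seed(0)

# Hypothetical experiments: (inputs, observed outcome). Invented data.
experiments = [((1.0, 0.0), 1.0), ((0.0, 1.0), 0.0), ((1.0, 1.0), 1.0)]

def predict(weights, inputs):
    """A trivial stand-in for 'executing' a candidate network."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if s > 0.5 else 0.0

def fitness(weights):
    """Fraction of known experiments the candidate reproduces."""
    return sum(predict(weights, i) == o for i, o in experiments) / len(experiments)

# Evolve: keep the fittest candidates, refill the population with mutants.
population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [
        [w + random.gauss(0, 0.2) for w in random.choice(survivors)]
        for _ in range(15)
    ]

best = max(population, key=fitness)
print(fitness(best))
```

The real project searched over genuinely executable regulatory-network models and a database of planarian experiments; the selection-and-mutation loop above is the part the phrase “evolutionary computation” refers to.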

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football)

The idea that a monkey in the US could control a robot’s movements in Japan is stunning. Even more stunning is the fact that the research is four years old. It was discussed publicly in a Jan. 15, 2008 article by Sharon Gaudin for Computer World,

Scientists in the U.S. and Japan have successfully used a monkey’s brain activity to control a humanoid robot — over the Internet.

This research may only be a few years away from helping paralyzed people walk again by enabling them to use their thoughts to control exoskeletons attached to their bodies, according to Miguel Nicolelis, a professor of neurobiology at Duke University and lead researcher on the project.

“This is an attempt to restore mobility to people,” said Nicolelis. “We had the animal trained to walk on a treadmill. As it walked, we recorded its brain activity that generated its locomotion pattern. As the animal was walking and slowing down and changing his pattern, his brain activity was driving a robot in Japan in real time.”
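The decoding step Nicolelis describes, turning recorded brain activity into movement commands, is often approximated in brain-machine interface work with a linear model fit to training data. Here’s a toy sketch using simulated firing rates; every number is invented for illustration, and real BMIs use many more channels and adaptive filtering.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated training data: firing rates of 10 neurons over 200 time steps,
# plus the (hypothetical) leg velocity recorded alongside them.
rates = rng.poisson(5, size=(200, 10)).astype(float)
true_weights = rng.normal(0, 1, size=10)
velocity = rates @ true_weights + rng.normal(0, 0.1, size=200)

# Fit the decoder by least squares: velocity ≈ rates @ weights.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Real time" use: decode a fresh burst of activity into a velocity command
# that could be streamed to a robot.
new_rates = rng.poisson(5, size=10).astype(float)
command = new_rates @ weights
print(command)
```

The punchline of the Duke work is that a model of this general kind, fit while the monkey walked on a treadmill, could drive a robot over the Internet with the decoded commands.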

This video clip features an animated monkey simulating control of a real robot in Japan (the Computational Brain Project of the Japan Science and Technology Agency [JST] in Kyoto partnered with Duke University for this project),

I wonder if the Duke researchers or communications staff thought that the sight of real rhesus monkeys on treadmills might be too disturbing. While we’re on the topic of simulation, I wonder where the robot in the clip actually resides. Quibbles about the video clip aside, I have no doubt that the research took place.

There’s a more recent (Oct. 5, 2011) article by Ed Yong for Discover Magazine about the work being done in Nicolelis’ laboratory at Duke University (previously described in my Oct. 6, 2011 posting),

This is where we are now: at Duke University, a monkey controls a virtual arm using only its thoughts. Miguel Nicolelis had fitted the animal with a headset of electrodes that translates its brain activity into movements. It can grab virtual objects without using its arms. It can also feel the objects without its hands, because the headset stimulates its brain to create the sense of different textures. Monkey think, monkey do, monkey feel – all without moving a muscle.
And this is where Nicolelis wants to be in three years: a young quadriplegic Brazilian man strolls confidently into a massive stadium. He controls his four prosthetic limbs with his thoughts, and they in turn send tactile information straight to his brain. The technology melds so fluidly with his mind that he confidently runs up and delivers the opening kick of the 2014 World Cup.

This sounds like a far-fetched dream, but Nicolelis – a big soccer fan – is talking to the Brazilian government to make it a reality.

According to Yong, Nicolelis has created an international consortium to support the Walk Again Project. From the project home page,

The Walk Again Project, an international consortium of leading research centers around the world represents a new paradigm for scientific collaboration among the world’s academic institutions, bringing together a global network of scientific and technological experts, distributed among all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI [brain-machine interface] capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to proposing to develop new technologies that aim at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a complete new paradigm for global scientific collaboration among leading academic institutions worldwide. According to this model, a worldwide network of leading scientific and technological experts, distributed among all the continents, come together to participate in a major, non-profit effort to make a fellow human being walk again, based on their collective expertise. These world renowned scholars will contribute key intellectual assets as well as provide a base for continued fundraising capitalization of the project, setting clear goals to establish fundamental advances toward restoring full mobility for patients in need.

It’s the exoskeleton described on the Walk Again Project home page that Nicolelis is hoping will enable a young Brazilian quadriplegic to deliver the opening kick for the 2014 World Cup (soccer/football) in Brazil.

Nanotechnology-enabled robot skin

We take it for granted most of the time. The ability to sense pressure and respond appropriately doesn’t seem like any great gift, but without it you’d crush fragile objects or be unable to hold onto heavy ones.

It’s this ability to sense pressure that’s a stumbling block for robot makers who want to move robots into jobs that require some dexterity, e.g., a robot that could clean your windows and your walls without damaging the one or failing to clean the other.

Two research teams have recently published papers about their work on solving the ‘pressure problem’. From the article by Jason Palmer for BBC News,

The materials, which can sense pressure as sensitively and quickly as human skin, have been outlined by two groups reporting in [the journal] Nature Materials.

The skins are arrays of small pressure sensors that convert tiny changes in pressure into electrical signals.

The arrays are built into or under flexible rubber sheets that could be stretched into a variety of shapes.

The materials could be used to sheath artificial limbs or to create robots that can pick up and hold fragile objects. They could also be used to improve tools for minimally-invasive surgery.

One team is located at the University of California, Berkeley and the other at Stanford University. The Berkeley team, headed by Ali Javey, associate professor of electrical engineering and computer sciences, has named their artificial skin ‘e-skin’. From the article by Dan Nosowitz on the Fast Company website,

Researchers at the University of California at Berkeley, backed by DARPA funding, have come up with a thin prototype material that’s getting science nerds all in a tizzy about the future of robotics.

This material is made from germanium and silicon nanowires grown on a cylinder, then rolled around a sticky polyimide substrate. What does that get you? As CNet says, “The result was a shiny, thin, and flexible electronic material organized into a matrix of transistors, each of which with hundreds of semiconductor nanowires.”

But what takes the material to the next level is the thin layer of pressure-sensitive rubber added to the prototype’s surface, capable of measuring pressures between zero and 15 kilopascals–about the normal range of pressure for a low-intensity human activity, like, say, writing a blog post. Basically, this rubber layer turns the nanowire material into a sort of artificial skin, which is being played up as a miracle material.

As Nosowitz points out, this is a remarkable achievement, and it is only a first step, since skin registers pressure, pain, temperature, wetness, and more. Here’s an illustration of Berkeley’s e-skin (Source: University of California Berkeley, accessed from http://berkeley.edu/news/media/releases/2010/09/12_eskin.shtml Sept. 14, 2010),

An artist’s illustration of an artificial e-skin with nanowire active matrix circuitry covering a hand. The fragile egg illustrates the functionality of the e-skin device for prosthetic and robotic applications.
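The fragile-egg idea is easy to make concrete. Here is a toy sketch (mine, not either team’s implementation) of how readings from a pressure-sensor array might be converted to kilopascals and checked against a fragile-object limit; the 0–15 kPa full-scale range comes from the article, while the 10-bit ADC and the 5 kPa grip limit are invented for illustration.

```python
# Toy readout for an e-skin-style pressure array (illustrative only).
# The 0-15 kPa range is quoted in the article; the 10-bit ADC resolution
# and the 5 kPa "fragile object" limit are assumptions for this sketch.

SENSOR_MAX_KPA = 15.0

def readings_to_kpa(raw, raw_max=1023):
    """Convert raw ADC counts from a sensor grid to kilopascals."""
    return [[SENSOR_MAX_KPA * r / raw_max for r in row] for row in raw]

def safe_to_grip(kpa_grid, fragile_limit_kpa=5.0):
    """True if no sensor cell exceeds the limit for a fragile object."""
    return all(p <= fragile_limit_kpa for row in kpa_grid for p in row)

grid = readings_to_kpa([[0, 100, 340], [512, 60, 0]])
print(safe_to_grip(grid))  # -> False: the 512-count cell reads about 7.5 kPa
```

The real devices, of course, do the equivalent of this in hardware, through the pressure-sensitive rubber and the nanowire transistor array.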

The Stanford team’s approach has some similarities to the Berkeley team’s (from Jason Palmer’s BBC article),

“Javey’s work is a nice demonstration of their capability in making a large array of nanowire TFTs [thin-film transistors],” said Zhenan Bao of Stanford University, whose group demonstrated the second approach.

The heart of Professor Bao’s devices is a micro-structured rubber sheet in the middle of the TFT – effectively re-creating the functionality of the Berkeley group’s skins with fewer layers.

“Instead of laminating a pressure-sensitive resistor array on top of a nanowire TFT array, we made our transistors to be pressure sensitive,” Professor Bao explained to BBC News.

Here’s a short video about the Stanford team’s work (Source: Stanford University, accessed from http://news.stanford.edu/news/2010/september/sensitive-artificial-skin-091210.html Sept. 14, 2010),

Both approaches to the ‘pressure problem’ have at least one shortcoming. The Berkeley team’s e-skin is less sensitive than Stanford’s, while the Stanford team’s artificial skin is less flexible than e-skin, as per Palmer’s BBC article. Also, I noticed that the Berkeley team at least is being funded by DARPA ([US Dept. of Defense] Defense Advanced Research Projects Agency), so I’m assuming a fair degree of military interest, which always gives me pause. Nonetheless, bravo to both teams.

Oil-absorbing (nanotechnology-enabled) robots at Venice Biennale?

MIT (Massachusetts Institute of Technology) researchers are going to be presenting nano-enabled oil-absorbing robots, Seaswarm, at the Venice Biennale (from the news item on Nanowerk),

Using a cutting edge nanotechnology, researchers at MIT have created a robotic prototype that could autonomously navigate the surface of the ocean to collect surface oil and process it on site.

The system, called Seaswarm, is a fleet of vehicles that may make cleaning up future oil spills both less expensive and more efficient than current skimming methods. MIT’s Senseable City Lab will unveil the first Seaswarm prototype at the Venice Biennale’s Italian Pavilion on Saturday, August 28. The Venice Biennale is an international art, music and architecture festival whose current theme addresses how nanotechnology will change the way we live in 2050.

I did look at the Biennale website for more information about the theme and about Seaswarm but details, at least on the English language version of the website, are nonexistent. (Note: The Venice Biennale was launched in 1895 as an art exhibition. Today the Biennale features cinema, architecture, theatre, and music as well as art.)

You can find out more about Seaswarm at MIT’s Senseable City Lab here and/or you can watch this animation,

The animation specifically mentions BP and the Gulf of Mexico oil spill and compares the skimmers used to remove oil from the ocean with Seaswarm skimmers outfitted with nanowire meshes,

The Seaswarm robot uses a conveyor belt covered with a thin nanowire mesh to absorb oil. The fabric, developed by MIT Visiting Associate Professor Francesco Stellacci, and previously featured in a paper published in the journal Nature Nanotechnology, can absorb up to twenty times its own weight in oil while repelling water. By heating up the material, the oil can be removed and burnt locally and the nanofabric can be reused.

“We envisioned something that would move as a ‘rolling carpet’ along the water and seamlessly absorb a surface spill,” said Senseable City Lab Associate Director Assaf Biderman. “This led to the design of a novel marine vehicle: a simple and lightweight conveyor belt that rolls on the surface of the ocean, adjusting to the waves.”

The Seaswarm robot, which is 16 feet long and seven feet wide, uses two square meters of solar panels for self-propulsion. With just 100 watts, the equivalent of one household light bulb, it could potentially clean continuously for weeks.
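The 20-to-1 absorption ratio invites a little back-of-the-envelope arithmetic. In this sketch, the ratio is the only number taken from the article; the spill size, fabric mass per robot, and fleet size are invented purely for illustration.

```python
import math

# Back-of-the-envelope for the Seaswarm mesh: the fabric reportedly absorbs
# up to 20 times its own weight in oil and is reusable after heating.
# Fabric mass per robot and fleet size are assumptions, not MIT's figures.

ABSORPTION_RATIO = 20  # kg of oil per kg of fabric (from the article)

def passes_needed(spill_kg, fabric_kg_per_robot, robots):
    """Absorb-and-burn cycles needed to soak up a spill, in the ideal case."""
    oil_per_pass = ABSORPTION_RATIO * fabric_kg_per_robot * robots
    return math.ceil(spill_kg / oil_per_pass)

# e.g. a 100-tonne slick, 5 kg of mesh per robot, a fleet of 100 robots:
print(passes_needed(100_000, 5, 100))  # -> 10
```

Even with generous assumptions, a large spill takes many cycles, which is presumably why the design emphasizes autonomy and weeks of continuous operation.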

I’d love to see the prototype in operation not to mention getting a chance to attend La Biennale.

Stickybots at Stanford University

I’ve been intrigued by ‘gecko technology’ or ‘spiderman technology’ since I first started investigating nanotechnology about four years ago.  This is the first time I’ve seen theory put into practice. From the news item on Nanowerk,

Mark Cutkosky, the lead designer of the Stickybot, a professor of mechanical engineering and co-director of the Center for Design Research [Stanford University], has been collaborating with scientists around the nation for the last five years to build climbing robots.

After designing a robot that could conquer rough vertical surfaces such as brick walls and concrete, Cutkosky moved on to smooth surfaces such as glass and metal. He turned to the gecko for ideas.

“Unless you use suction cups, which are kind of slow and inefficient, the other solution out there is to use dry adhesion, which is the technique the gecko uses,” Cutkosky said.

Here’s a video of Stanford’s Stickybot in action (from the Stanford University News website),

As Cutkosky goes on to explain in the news item,

The interaction between the molecules of gecko toe hair and the wall is a molecular attraction called van der Waals force. A gecko can hang and support its whole weight on one toe by placing it on the glass and then pulling it back. It only sticks when you pull in one direction – their toes are a kind of one-way adhesive, Cutkosky said.

“Other adhesives are sort of like walking around with chewing gum on your feet: You have to press it into the surface and then you have to work to pull it off. But with directional adhesion, it’s almost like you can sort of hook and unhook yourself from the surface,” Cutkosky said.

After the breakthrough insight that direction matters, Cutkosky and his team began asking how to build artificial materials for robots that create the same effect. They came up with a rubber-like material with tiny polymer hairs made from a micro-scale mold.

The designers attach a layer of adhesive cut to the shape of Stickybot’s four feet, which are about the size of a child’s hand. As it steadily moves up the wall, the robot peels and sticks its feet to the surface with ease, resembling a mechanical lizard.

The newest versions of the adhesive, developed in 2009, have a two-layer system, similar to the gecko’s lamellae and setae. The “hairs” are even smaller than the ones on the first version – about 20 micrometers wide, which is five times thinner than a human hair. These versions support higher loads and allow Stickybot to climb surfaces such as wood paneling, painted metal and glass.

The material is strong and reusable, and leaves behind no residue or damage. Robots that scale vertical walls could be useful for accessing dangerous or hard to reach places.

The research team’s paper, Effect of fibril shape on adhesive properties, was published online Aug. 2, 2010 in Applied Physics Letters.

Folding, origami, and shapeshifting and an article with over 50,000 authors

I’m on a metaphor kick these days so here goes: origami (Japanese paper folding) and shapeshifting are metaphors used to describe a biological process that nanoscientists from fields not necessarily associated with biology find fascinating: protein folding.


Take for example a research team at the California Institute of Technology (Caltech) working to exploit the electronic properties of carbon nanotubes (mentioned in a Nov. 9, 2010 news item on Nanowerk). One of the big issues is that, since all of the tubes in a sample are made of carbon, getting one tube to react on its own without activating the others is quite challenging when you’re trying to create nanoelectronic circuits. The research team decided to use a technique developed in a bioengineering lab (from the news item),

DNA origami is a type of self-assembled structure made from DNA that can be programmed to form nearly limitless shapes and patterns (such as smiley faces or maps of the Western Hemisphere or even electrical diagrams). Exploiting the sequence-recognition properties of DNA base pairing, DNA origami are created from a long single strand of viral DNA and a mixture of different short synthetic DNA strands that bind to and “staple” the viral DNA into the desired shape, typically about 100 nanometers (nm) on a side.

Single-wall carbon nanotubes are molecular tubes composed of rolled-up hexagonal mesh of carbon atoms. With diameters measuring less than 2 nm and yet with lengths of many microns, they have a reputation as some of the strongest, most heat-conductive, and most electronically interesting materials that are known. For years, researchers have been trying to harness their unique properties in nanoscale devices, but precisely arranging them into desirable geometric patterns has been a major stumbling block.

… To integrate the carbon nanotubes into this system, the scientists colored some of those pixels anti-red, and others anti-blue, effectively marking the positions where they wanted the color-matched nanotubes to stick. They then designed the origami so that the red-labeled nanotubes would cross perpendicular to the blue nanotubes, making what is known as a field-effect transistor (FET), one of the most basic devices for building semiconductor circuits.

Although their process is conceptually simple, the researchers had to work out many kinks, such as separating the bundles of carbon nanotubes into individual molecules and attaching the single-stranded DNA; finding the right protection for these DNA strands so they remained able to recognize their partners on the origami; and finding the right chemical conditions for self-assembly.

After about a year, the team had successfully placed crossed nanotubes on the origami; they were able to see the crossing via atomic force microscopy. These systems were removed from solution and placed on a surface, after which leads were attached to measure the device’s electrical properties. When the team’s simple device was wired up to electrodes, it indeed behaved like a field-effect transistor.


For another example (from an August 5, 2010 article on physorg.com by Larry Hardesty, Shape-shifting robots),

By combining origami and electrical engineering, researchers at MIT and Harvard are working to develop the ultimate reconfigurable robot — one that can turn into absolutely anything. The researchers have developed algorithms that, given a three-dimensional shape, can determine how to reproduce it by folding a sheet of semi-rigid material with a distinctive pattern of flexible creases. To test out their theories, they built a prototype that can automatically assume the shape of either an origami boat or a paper airplane when it receives different electrical signals. The researchers reported their results in the July 13 issue of the Proceedings of the National Academy of Sciences.

As director of the Distributed Robotics Laboratory at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Professor Daniela Rus researches systems of robots that can work together to tackle complicated tasks. One of the big research areas in distributed robotics is what’s called “programmable matter,” the idea that small, uniform robots could snap together like intelligent Legos to create larger, more versatile robots.

Here’s a video from this site at MIT (Massachusetts Institute of Technology) describing the process,

Folding and over 50,000 authors

With all this I’ve been leading up to a fascinating project: Foldit, a game whose results a team from the University of Washington published in the journal Nature (Predicting protein structures with a multiplayer online game), Aug. 5, 2010.

With over 50,000 authors, this study is a really good example of citizen science (discussed in my May 14, 2010 posting and elsewhere here) and how to use games to solve science problems while exploiting a fascination with folding and origami. From the Aug. 5, 2010 news item on Nanowerk,

The game, Foldit, turns one of the hardest problems in molecular biology into a game a bit reminiscent of Tetris. Thousands of people have now played a game that asks them to fold a protein rather than stack colored blocks or rescue a princess.

Scientists know the pieces that make up a protein but cannot predict how those parts fit together into a 3-D structure. And since proteins act like locks and keys, the structure is crucial.

At any moment, thousands of computers are working away at calculating how physical forces would cause a protein to fold. But no computer in the world is big enough, and computers may not take the smartest approach. So the UW team tried to make it into a game that people could play and compete. Foldit turns protein-folding into a game and awards points based on the internal energy of the 3-D protein structure, dictated by the laws of physics.

Tens of thousands of players have taken the challenge. The author list for the paper includes an acknowledgment of more than 57,000 Foldit players, which may be unprecedented on a scientific publication.

“It’s a new kind of collective intelligence, as opposed to individual intelligence, that we want to study,” Popoviç [principal investigator Zoran Popoviç, a UW associate professor of computer science and engineering] said. “We’re opening eyes in terms of how people think about human intelligence and group intelligence, and what the possibilities are when you get huge numbers of people together to solve a very hard problem.”
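The scoring idea behind the game is easy to caricature in code. A minimal sketch follows, using a Lennard-Jones-style pair energy as a stand-in I chose for illustration (the real game scores against a far more sophisticated physics-based energy function): a fold with atomic clashes has high internal energy and therefore a low score.

```python
import itertools
import math

# Caricature of Foldit's scoring: lower internal energy = higher score.
# The Lennard-Jones-style pair energy is an illustrative stand-in, not
# the energy function the actual game uses.

def pair_energy(d, sigma=1.0):
    """Steep penalty when two residues clash (d << sigma), mild reward
    for contact near the preferred spacing sigma."""
    return (sigma / d) ** 12 - (sigma / d) ** 6

def score(fold):
    """Negative total pairwise energy of a toy 2-D 'protein'."""
    return -sum(pair_energy(math.dist(a, b))
                for a, b in itertools.combinations(fold, 2))

clashing = [(0, 0), (0.5, 0), (1.5, 0)]    # two residues nearly overlap
relaxed  = [(0, 0), (1.12, 0), (2.24, 0)]  # spaced near the energy minimum
print(score(relaxed) > score(clashing))  # -> True
```

Players, in effect, search for the arrangement of residues that maximizes such a score, something human spatial intuition turns out to be surprisingly good at.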

There’s more at Nanowerk including a video about the gamers and the scientists. I think most of us take folding for granted and yet it stimulates all kinds of research and ideas.

Emotions and robots

Two new robots (the type that can show their emotions, more or less) have recently been introduced according to an article by Kit Eaton titled Kid and Baby Robots Get Creepy Emotional Faces on Fast Company. From the article,

The two bots were revealed today by creators the JST Erato Asada Project–a research team dedicated to investigating how humans and robots can better relate to each other in the future and so that robots can learn better (though given the early stages of current artificial intelligence science, it’s almost a case of working out how humans can feel better about interacting with robots).


The first is M3-Kindy, a 27-kilo machine with 42 motors and over a hundred touch-sensors. He’s about the size of a 5-year-old child, and can do speech recognition, and machine vision with his stereoscopic camera eyes. Kindy’s also designed to be led around by humans holding its hand, and can be taught to manipulate objects.

But it’s Kindy’s face that’s the freakiest bit. It’s been carefully designed so that it can portray emotions. That’ll undoubtedly be useful in the future, when, for instance, having more friendly, emotionally attractive robot carers look after elderly people and patients in hospitals is going to be important.

… Noby will have you running out of the room. It’s a similar human-machine interaction research droid, but is meant to model a 9-month-old baby, right down to the mass and density of its limbs and soft skin.

Do visit the article to see the images of the two robots and read more.

nanoBIDS; military robots from prototype to working model; prosthetics, the wave of the future?

The Nanowerk website is expanding. From their news item,

Nanowerk, the leading information provider for all areas of nanotechnologies, today added to its nanotechnology information portal a new free service for buyers and vendors of micro- and nanotechnology equipment and services. The new application, called nanoBIDS, is now available on the Nanowerk website. nanoBIDS facilitates the public posting of Requests for Proposal (RFPs) for equipment and services from procurement departments in the micro- and nanotechnologies community. nanoBIDS is open to all research organizations and companies.

I checked out the nanoBIDS page and found RFP listings from the UK, US (mostly), and Germany. The earliest are dated Jan. 25, 2010 so this site is just over a week old and already has two pages.

The Big Dog robot (which I posted about briefly here) is in the news again. Kit Eaton (Fast Company), whose article last October first alerted me to this device, now writes that the robot is being put into production. From the article (Robocalypse Alert: Defense Contract Awarded to Scary BigDog),

The contract’s been won by maker Boston Dynamics, which has just 30 months to turn the research prototype machines into a genuine load-toting, four-legged, semi-intelligent war robot–“first walk-out” of the newly-designated LS3 is scheduled in 2012.

LS3 stands for Legged Squad Support System, and that pretty much sums up what the device is all about: It’s a semi-autonomous assistant designed to follow soldiers and Marines across the battlefield, carrying up to 400 pounds of gear and enough fuel to keep it going for 24 hours over a march of 20 miles.
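The quoted spec works out to a very modest pace. Nothing in this little calculation is invented beyond the unit conversion:

```python
# Sanity arithmetic on the LS3 numbers quoted above: 20 miles on 24 hours
# of fuel, carrying up to 400 pounds of gear.
MILES, HOURS, PAYLOAD_LB = 20, 24, 400

avg_speed_mph = MILES / HOURS        # sustained pace if it walked nonstop
payload_kg = PAYLOAD_LB * 0.453592   # pounds to kilograms

print(round(avg_speed_mph, 2))  # -> 0.83, well below human walking speed
print(round(payload_kg))        # -> 181
```

In other words, the design brief is a tireless pack mule rather than a sprinter.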

They have included a video of the prototype on a beach in Thailand and as Eaton notes, the robot is “disarmingly ‘cute'” and, to me, its legs look almost human-shaped, which leads me to my next bit.

I found another article on prosthetics this morning and it’s a very good one. Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

I came across both a milder version of this sentiment and a more targeted version (able-bodied athletes worried about double amputee Oscar Pistorius’ bid to run in the Olympics rather than the Paralympics) when I wrote my four part series on human enhancement (July 22, 23, 24 & 27, 2009).

The Hochman article also goes on to discuss some of the aesthetic considerations (which I discussed in the same posting where I mentioned the BigDog robots). What Hochman does particularly well is bringing all this information together and explaining how the lure of big money (profit) is stimulating market development,

Not surprisingly, the money is following the market. MIT’s Herr cofounded a company called iWalk, which has received $10 million in venture financing to develop the PowerFoot One — what the company calls the “world’s first actively powered prosthetic ankle and foot.” Meanwhile, the Department of Veterans Affairs recently gave Brown University’s Center for Restorative and Regenerative Medicine a $7 million round of funding, on top of the $7.2 million it provided in 2004. And the Defense Advanced Research Projects Agency (DARPA) has funded Manchester, New Hampshire-based DEKA Research, which is developing the Luke, a powered prosthetic arm (named after Luke Skywalker, whose hand is hacked off by his father, Darth Vader).

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

This kind of thinking is influencing surgery such that patients are asking to have more of their bodies removed.

The article is lengthy (by internet standards) and worthwhile as it contains nuggets such as this,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. It’s a very powerful thing.”

So the prosthetic makes him “feel above human.” Interesting, eh? It leads to the next question (and a grand and philosophical one it is): what does it mean to be human? At least lately, I tend to explore that question by reading fiction.

I have been intrigued by Catherine Asaro’s Skolian Empire series of books. The series features human beings (mostly soldiers) who have something she calls ‘biomech’ in their bodies to make them smarter, stronger, and faster. She also populates worlds with people who’ve had (thousands of years before) extensive genetic manipulation so they can better adapt to their new homeworlds. Her characters represent different opinions about the ‘biomech’, which is surgically implanted, usually in adulthood and voluntarily. Asaro is a physicist who writes ‘hard’ science fiction laced with romance. She handles a great many thorny social questions in the context of the Skolian Empire she has created, where the technologies (nano, genetic engineering, etc.) that we are exploring are a daily reality.