Tag Archives: robots

Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football)

The idea that a monkey in the US could control a robot’s movements in Japan is stunning. Even more stunning is the fact that the research is four years old. It was discussed publicly in a Jan. 15, 2008 article by Sharon Gaudin for Computerworld,

Scientists in the U.S. and Japan have successfully used a monkey’s brain activity to control a humanoid robot — over the Internet.

This research may only be a few years away from helping paralyzed people walk again by enabling them to use their thoughts to control exoskeletons attached to their bodies, according to Miguel Nicolelis, a professor of neurobiology at Duke University and lead researcher on the project.

“This is an attempt to restore mobility to people,” said Nicolelis. “We had the animal trained to walk on a treadmill. As it walked, we recorded its brain activity that generated its locomotion pattern. As the animal was walking and slowing down and changing his pattern, his brain activity was driving a robot in Japan in real time.”
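The core step Nicolelis describes, turning recorded brain activity into a locomotion command, is often summarized as fitting a mapping from neural firing rates to limb kinematics. Here is a minimal toy sketch of that idea, assuming a simple linear decoder fit by least squares; every number is invented, and the actual Duke decoders are far more sophisticated,

```python
import numpy as np

# Toy example: decode limb velocity from neural firing rates with a
# linear model, the general shape of the approach used in BMI research.
rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 32
true_weights = rng.normal(size=(n_neurons, 2))   # 2 outputs: x/y velocity

# Simulated "training session": firing rates and the walking they produced.
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit decoder weights by least squares on the recorded session...
w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# ...then stream new activity through it in "real time".
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
decoded_velocity = new_rates @ w_hat   # command sent on to the robot
```

The point here is only the shape of the pipeline: record rates while the animal walks, fit a mapping, then stream commands to the machine.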

This video clip features an animated monkey simulating control of  a real robot in Japan (the Computational Brain Project of the Japan Science and Technology Agency (JST) in Kyoto partnered with Duke University for this project),

I wonder if the Duke researchers or communications staff thought that the sight of real rhesus monkeys on treadmills might be too disturbing. While we’re on the topic of simulation, I wonder where the robot in the clip actually resides. Quibbles about the video clip aside, I have no doubt that the research took place.

There’s a more recent (Oct. 5, 2011) article about the work being done in Nicolelis’ laboratory at Duke University, written by Ed Yong for Discover Magazine (mentioned in my Oct. 6, 2011 posting),

This is where we are now: at Duke University, a monkey controls a virtual arm using only its thoughts. Miguel Nicolelis had fitted the animal with a headset of electrodes that translates its brain activity into movements. It can grab virtual objects without using its arms. It can also feel the objects without its hands, because the headset stimulates its brain to create the sense of different textures. Monkey think, monkey do, monkey feel – all without moving a muscle.
And this is where  Nicolelis wants to be in three years: a young quadriplegic Brazilian man strolls confidently into a massive stadium. He controls his four prosthetic limbs with his thoughts, and they in turn send tactile information straight to his brain. The technology melds so fluidly with his mind that he confidently runs up and delivers the opening kick of the 2014 World Cup.

This sounds like a far-fetched dream, but Nicolelis – a big soccer fan – is talking to the Brazilian government to make it a reality.

According to Yong, Nicolelis has created an international consortium to support the Walk Again Project. From the project home page,

The Walk Again Project, an international consortium of leading research centers around the world, represents a new paradigm for scientific collaboration among the world’s academic institutions, bringing together a global network of scientific and technological experts, distributed among all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI [brain-machine interface] capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to proposing to develop new technologies that aim at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a complete new paradigm for global scientific collaboration among leading academic institutions worldwide. According to this model, a worldwide network of leading scientific and technological experts, distributed among all the continents, come together to participate in a major, non-profit effort to make a fellow human being walk again, based on their collective expertise. These world renowned scholars will contribute key intellectual assets as well as provide a base for continued fundraising capitalization of the project, setting clear goals to establish fundamental advances toward restoring full mobility for patients in need.

It’s the exoskeleton described on the Walk Again Project home page that Nicolelis is hoping will enable a young Brazilian quadriplegic to deliver the opening kick for the 2014 World Cup (soccer/football) in Brazil.

Nanotechnology-enabled robot skin

We take it for granted most of the time. The ability to sense pressure and respond appropriately doesn’t seem like any great gift but without it, you’d crush fragile objects or be unable to hold onto the heavy ones.

It’s this ability to sense pressure that’s a stumbling block for robot makers who want to move robots into jobs that require some dexterity, e.g., a robot that could clean your windows and your walls without damaging the one or failing to clean the other.

Two research teams have recently published papers about their work on solving the ‘pressure problem’. From the article by Jason Palmer for BBC News,

The materials, which can sense pressure as sensitively and quickly as human skin, have been outlined by two groups reporting in [the journal] Nature Materials.

The skins are arrays of small pressure sensors that convert tiny changes in pressure into electrical signals.

The arrays are built into or under flexible rubber sheets that could be stretched into a variety of shapes.

The materials could be used to sheath artificial limbs or to create robots that can pick up and hold fragile objects. They could also be used to improve tools for minimally-invasive surgery.
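As a rough software-side illustration of what “converting tiny changes in pressure into electrical signals” amounts to, here is a toy read-out loop for a resistive sensor array; the resistance-to-pressure constants are invented, and the 15 kPa limit matches the range quoted for the Berkeley skin,

```python
# Hypothetical read-out for a resistive pressure-sensor array: each
# cell's resistance drops as pressure rises, and a gripper controller
# checks every cell before squeezing a fragile object any harder.
R0 = 10_000.0   # unloaded resistance in ohms (illustrative value)
K = 500.0       # resistance drop per kilopascal (illustrative value)

def pressure_kpa(resistance):
    """Convert one cell's resistance to pressure, clipped at zero."""
    return max(0.0, (R0 - resistance) / K)

def grip_ok(resistance_grid, limit_kpa=15.0):
    """True if every cell stays within the ~0-15 kPa sensing range."""
    return all(pressure_kpa(r) <= limit_kpa
               for row in resistance_grid for r in row)

grid = [[9_800.0, 8_000.0],
        [10_000.0, 6_000.0]]   # 2x2 toy array
print(grip_ok(grid))  # True: the hardest-pressed cell reads only 8 kPa
```

The real arrays are transistor matrices read electronically, of course; this just shows the pressure-threshold logic a robot hand would sit on top of.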

One team is located at the University of California, Berkeley and the other at Stanford University. The Berkeley team, headed by Ali Javey, associate professor of electrical engineering and computer sciences, has named its artificial skin ‘e-skin’. From the article by Dan Nosowitz on the Fast Company website,

Researchers at the University of California at Berkeley, backed by DARPA funding, have come up with a thin prototype material that’s getting science nerds all in a tizzy about the future of robotics.

This material is made from germanium and silicon nanowires grown on a cylinder, then rolled around a sticky polyimide substrate. What does that get you? As CNet says, “The result was a shiny, thin, and flexible electronic material organized into a matrix of transistors, each of which with hundreds of semiconductor nanowires.”

But what takes the material to the next level is the thin layer of pressure-sensitive rubber added to the prototype’s surface, capable of measuring pressures between zero and 15 kilopascals–about the normal range of pressure for a low-intensity human activity, like, say, writing a blog post. Basically, this rubber layer turns the nanowire material into a sort of artificial skin, which is being played up as a miracle material.

As Nosowitz points out, this is a remarkable achievement, and yet it is only a first step, since skin registers pressure, pain, temperature, wetness, and more. Here’s an illustration of Berkeley’s e-skin (Source: University of California Berkeley, accessed from http://berkeley.edu/news/media/releases/2010/09/12_eskin.shtml Sept. 14, 2010),

An artist’s illustration of an artificial e-skin with nanowire active matrix circuitry covering a hand. The fragile egg illustrates the functionality of the e-skin device for prosthetic and robotic applications.

The Stanford team’s approach has some similarities to the Berkeley team’s (from Jason Palmer’s BBC article),

“Javey’s work is a nice demonstration of their capability in making a large array of nanowire TFTs [thin film transistors],” said Zhenan Bao of Stanford University, whose group demonstrated the second approach.

The heart of Professor Bao’s devices is a micro-structured rubber sheet in the middle of the TFT – effectively re-creating the functionality of the Berkeley group’s skins with fewer layers.

“Instead of laminating a pressure-sensitive resistor array on top of a nanowire TFT array, we made our transistors to be pressure sensitive,” Professor Bao explained to BBC News.

Here’s a short video about the Stanford team’s work (Source: Stanford University, accessed from http://news.stanford.edu/news/2010/september/sensitive-artificial-skin-091210.html Sept. 14, 2010),

Both approaches to the ‘pressure problem’ have at least one shortcoming. The Berkeley team’s e-skin is less sensitive than Stanford’s, while the Stanford team’s artificial skin is less flexible than e-skin, as per Palmer’s BBC article. Also, I noticed that the Berkeley team at least is being funded by DARPA ([US Dept. of Defense] Defense Advanced Research Projects Agency), so I’m assuming a fair degree of military interest, which always gives me pause. Nonetheless, bravo to both teams.

Oil-absorbing (nanotechnology-enabled) robots at Venice Biennale?

MIT (Massachusetts Institute of Technology) researchers are going to be presenting nano-enabled oil-absorbing robots, Seaswarm, at the Venice Biennale (from the news item on Nanowerk),

Using a cutting edge nanotechnology, researchers at MIT have created a robotic prototype that could autonomously navigate the surface of the ocean to collect surface oil and process it on site.

The system, called Seaswarm, is a fleet of vehicles that may make cleaning up future oil spills both less expensive and more efficient than current skimming methods. MIT’s Senseable City Lab will unveil the first Seaswarm prototype at the Venice Biennale’s Italian Pavilion on Saturday, August 28. The Venice Biennale is an international art, music and architecture festival whose current theme addresses how nanotechnology will change the way we live in 2050.

I did look at the Biennale website for more information about the theme and about Seaswarm but details, at least on the English-language version of the website, are nonexistent. (Note: The Venice Biennale was launched in 1895 as an art exhibition. Today the Biennale features cinema, architecture, theatre, and music as well as art.)

You can find out more about Seaswarm at MIT’s senseable city lab here and/or you can watch this animation,

The animation specifically mentions BP and the Gulf of Mexico oil spill and compares the skimmers used to remove oil from the ocean with Seaswarm skimmers outfitted with  nanowire meshes,

The Seaswarm robot uses a conveyor belt covered with a thin nanowire mesh to absorb oil. The fabric, developed by MIT Visiting Associate Professor Francesco Stellacci, and previously featured in a paper published in the journal Nature Nanotechnology, can absorb up to twenty times its own weight in oil while repelling water. By heating up the material, the oil can be removed and burnt locally and the nanofabric can be reused.

“We envisioned something that would move as a ‘rolling carpet’ along the water and seamlessly absorb a surface spill,” said Senseable City Lab Associate Director Assaf Biderman. “This led to the design of a novel marine vehicle: a simple and lightweight conveyor belt that rolls on the surface of the ocean, adjusting to the waves.”

The Seaswarm robot, which is 16 feet long and seven feet wide, uses two square meters of solar panels for self-propulsion. With just 100 watts, the equivalent of one household light bulb, it could potentially clean continuously for weeks.
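Those power figures lend themselves to a quick back-of-envelope check; the only number added here is the mission length, which is an assumption,

```python
# Back-of-envelope arithmetic on the Seaswarm figures quoted above.
power_w = 100.0        # stated continuous draw, one light bulb's worth
panel_area_m2 = 2.0    # stated solar panel area
weeks = 2              # assumed mission length ("weeks" of cleaning)

energy_needed_kwh = power_w * 24 * 7 * weeks / 1000
avg_panel_output_w_m2 = power_w / panel_area_m2

print(energy_needed_kwh)      # 33.6 kWh over two weeks
print(avg_panel_output_w_m2)  # 50.0 W per square metre
```

Fifty watts per square metre, averaged over day and night, is what the panels would have to sustain for truly continuous operation; in practice that implies batteries or some idling, which the article doesn’t detail.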

I’d love to see the prototype in operation not to mention getting a chance to attend La Biennale.

Stickybots at Stanford University

I’ve been intrigued by ‘gecko technology’ or ‘spiderman technology’ since I first started investigating nanotechnology about four years ago.  This is the first time I’ve seen theory put into practice. From the news item on Nanowerk,

Mark Cutkosky, the lead designer of the Stickybot, a professor of mechanical engineering and co-director of the Center for Design Research [Stanford University], has been collaborating with scientists around the nation for the last five years to build climbing robots.

After designing a robot that could conquer rough vertical surfaces such as brick walls and concrete, Cutkosky moved on to smooth surfaces such as glass and metal. He turned to the gecko for ideas.

“Unless you use suction cups, which are kind of slow and inefficient, the other solution out there is to use dry adhesion, which is the technique the gecko uses,” Cutkosky said.

Here’s a video of Stanford’s Stickybot in  action (from the Stanford University News website),

As Cutkosky goes on to explain in the news item,

The interaction between the molecules of gecko toe hair and the wall is a molecular attraction called van der Waals force. A gecko can hang and support its whole weight on one toe by placing it on the glass and then pulling it back. It only sticks when you pull in one direction – their toes are a kind of one-way adhesive, Cutkosky said.

“Other adhesives are sort of like walking around with chewing gum on your feet: You have to press it into the surface and then you have to work to pull it off. But with directional adhesion, it’s almost like you can sort of hook and unhook yourself from the surface,” Cutkosky said.

After the breakthrough insight that direction matters, Cutkosky and his team began asking how to build artificial materials for robots that create the same effect. They came up with a rubber-like material with tiny polymer hairs made from a micro-scale mold.
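The “one-way adhesive” behaviour Cutkosky describes can be caricatured in a few lines: holding strength depends on the direction of the pull, with a firm grip along the preferred shear direction and easy release otherwise. The peak strength and release angle below are invented for illustration,

```python
import math

# Toy model of directional ("one-way") adhesion: the pad holds when
# loaded along its preferred shear direction and releases otherwise.
PEAK_SHEAR_KPA = 10.0      # assumed holding strength when loaded correctly
RELEASE_ANGLE_DEG = 30.0   # assumed; beyond this the hairs peel free

def holding_strength(pull_angle_deg):
    """Adhesive strength as a function of pull direction.

    0 degrees = pulling along the preferred shear direction;
    90 degrees = pulling straight off the surface.
    """
    if pull_angle_deg >= RELEASE_ANGLE_DEG:
        return 0.0         # detaches easily, like unhooking
    return PEAK_SHEAR_KPA * math.cos(math.radians(pull_angle_deg))

print(holding_strength(0.0))   # 10.0: full grip in the preferred direction
print(holding_strength(60.0))  # 0.0: peels off with almost no force
```

Real gecko-style pads show a smoother dependence on loading angle; the hard threshold here just captures the hook-and-unhook idea.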

The designers attach a layer of adhesive cut to the shape of Stickybot’s four feet, which are about the size of a child’s hand. As it steadily moves up the wall, the robot peels and sticks its feet to the surface with ease, resembling a mechanical lizard.

The newest versions of the adhesive, developed in 2009, have a two-layer system, similar to the gecko’s lamellae and setae. The “hairs” are even smaller than the ones on the first version – about 20 micrometers wide, which is five times thinner than a human hair. These versions support higher loads and allow Stickybot to climb surfaces such as wood paneling, painted metal and glass.

The material is strong and reusable, and leaves behind no residue or damage. Robots that scale vertical walls could be useful for accessing dangerous or hard to reach places.

The research team’s paper, Effect of fibril shape on adhesive properties, was published online Aug. 2, 2010 in Applied Physics Letters.

Folding, origami, and shapeshifting and an article with over 50,000 authors

I’m on a metaphor kick these days, so here goes: origami (Japanese paper folding) and shapeshifting are metaphors used to describe a biological process that nanoscientists from fields not necessarily associated with biology find fascinating, protein folding.


Take, for example, a research team at the California Institute of Technology (Caltech) working to exploit the electronic properties of carbon nanotubes (mentioned in a Nov. 9, 2010 news item on Nanowerk). One of the big issues is that, since all of the tubes in a sample are made of carbon, getting one tube to react on its own without activating the others is quite challenging when you’re trying to create nanoelectronic circuits. The research team decided to use a technique developed in a bioengineering lab (from the news item),

DNA origami is a type of self-assembled structure made from DNA that can be programmed to form nearly limitless shapes and patterns (such as smiley faces or maps of the Western Hemisphere or even electrical diagrams). Exploiting the sequence-recognition properties of DNA base pairing, DNA origami are created from a long single strand of viral DNA and a mixture of different short synthetic DNA strands that bind to and “staple” the viral DNA into the desired shape, typically about 100 nanometers (nm) on a side.
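The “sequence-recognition” rule doing the work in that description is Watson-Crick complementarity: a staple binds where it is the reverse complement of the scaffold strand. A minimal sketch, with made-up sequences,

```python
# Watson-Crick base-pairing rule behind DNA origami "stapling".
# The sequences below are invented purely for illustration.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the strand that pairs with seq (read in the usual 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def binds(staple, scaffold_region):
    """A staple binds a scaffold region it is the reverse complement of."""
    return staple == reverse_complement(scaffold_region)

scaffold = "ATGGCTTACGTTAGC"
staple = reverse_complement(scaffold[3:9])   # designed against one region

print(binds(staple, scaffold[3:9]))   # True: sticks where it was designed to
print(binds(staple, scaffold[0:6]))   # False: ignores other regions
```

Real origami design also worries about strand lengths, melting temperatures, and unintended partial matches, all of which this ignores.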

Single-wall carbon nanotubes are molecular tubes composed of rolled-up hexagonal mesh of carbon atoms. With diameters measuring less than 2 nm and yet with lengths of many microns, they have a reputation as some of the strongest, most heat-conductive, and most electronically interesting materials that are known. For years, researchers have been trying to harness their unique properties in nanoscale devices, but precisely arranging them into desirable geometric patterns has been a major stumbling block.

… To integrate the carbon nanotubes into this system, the scientists colored some of those pixels anti-red, and others anti-blue, effectively marking the positions where they wanted the color-matched nanotubes to stick. They then designed the origami so that the red-labeled nanotubes would cross perpendicular to the blue nanotubes, making what is known as a field-effect transistor (FET), one of the most basic devices for building semiconductor circuits.

Although their process is conceptually simple, the researchers had to work out many kinks, such as separating the bundles of carbon nanotubes into individual molecules and attaching the single-stranded DNA; finding the right protection for these DNA strands so they remained able to recognize their partners on the origami; and finding the right chemical conditions for self-assembly.

After about a year, the team had successfully placed crossed nanotubes on the origami; they were able to see the crossing via atomic force microscopy. These systems were removed from solution and placed on a surface, after which leads were attached to measure the device’s electrical properties. When the team’s simple device was wired up to electrodes, it indeed behaved like a field-effect transistor.


For another, more recent example (from an August 5, 2010 article on physorg.com by Larry Hardesty, Shape-shifting robots),

By combining origami and electrical engineering, researchers at MIT and Harvard are working to develop the ultimate reconfigurable robot — one that can turn into absolutely anything. The researchers have developed algorithms that, given a three-dimensional shape, can determine how to reproduce it by folding a sheet of semi-rigid material with a distinctive pattern of flexible creases. To test out their theories, they built a prototype that can automatically assume the shape of either an origami boat or a paper airplane when it receives different electrical signals. The researchers reported their results in the July 13 issue of the Proceedings of the National Academy of Sciences.

As director of the Distributed Robotics Laboratory at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Professor Daniela Rus researches systems of robots that can work together to tackle complicated tasks. One of the big research areas in distributed robotics is what’s called “programmable matter,” the idea that small, uniform robots could snap together like intelligent Legos to create larger, more versatile robots.

Here’s a video from this site at MIT (Massachusetts Institute of Technology) describing the process,

Folding and over 50,000 authors

With all this, I’ve been leading up to a fascinating project: a game called Foldit, whose results a team from the University of Washington published in the journal Nature (Predicting protein structures with a multiplayer online game) on Aug. 5, 2010.

With over 50,000 authors, this study is a really good example of citizen science (discussed in my May 14, 2010 posting and elsewhere here) and how to use games to solve science problems while exploiting a fascination with folding and origami. From the Aug. 5, 2010 news item on Nanowerk,

The game, Foldit, turns one of the hardest problems in molecular biology into a game a bit reminiscent of Tetris. Thousands of people have now played a game that asks them to fold a protein rather than stack colored blocks or rescue a princess.

Scientists know the pieces that make up a protein but cannot predict how those parts fit together into a 3-D structure. And since proteins act like locks and keys, the structure is crucial.

At any moment, thousands of computers are working away at calculating how physical forces would cause a protein to fold. But no computer in the world is big enough, and computers may not take the smartest approach. So the UW team tried to make it into a game that people could play and compete. Foldit turns protein-folding into a game and awards points based on the internal energy of the 3-D protein structure, dictated by the laws of physics.
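That scoring rule, lower internal energy means more points, is easy to sketch. The toy Lennard-Jones-style pair energy below stands in for the real physics, which is vastly more elaborate,

```python
import math

# Toy version of Foldit's scoring idea: points rise as the folded
# structure's internal energy falls. The energy model is illustrative only.
def pair_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential between two residues at distance r."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def internal_energy(coords):
    """Sum pairwise energies over all residue pairs."""
    total = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            total += pair_energy(math.dist(coords[i], coords[j]))
    return total

def score(coords, offset=100.0):
    """Lower energy -> higher score, as in the game."""
    return offset - internal_energy(coords)

compact = [(0, 0), (1.12, 0), (0.56, 0.97)]  # near the potential's minimum
stretched = [(0, 0), (3.0, 0), (6.0, 0)]
print(score(compact) > score(stretched))  # True: the compact fold wins
```

The game’s real scoring is built on a full molecular energy function with many more terms, but the competitive loop is the same: rearrange the structure, watch the number.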

Tens of thousands of players have taken the challenge. The author list for the paper includes an acknowledgment of more than 57,000 Foldit players, which may be unprecedented on a scientific publication.

“It’s a new kind of collective intelligence, as opposed to individual intelligence, that we want to study,” Popoviç [principal investigator Zoran Popoviç, a UW associate professor of computer science and engineering] said. “We’re opening eyes in terms of how people think about human intelligence and group intelligence, and what the possibilities are when you get huge numbers of people together to solve a very hard problem.”

There’s more at Nanowerk, including a video about the gamers and the scientists. I think most of us take folding for granted and yet it stimulates all kinds of research and ideas.

Emotions and robots

Two new robots (the type that can show their emotions, more or less) have recently been introduced according to an article by Kit Eaton titled Kid and Baby Robots Get Creepy Emotional Faces on Fast Company. From the article,

The two bots were revealed today by creators the JST Erato Asada Project–a research team dedicated to investigating how humans and robots can better relate to each other in the future and so that robots can learn better (though given the early stages of current artificial intelligence science, it’s almost a case of working out how humans can feel better about interacting with robots).


The first is M3-Kindy, a 27-kilo machine with 42 motors and over a hundred touch-sensors. He’s about the size of a 5-year-old child, and can do speech recognition, and machine vision with his stereoscopic camera eyes. Kindy’s also designed to be led around by humans holding its hand, and can be taught to manipulate objects.

But it’s Kindy’s face that’s the freakiest bit. It’s been carefully designed so that it can portray emotions. That’ll undoubtedly be useful in the future, when, for instance, having more friendly, emotionally attractive robot carers look after elderly people and patients in hospitals is going to be important.

… Noby will have you running out of the room. It’s a similar human-machine interaction research droid, but is meant to model a 9-month-old baby, right down to the mass and density of its limbs and soft skin.

Do visit the article to see the images of the two robots and read more.

nanoBIDS; military robots from prototype to working model; prosthetics, the wave of the future?

The Nanowerk website is expanding. From their news item,

Nanowerk, the leading information provider for all areas of nanotechnologies, today added to its nanotechnology information portal a new free service for buyers and vendors of micro- and nanotechnology equipment and services. The new application, called nanoBIDS, is now available on the Nanowerk website. nanoBIDS facilitates the public posting of Requests for Proposal (RFPs) for equipment and services from procurement departments in the micro- and nanotechnologies community. nanoBIDS is open to all research organizations and companies.

I checked out the nanoBIDS page and found RFP listings from the UK, US (mostly), and Germany. The earliest are dated Jan. 25, 2010, so this site is just over a week old and already has two pages.

The Big Dog robot (which I posted about briefly here) is in the news again. Kit Eaton (Fast Company) whose article last October first alerted me to this device now writes that the robot is being put into production. From the article (Robocalypse Alert: Defense Contract Awarded to Scary BigDog),

The contract’s been won by maker Boston Dynamics, which has just 30 months to turn the research prototype machines into a genuine load-toting, four-legged, semi-intelligent war robot–”first walk-out” of the newly-designated LS3 is scheduled in 2012.

LS3 stands for Legged Squad Support System, and that pretty much sums up what the device is all about: It’s a semi-autonomous assistant designed to follow soldiers and Marines across the battlefield, carrying up to 400 pounds of gear and enough fuel to keep it going for 24 hours over a march of 20 miles.

They have included a video of the prototype on a beach in Thailand and as Eaton notes, the robot is “disarmingly ‘cute’” and, to me, its legs look almost human-shaped, which leads me to my next bit.

I found another article on prosthetics this morning and it’s a very good one. Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

I came across both a milder version of this sentiment and a more targeted version (able-bodied athletes worried about double amputee Oscar Pistorius’ bid to run in the Olympics rather than the Paralympics) when I wrote my four part series on human enhancement (July 22, 23, 24 & 27, 2009).

The Hochman article also goes on to discuss some of the aesthetic considerations (which I discussed in the same posting where I mentioned the BigDog robots). What Hochman does particularly well is bringing all this information together and explaining how the lure of big money (profit) is stimulating market development,

Not surprisingly, the money is following the market. MIT’s Herr cofounded a company called iWalk, which has received $10 million in venture financing to develop the PowerFoot One — what the company calls the “world’s first actively powered prosthetic ankle and foot.” Meanwhile, the Department of Veterans Affairs recently gave Brown University’s Center for Restorative and Regenerative Medicine a $7 million round of funding, on top of the $7.2 million it provided in 2004. And the Defense Advanced Research Projects Agency (DARPA) has funded Manchester, New Hampshire-based DEKA Research, which is developing the Luke, a powered prosthetic arm (named after Luke Skywalker, whose hand is hacked off by his father, Darth Vader).

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

This kind of thinking is influencing surgery such that patients are asking to have more of their bodies removed.

The article is lengthy (by internet standards) and worthwhile as it contains nuggets such as this,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. It’s a very powerful thing.”

So the prosthetic makes him “feel above human,” interesting, eh? It leads to the next question (and a grand and philosophical one it is), what does it mean to be human? At least lately, I tend to explore that question by reading fiction.

I have been intrigued by Catherine Asaro‘s Skolian Empire series of books. The series features human beings (mostly soldiers) who have something she calls ‘biomech’  in their bodies to make them smarter, stronger, and faster. She also populates worlds with people who’ve had (thousands of years before) extensive genetic manipulation so they can better adapt to their new homeworlds. Her characters represent different opinions about the ‘biomech’ which is surgically implanted usually in adulthood and voluntarily. Asaro is a physicist who writes ‘hard’ science fiction laced with romance. She handles a great many thorny social questions in the context of this Skolian Empire that she has created where the technologies (nano, genetic engineering, etc.)  that we are exploring are a daily reality.

Military robots, the latest models; Quantum computing at Univ of Toronto; Cultural Cognition Project at Yale; Carla Bruni and Stephen Hawking

There was an industry trade show of military robots this week, which caught my eye since I’ve been mentioning robots, military and otherwise, in my postings lately. Apparently military enthusiasm for robots continues unabated. From the media release on Physorg.com,

“I think we’re at the beginning of an unmanned revolution,” Gary Kessler, who oversees unmanned aviation programs for the US Navy and Marines, told AFP.

“We’re spending billions of dollars on unmanned systems.”

There’s more,

In 2003, the US military had almost no robots in its arsenal but now has 7,000 unmanned aircraft and at least 10,000 ground vehicles.

The US Air Force, which initially resisted the idea of pilotless planes, said it trains more operators for unmanned aircraft than pilots for its fighter jets and bombers.

Interestingly, iRobot, which sells robot vacuum cleaners (Roomba) to consumers, also sells a “Wall-E lookalike robot” which searches enemy terrain and buildings to find and dismantle explosives.

This all reminds me of an article on BBC News (Call for debate on killer robots) which I posted about here when I was looking at the possibility (courtesy of an article by Jamais Cascio) of systems that are both unmanned and without operators, i.e. autonomous, intelligent systems/robots.

The University of Toronto (Canada) is hosting a conference on quantum information and control. From the media release on Azonano,

Quantum Information is a revolutionary approach to computing and communication which exploits the phenomena of quantum mechanics – the fundamental theory of nature at its most basic, sub-atomic level – to vastly enhance the capabilities of today’s computers and internet communication.

The conference is being held from August 24 – 27, 2009.

In yesterday’s posting about Andrew Maynard’s review of a book on science illiteracy I mentioned that I had a hesitation about one of the recommendations he made for further reading. Specifically, I have some reservations about the Cultural Cognition Project at Yale Law School’s work on nanotechnology. To be absolutely fair, I’ve read only an earlier version of a paper (then titled) Affect, Values, and Nanotechnology Risk Perceptions: An Experimental Investigation.

I did try to read the latest version and the other papers on nanotechnology produced by the group but they’re behind paywalls (click on Download paper if you like but I just tested them and not one was accessible). So, I’m working off the copy that I could freely download at the time.

First, they are using the word cultural in a fashion that many of us are unfamiliar with. Culture in this paper is used in the context of risk perception and the specific theoretical underpinning comes from anthropologist Mary Douglas. From the paper I downloaded,

Drawing heavily on the work of anthropologist Mary Douglas, one conception of the cultural cognition of risk divides cultural outlooks along two cross-cutting dimensions. The first, “hierarchy-egalitarianism,” characterizes the relative preferences of persons for a society in which resources, opportunities, privileges and duties are distributed along fixed and differentiated lines (of gender, race, religion, and class, for example) versus one in which those goods are distributed without regard to such differences. The other, “individualism-communitarianism,” characterizes the relative preference of persons for a society in which individuals secure the conditions for their own flourishing without collective interference versus one in which the collective is charged with securing its members’ basic needs and in which individual interests are subordinated to collective ones.

This looks like a very politicized approach. Roughly speaking, you have the Horatio Alger/anybody can become president of the US success myth laced with Henry David Thoreau and his self-sufficient utopia cast against collective action (American Revolution, “power to the people”) and communism.

The authors found that people tended to shape their views about technology according to their values and the authors worried in their conclusion that nanotechnology could be the subject of intransigent attitudes on all sides. From the paper,

Nanotechnology, on this view, could go the route of nuclear power and other controversial technologies, becoming a focal point of culturally infused political conflict.

For my taste there’s just too much agenda underlying this work. Again, from the paper,

Those in a position to educate the public–from government officials to scientists to members of industry–must also intelligently frame that information in ways that make it possible for persons of diverse cultural orientation to reconcile it with their values.

Note that there is no hint that the discussion could go both ways and there’s the implication that if the information is framed “intelligently” that there will be acceptance.

If you can get your hands on the material, it is an interesting and useful read but proceed with caution.

As it’s Friday, I want to finish off with something a little lighter. Raincoaster has two amusing postings, one about Stephen Hawking and the debate on US health care reform. The other posting features a video of Carla Bruni, Mme Sarkozy and wife of French president Nicolas Sarkozy, singing. (She’s pretty good.) Have a nice weekend!

ETA (Aug. 14, 2009 at 12 pm PST) I forgot to mention that the article concludes that how much you learn about nanotechnology (i.e. your scientific literacy) does not markedly affect your perception of the risks. From the paper,

One might suppose that as members of the public learn more about nanotechnology their assessment of its risk and benefits should converge. Our results suggest that exactly the opposite is likely to happen.