Tag Archives: robots

Military robots, the latest models; Quantum computing at Univ of Toronto; Cultural Cognition Project at Yale; Carla Bruni and Stephen Hawking

There was an industry trade show of military robots this week which caught my eye since I’ve been mentioning robots, military and otherwise, in my postings lately. Apparently military enthusiasm for robots continues unabated. From the media release on Physorg.com,

“I think we’re at the beginning of an unmanned revolution,” Gary Kessler, who oversees unmanned aviation programs for the US Navy and Marines, told AFP.

“We’re spending billions of dollars on unmanned systems.”

There’s more,

In 2003, the US military had almost no robots in its arsenal but now has 7,000 unmanned aircraft and at least 10,000 ground vehicles.

The US Air Force, which initially resisted the idea of pilotless planes, said it trains more operators for unmanned aircraft than pilots for its fighter jets and bombers.

Interestingly, iRobot which sells robot vacuum cleaners (Roomba) to consumers also sells a “Wall-E lookalike robot” which searches enemy terrain and buildings to find and dismantle explosives.

This all reminds me of an article on BBC News (Call for debate on killer robots) which I posted about here when I was looking at the possibility (courtesy of an article by Jamais Cascio) of systems that are both unmanned and without operators, i.e. autonomous, intelligent systems/robots.

The University of Toronto (Canada) is hosting a conference on quantum information and control. From the media release on Azonano,

Quantum Information is a revolutionary approach to computing and communication which exploits the phenomena of quantum mechanics – the fundamental theory of nature at its most basic, sub-atomic level – to vastly enhance the capabilities of today’s computers and internet communication.

The conference is being held from August 24 – 27, 2009.

In yesterday’s posting about Andrew Maynard’s review of a book on science illiteracy I mentioned that I had a hesitation about one of the recommendations he made for further reading. Specifically, I have some reservations about the Cultural Cognition Project at Yale Law School’s work on nanotechnology. To be absolutely fair, I’ve read only an earlier version of a paper (then titled) Affect, Values, and Nanotechnology Risk Perceptions: An Experimental Investigation.

I did try to read the latest version and the other papers on nanotechnology produced by the group, but they’re behind paywalls (click on Download paper if you like, but I just tested the links and not one was accessible). So, I’m working off the copy that I could freely download at the time.

First, they are using the word cultural in a fashion that many of us are unfamiliar with. Culture in this paper is used in the context of risk perception, and the specific theoretical underpinning comes from anthropologist Mary Douglas. From the paper I downloaded,

Drawing heavily on the work of anthropologist Mary Douglas, one conception of the cultural cognition of risk divides cultural outlooks along two cross-cutting dimensions. The first, “hierarchy-egalitarianism” characterizes the relative preferences of persons for a society in which resources, opportunities, privileges and duties are distributed along fixed and differentiated (of gender, race, religion, and class, for example) versus one in which those goods are distributed without regard to such differences. The other, “individualism-communitarianism,” characterizes the relative preference of persons for a society in which individuals secure the conditions for their own flourishing without collective interference versus one in which the collective is charged with securing its members’ basic needs and in which individual interests are subordinated to collective ones.

This looks like a very politicized approach. Roughly speaking, you have the Horatio Alger/anybody-can-become-president success myth, laced with Henry David Thoreau and his self-sufficient utopia, cast against collective action (American Revolution, “power to the people”) and communism.

The authors found that people tended to shape their views about technology according to their values and the authors worried in their conclusion that nanotechnology could be the subject of intransigent attitudes on all sides. From the paper,

Nanotechnology, on this view, could go the route of nuclear power and other controversial technologies, becoming a focal point of culturally infused political conflict.

For my taste there’s just too much agenda underlying this work. Again, from the paper,

Those in a position to educate the public–from government officials to scientists to members of industry–must also intelligently frame that information in ways that make it possible for persons of diverse cultural orientation to reconcile it with their values.

Note that there is no hint that the discussion could go both ways, and there’s the implication that if the information is framed “intelligently” there will be acceptance.

If you can get your hands on the material, it is an interesting and useful read but proceed with caution.

As it’s Friday, I want to finish off with something a little lighter. Raincoaster has two amusing postings, one about Stephen Hawking and the debate on US health care reform. The other posting features a video of Carla Bruni, Mme Sarkozy and wife of French president Nicolas Sarkozy, singing. (She’s pretty good.) Have a nice weekend!

ETA (Aug.14, 2009 at 12 pm PST) I forgot to mention that the article concludes that how much you learn about nanotechnology (i.e. your scientific literacy) does not markedly affect your perception of the risks. From the paper,

One might suppose that as members of the public learn more about nanotechnology their assessment of its risk and benefits should converge. Our results suggest that exactly the opposite is likely to happen.

Autonomous algorithms; intelligent windows; pretty nano pictures

I was reminded of watching a printer pumping out page after page after page of garbage output because I had activated a process I couldn’t stop when reading Jamais Cascio’s article Autonomy without intelligence? in Fast Company last week. Cascio describes autonomous software systems operating without human intervention in the finance sector. Called high-frequency trading (HFT), it relies on networked computers making billions of micro transactions to determine, and eventually set, prices. From the Cascio article (an example referenced from a NY Times article by Charles Duhigg here),

Soon, thousands of orders began flooding the markets as high-frequency software went into high gear. Automatic programs began issuing and canceling tiny orders within milliseconds to determine how much the slower traders were willing to pay. The high-frequency computers quickly determined that some investors’ upper limit was $26.40. The price shot to $26.39, and high-frequency programs began offering to sell hundreds of thousands of shares.

As Cascio points out, the potential for abuse is huge: legal loopholes left over from “pre-computerized stock trading rules, illegal activities, and systems operating too fast for any human to oversee, let alone counter.” (For more details about high-frequency trading, read the Cascio and Duhigg articles.)
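The probing behaviour Duhigg describes (rapid-fire tiny orders used to discover the highest price slower traders will pay) can be sketched in a few lines of Python. This is purely a toy illustration under my own assumptions; the function names, prices, and tick size are invented, and real trading systems are vastly faster and more complicated.

```python
# Toy sketch of the price-probing behaviour described above.
# All names and numbers are invented for illustration.

def probe_upper_limit(will_buy_at, start=26.10, tick=0.01, ceiling=27.00):
    """Issue tiny test orders at rising prices until one is refused,
    revealing the highest price the slower traders will pay."""
    price = start
    last_accepted = None
    while price <= ceiling:
        if will_buy_at(price):   # tiny order filled, so keep probing higher
            last_accepted = price
        else:                    # order refused, so the limit has been found
            break
        price = round(price + tick, 2)
    return last_accepted

# Slower traders with a hidden upper limit of $26.40, as in Duhigg's example
hidden_limit = 26.40
limit = probe_upper_limit(lambda p: p <= hidden_limit)
print(f"Discovered limit: ${limit:.2f}")
```

Once the limit is known, the fast traders can offer their shares at a price just below it, which is the move the quoted passage describes.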

Cascio then goes on to hypothesize the use of similar networked automatic programs for military purposes. Imagine programs (algorithms) set into motion in a military situation, with no ability on our part to oversee or counteract them. The question hit home again when I found this article (Call for Debate on Killer Robots) by Jason Raimer on BBC News. Describing one of the impacts of using drone planes that are piloted remotely (sometimes from thousands of miles away),

The rise in technology has not helped in terms of limiting collateral damage, [Professor Noel Sharkey, University of Sheffield] said, because the military intelligence behind attacks was not keeping pace.

Between January 2006 and April 2009, he estimated, 60 such “drone” attacks were carried out in Pakistan. While 14 al-Qaeda were killed, some 687 civilian deaths also occurred, he said.

That physical distance from the actual theatre of war, he said, led naturally to a far greater concern: the push toward unmanned planes and ground robots that make their decisions without the help of human operators at all.

In fact, the article goes on to reveal that Israel is currently deploying the Harpy, an unmanned aerial vehicle that divebombs radar systems without any human intervention whatsoever. I gather everything is in the algorithms.

I recently came across the word intelligent as applied to windows. It’s a use for the word that contrasts strongly with Cascio’s where he implies that intelligence (in the context of the article cited previously) resides in humans. From the media release on Nanowerk News,

RavenBrick’s patent-pending products use nanotechnology to create an intelligent window filter that automatically blocks solar heat when the outside temperature is too hot, while delivering solar heat inside when the outside temperature is cold. RavenBrick smart-window filters use no electricity, wiring or control systems. They can cut building owners’ energy costs and consumption by as much as 50 percent. What’s more, RavenBrick’s smart-window filters make any interior space more comfortable by managing overheating on hot days, and significantly reduce drafts and cold spots on cold days.
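The behaviour the release describes is a simple switching rule: block solar heat above some outside temperature, admit it below. A minimal sketch of that rule in Python follows; note that the threshold and transmittance values are my own invented assumptions, and the real filter is a passive thermochromic material with no electronics or software at all.

```python
# Toy model of the thermochromic switching rule described above.
# The 25 C threshold and transmittance fractions are assumptions for
# illustration; the actual product is a passive material, not code.

def solar_transmittance(outside_temp_c, threshold_c=25.0):
    """Return the fraction of solar heat admitted through the window."""
    if outside_temp_c >= threshold_c:
        return 0.2   # hot day: block most solar heat
    return 0.8       # cold day: admit most solar heat

print(solar_transmittance(35.0))  # hot summer day
print(solar_transmittance(5.0))   # cold winter day
```

The interesting part, and what presumably earns the word “intelligent,” is that the material itself performs this conditional without any control system.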

What strikes me most about using the word intelligent to describe these new windows is that I would never have questioned it prior to juxtaposing comments from the Cascio, Duhigg, and Raimer articles. Many times I’ve heard the word intelligent or smart applied to systems or objects without ever seriously questioning it. If words are important, then what does applying the word smart or intelligent to a window imply? I’m going to be playing with that one for a while.

To finish off, here’s a link to some pretty nano pictures from the SPmages09 competition which were posted on Nanowerk News. Here’s a sample of what you’ll find,

Human malaria infected red blood cells. Li Ang, National University of Singapore

Replacing Asimov’s Laws of Responsible Robotics?; more thoughts on innovation in Canada

David Woods, professor of integrated systems engineering at Ohio State University, and Robin Murphy of Texas A&M University propose three new robot laws in the current issue of IEEE Intelligent Systems. From the media release on Science Daily, Woods says,

“When you think about it, our cultural view of robots has always been anti-people, pro-robot,” … “The philosophy has been, ‘sure, people make mistakes, but robots will be better — a perfect version of ourselves.’ We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways.”

This view contrasts somewhat with Mary King’s work on the differences between Japanese and Western perspectives on robots. She acknowledges the West’s fascination with robots and its anti-people perspectives, but also notes pervasive fears, contrasting them with Japanese perspectives, where robots are viewed more purely as beneficial and as being related to nature. You can read her work here, or check out my previous posts about Mary King’s work in my series on robots and human enhancement; the posts of July 22 and 23, 2009 are particularly relevant.

Before looking at the new laws, here’s a refresher of Asimov’s three:

  • A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  • A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Woods points out that Asimov was a writer and that his laws were developed as a literary device. Woods and Murphy’s proposed laws are these,

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
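Both sets of laws share the same shape: an ordered list of rules in which earlier rules take precedence over later ones. As a thought experiment, Asimov’s original three can be sketched as a priority-ordered check in Python. The action model and flag names here are my own invention, and the sketch deliberately glosses over everything that makes the laws hard in practice (how a robot would ever know an action “harms a human” is the real problem).

```python
# A minimal sketch of Asimov's three laws as a precedence-ordered check.
# The action model (a dict of boolean flags) is invented for illustration.

def permitted(action):
    """Return True if the action is allowed under Asimov's three laws,
    evaluated in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders, subordinate to the First Law.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("endangers_self") and not action.get("protects_human"):
        return False
    return True

print(permitted({"disobeys_order": False}))   # allowed
print(permitted({"harms_human": True}))       # forbidden by the First Law
```

Woods and Murphy’s laws would be much harder to encode this way, since they hinge on judgments like “highest legal and professional standards” and “smooth transfer of control,” which is rather the point of their critique.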

I see Rob Annan at Don’t leave Canada behind has written some more on innovation in Canada. He highlights a couple of articles in Maclean’s magazine: one focusing on John Manley, former Liberal deputy Prime Minister in Jean Chretien’s cabinet, and a two-part series on Canada’s big five universities. Manley, who’s in the process of becoming president of the Canadian Council of Chief Executives, has some rather pithy (compared to the usual) things to say about innovation and Canadian business. What makes this interesting is that the group he will be leading has 150 members, the chief executives of Canada’s biggest corporations, who claim $3.5 trillion in assets and $800 billion in revenues.

Meanwhile, the presidents of Canada’s big five universities point out that Canadian business does not develop and promote its own research and development labs relying instead on university research. Do read Rob’s blog for more discussion about this.

And since it’s Friday, I’m going to mention Raincoaster’s upcoming 3-day novel workshop on Bowen Island (Vancouver, Canada) which will be held on the Labour Day Weekend. I don’t have any details but will post them as soon as I get them. If you’re curious about Raincoaster, you can check out the regular blog here or the blog that has information about other courses here.

Viruses mine for copper at the University of BC; microscopy at the University of Victoria; the Henry Louis Gates Jr. affair, human nature, & human enhancement

Professor Scott Dunbar at the University of British Columbia’s (Canada) Norman B. Keevil Institute of Mining Engineering needed to partner with colleagues Sue Curtis and Ross MacGillivray from the Centre for Blood Research and the Department of Biochemistry and Molecular Biology after (from the media release on Nanowerk News),

“I read an article about bacteriophage – viruses that infect bacteria – being used to create nanodevices in which proteins on the phage surface are engineered to bind to gold and zinc sulfide,” says Dunbar. “And it struck me: if zinc sulfide, why not copper sulfide? And if so, then it might be possible to use these bio-engineered proteins to separate common economic sulfide minerals from waste during mineral extraction.”

Together the researchers have developed a procedure called “biopanning.” It’s a kind of genetic engineering which could lead to some useful applications.

It turns out that the phage that bind to a mineral do affect the mineral surfaces, causing them to have a different electrical charge than other minerals. The proteins on the phage also form links to each other leading to aggregation of the specific sulfide particles. “The physical and chemical changes caused by phage may be the basis for a highly selective method of mineral separation with better recovery. Another possible application is bioremediation, where metals are removed from contaminated water” says Dunbar.

In other BC news, the University of Victoria (Canada) will be getting a new microscope which senses at subatomic levels. (From the media release on Azonano),

The new microscope, called a Scanning Transmission Electron Holography Microscope (STEHM), will use an electron beam and holography techniques to observe the inside of materials and their surfaces to an expected resolution as small as one-fiftieth the size of an atom.

This is being done in collaboration with Hitachi High-Technologies which is building the microscope in Japan and installing it at U Vic in late 2010. The microscope will be located in a specially adapted room where work to prepare and calibrate it will continue until it becomes operational sometime in 2011.

After my recent series on robots and human enhancement, I feel moved to comment on the situation in the US vis-à-vis Henry Louis Gates, Jr. and his arrest by police officer James Crowley. It’s reported here and elsewhere that neither the recording of the 911 call nor the concerned neighbour who made the call supports Sergeant Crowley’s contention that the two men allegedly breaking into the house were described as ‘black’.

Only the participants know what happened and I don’t fully understand the nuances of race, class, and cultural differences that exist in the US so I can’t comment on anything other than this. It is human to hear what we expect to hear and I have an example from a much less charged situation.

Many years ago, I was transcribing notes from a taped interview (one of my first) for an article that I was writing for a newsletter. As I was transcribing, I noticed that I kept changing words so that the interview subject sounded more like me. They were synonyms but they were my words not his. Over the years I’ve gotten much better at being more exact but I’ve never forgotten how easy it is to insert your pet words (biased or not) when you’re remembering what someone said. Note: I was not in a stressful situation and I could rewind and listen again at my leisure.

I hope that Crowley and Gates, Jr. are able to work this out in some fashion and I really hope that it is done in a way that is respectful to both men and not a rush to a false resolution for the benefit of the cameras. For a more informed discussion of the situation, you may find this essay by Richard Thompson Ford in Slate helpful. It was written before the recording of the 911 call was made public but I think it still stands.

My reason for mentioning this incident is that human nature tends to assert itself in all kinds of situations including the building of robots and the debates on human enhancement, something I did not mention in my series posted (July 22 – 24, 27, 2009).

Nanotechnology enables robots and human enhancement: part 2

Mary King’s project on Robots and AI, the one I mentioned yesterday, was written in 2007 so there have been some changes since then but her focus is largely cultural and that doesn’t change so quickly. The bird’s eye view she provides of the situation in Japan and other parts of Asia contrasts with the information and ideas that are common currency in North America and, I suspect, Europe too. (As for other geographic regions, I don’t venture any comments as I’m not sufficiently familiar with the thinking in those regions.)  Take for example this,

South Korea, meanwhile, has not only announced that by 2010 it expects to have robo-cops patrolling the streets alongside its police force and army, but that its “Robot Ethics Charter” will take effect later this year. The charter includes Asimov-like laws for the robots, as well as guidelines to protect robots from abuse by humans. South Korea is concerned that some people will become addicted to robots, may want to marry their android or will use robots for illegal activities. The charter demands full human control over the robots, an idea that is likely to be popular with Japanese too. But a number of organizations and individuals in the West are bound to criticize laws that do not grant equal “human” rights to robots.

Mary goes on to cite some of the work on roboethics and robo-rights being done in the West and gives a brief discussion of some of the more apocalyptic possibilities. I think the latest incarnation of Battlestar Galactica anchored its mythology in many of the “Western” fears associated with the arrival of intelligent robots. She also mentions this,

Beyond robots becoming more ubiquitous in our lives, a vanguard of Western scientists asserts that humans will merge with the machine. Brooks says “… it is clear that robotic technology will merge with biotechnology in the first half of this century,” and he therefore concludes that “the distinction between us and robots is going to disappear.

Leading proponents of Strong AI state that humans will transcend biology and evolve to a higher level by merging with robot technology. Ray Kurzweil, a renowned inventor, transhumanist and the author of several books on “spiritual machines,” claims that immortality lies within the grasp of many of us alive today.

The concept of transhumanism does not accord well with the Japanese perspective,

Japan’s fondness for humanoid robots highlights the high regard Japanese share for the role of humans within nature. Humans are viewed as not being above nature, but a part of it.

This reminds me of the discussion taking place on the topic of synthetic biology (blog posting here) where the synthetic biologists are going to reconfigure the human genome to make it better. According to Denise Caruso (executive director of the Hybrid Vigor Institute), many of the synthetic biologists have backgrounds in IT not biology. I highly recommend Mary’s essay. It’s a longish read (5000 words) but well worth it for the insights it provides.

In Canada, we are experiencing robotic surveillance at the border with the US. The CBC reported in June that the US was launching a drone plane in the Great Lakes region of the border. It was the second such drone, the first having been deployed over the Manitoba border, and there is talk that a drone will be used on the BC border in the future. For details, go here. More tomorrow.

Nanotechnology enables robots and human enhancement: part 1

I’m doing something a little different as I’m going to be exploring some ideas about robots and AI today and human enhancement technologies over the next day or so. I have never been particularly interested in these topics but after studying and thinking about nanotechnology I have found that I can’t ignore them since nanotech is being used to enable these, for want of a better word, innovations. I have deep reservations about these areas of research, especially human enhancement, but I imagine I would have had deep reservations about electricity had I been around in the days when it was first being commercialized.

This item, Our Metallic Reflection: Considering Future Human-android Interactions, in Science Daily is what set me off,

Everyday human interaction is not what you would call perfect, so what if there was a third party added to the mix – like a metallic version of us? In a new article in Perspectives on Psychological Science, psychologist Neal J. Roese and computer scientist Eyal Amir from the University of Illinois at Urbana-Champaign investigate what human-android interactions may be like 50 years into the future.

As I understand the rough classifications, there are robots (machines that look like machines), androids (machines that look and act like humans), and cyborgs (part human/part machine). By the way, my mother can be designated a cyborg since she had her hip replaced a few years ago. It’s a pretty broad designation, including people with pacemakers or joint replacements, as well as anyone with any other implanted object not native to a human body.

The rest of the Science Daily article goes on to state that by 2060 androids will be able to speak in human-like voices, answer questions and more. The scientists studying the potential interactions are trying to understand how people will react psychologically to these androids of 2060.

For an alternative discussion about robots, AI, etc., you can take a look at a project where Mary King, a colleague and fellow classmate (we completed an MA programme at De Montfort University), compares Western and Japanese responses to them.

This research project explores the theories and work of Japanese and Western scientists in the field of robotics and AI. I ask what differences exist in the approach and expectations of Japanese and Western AI scientists, and I show how these variances came about.

Because the Western media often cites Shinto as the reason for the Japanese affinity for robots, I ask what else has shaped Japan’s harmonious feelings for intelligent machines. Why is Japan eager to develop robots, and particularly humanoid ones? I also aim to discover if religion plays a role in shaping AI scientists’ research styles and perspectives. In addition, I ask how Western and Japanese scientists envision robots/AI playing a role in our lives. Finally, I enquire how the issues of roboethics and rights for robots are perceived in Japan and the West.

You can go here for more.  Amongst other gems, you’ll find this,

Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.

It seems our androids are here already despite what the article in Science Daily indicates. More tomorrow.

Book launch announcement:  Susan Baxter, guest blogger here and lead author of The Estrogen Errors: Why Progesterone is Better for Women’s Health, is having a book launch tomorrow, Thursday, July 23, 2009 from 6 – 8 pm, at Strands Hair and Skin Treatment Centre, #203 – 131 Water St. (in the same complex as the kite store), Vancouver.

Biomimicry offers artificial trees? Yikes!

There’s an article about a company (SolarBotanic) which is modeling fake trees that will harvest energy from the sun and wind via nano leaves. I’m not sure what they mean by modeling, but it sounds like something that is more concept than anything else. The article that appears to be the original source for the information is here.

The artificial tree story in conjunction with this item, Teaching Ethics to Robo Warriors, in Fast Company has got me wondering how we’re going to deal with quasi-life. For example, we may have ‘trees’ that function a bit like real trees and we may be teaching ethics to robots. So then, what is nature? What is it to be human? I’ve been thinking about those things a lot lately as I keep gathering information about the technological changes that are coming our way.

As for yesterday’s sniffing phone article, I think the application might be more useful for sniffing out toxic gases or detecting bombs if you have security concerns.