Category Archives: robots

KAIST (Korea Advanced Institute of Science and Technology) will lead an Ideas Lab at 2016 World Economic Forum

The theme for the 2016 World Economic Forum (WEF) is ‘Mastering the Fourth Industrial Revolution’. I’m losing track of how many industrial revolutions we’ve had and this seems like a vague theme. However, there is enlightenment to be had in this Nov. 17, 2015 Korea Advanced Institute of Science and Technology (KAIST) news release on EurekAlert,

KAIST researchers will lead an IdeasLab on biotechnology for an aging society while HUBO, the winner of the 2015 DARPA Robotics Challenge, will interact with the forum participants, offering an experience of state-of-the-art robotics technology

Moving on from the news release’s subtitle, there’s more enlightenment,

Representatives from the Korea Advanced Institute of Science and Technology (KAIST) will attend the 2016 Annual Meeting of the World Economic Forum to run an IdeasLab and showcase its humanoid robot.

With over 2,500 leaders from business, government, international organizations, civil society, academia, media, and the arts expected to participate, the 2016 Annual Meeting will take place on Jan. 20-23, 2016 in Davos-Klosters, Switzerland. Under the theme of ‘Mastering the Fourth Industrial Revolution,’ [emphasis mine] global leaders will discuss the period of digital transformation [emphasis mine] that will have profound effects on economies, societies, and human behavior.

President Sung-Mo Steve Kang of KAIST will join the Global University Leaders Forum (GULF), a high-level academic meeting to foster collaboration among experts on issues of global concern for the future of higher education and the role of science in society. He will discuss how the emerging revolution in technology will affect the way universities operate and serve society. KAIST is the only Korean university participating in GULF, which is composed of prestigious universities invited from around the world.

Four KAIST professors, including Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department, will lead an IdeasLab on ‘Biotechnology for an Aging Society.’

Professor Lee said, “In recent decades, much attention has been paid to the potential effect of the growth of an aging population and problems posed by it. At our IdeasLab, we will introduce some of our research breakthroughs in biotechnology to address the challenges of an aging society.”

In particular, he will present his latest research in systems biotechnology and metabolic engineering. His research has explained the mechanisms of how traditional Oriental medicine works in our bodies by identifying structural similarities between effective compounds in traditional medicine and human metabolites, and has proposed more effective treatments by employing such compounds.

KAIST will also display its networked mobile medical service system, ‘Dr. M.’ Built upon a ubiquitous and mobile Internet, such as the Internet of Things, wearable electronics, and smart homes and vehicles, Dr. M will provide patients with a more affordable and accessible healthcare service.

In addition, Professor Jun-Ho Oh of the Mechanical Engineering Department will showcase his humanoid robot, ‘HUBO,’ during the Annual Meeting. His research team won the International Humanoid Robotics Challenge hosted by the United States Defense Advanced Research Projects Agency (DARPA), which was held in Pomona, California, on June 5-6, 2015. With 24 international teams participating in the finals, HUBO completed all eight tasks in 44 minutes and 28 seconds, 6 minutes earlier than the runner-up, and almost 11 minutes earlier than the third-place team. Team KAIST walked away with the grand prize of USD 2 million.

Professor Oh said, “Robotics technology will grow exponentially in this century, becoming a real driving force to expedite the Fourth Industrial Revolution. I hope HUBO will offer an opportunity to learn about the current advances in robotics technology.”

President Kang pointed out, “KAIST has participated in the Annual Meeting of the World Economic Forum since 2011 and has engaged with a broad spectrum of global leaders through numerous presentations and demonstrations of our excellence in education and research. Next year, we will choreograph our first robotics exhibition on HUBO and present high-tech research results in biotechnology, which, I believe, epitomizes how science and technology breakthroughs in the Fourth Industrial Revolution will shape our future in an unprecedented way.”

Based on what I’m reading in the KAIST news release, I think the conversation about the ‘Fourth revolution’ may veer toward robotics and artificial intelligence (referred to in code as “digital transformation”) as developments in these fields are likely to affect various economies.  Before proceeding with that thought, take a look at this video showcasing HUBO at the DARPA challenge,

I’m quite impressed with how the robot can recalibrate its grasp so it can pick things up and plug an electrical cord into an outlet, and how it knows whether wheels or legs will be needed to complete a task, all thanks to algorithms which give the robot a type of artificial intelligence. While it may seem more like a machine than anything else, there’s also this version of a HUBO,


Photo by David Hanson, 26 October 2006 (original upload date). Source: transferred from en.wikipedia to Commons by Mac. Author: Dayofid at English Wikipedia.

It’ll be interesting to see whether the researchers make HUBO seem more humanoid by giving it a face for its interactions with WEF attendees. A face would be more engaging but also more threatening, since there is increasing concern over robots taking work away from humans, with implications for various economies. There’s more about HUBO in its Wikipedia entry.

As for the IdeasLab, that’s been in place at the WEF since 2009 according to this WEF July 19, 2011 news release announcing an IdeasLab hub (Note: A link has been removed),

The World Economic Forum is publicly launching its biannual interactive IdeasLab hub on 19 July [2011] at 10.00 CEST. The unique IdeasLab hub features short documentary-style, high-definition (HD) videos of preeminent 21st century ideas and critical insights. The hub also provides dynamic Pecha Kucha presentations and visual IdeaScribes that trace and package complex strategic thinking into engaging and powerful images. All videos are HD broadcast quality.

To share the knowledge captured by the IdeasLab sessions, which have been running since 2009, the Forum is publishing 23 of the latest sessions, seen as the global benchmark of collaborative learning and development.

So while you might not be able to visit an IdeasLab presentation at the WEF meetings, you could get a chance to see them later.

Getting back to the robotics and artificial intelligence aspect of the 2016 WEF’s ‘digital’ theme, I noticed some reluctance to discuss how the field of robotics is affecting work and jobs in a broadcast of the Canadian television show ‘Conversations with Conrad’.

For those unfamiliar with the interviewer, Conrad Black is somewhat infamous in Canada for a number of reasons (from the Conrad Black Wikipedia entry), Note: Links have been removed,

Conrad Moffat Black, Baron Black of Crossharbour, KSG (born 25 August 1944) is a Canadian-born British former newspaper publisher and author. He is a non-affiliated life peer, and a convicted felon in the United States for fraud.[n 1] Black controlled Hollinger International, once the world’s third-largest English-language newspaper empire,[3] which published The Daily Telegraph (UK), Chicago Sun Times (U.S.), The Jerusalem Post (Israel), National Post (Canada), and hundreds of community newspapers in North America, before he was fired by the board of Hollinger in 2004.[4]

In 2004, a shareholder-initiated prosecution of Black began in the United States. Over $80 million in assets were claimed to have been improperly taken or inappropriately spent by Black.[5] He was convicted of three counts of fraud and one count of obstruction of justice in a U.S. court in 2007 and sentenced to six and a half years’ imprisonment. In 2011 two of the charges were overturned on appeal and he was re-sentenced to 42 months in prison on one count of mail fraud and one count of obstruction of justice.[6] Black was released on 4 May 2012.[7]

Despite or perhaps because of his chequered past, he is often a good interviewer and he definitely attracts interesting guests. In an Oct. 26, 2015 programme, he interviewed both former Canadian astronaut, Chris Hadfield, and Canadian-American David Frum, who’s currently editor of Atlantic Monthly and a former speechwriter for George W. Bush.

It was Black’s conversation with Frum which surprised me. They discuss robotics without ever once using the word. In a section where Frum notes that manufacturing is returning to the US, he also notes that it doesn’t mean more jobs and cites a newly commissioned plant in the eastern US employing about 40 people where before it would have employed hundreds or thousands. Unfortunately, the video has not been made available as I write this (Nov. 20, 2015) but that situation may change. You can check here.

Final thought: my guess is that economic conditions are fragile, and I don’t think anyone wants to set off a panic by mentioning robotics and disappearing jobs.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand into activities that rely on it, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”
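To make the pulse-frequency encoding described above a little more concrete, here is a minimal sketch of the idea: more pressure produces more frequent pulses, and no pressure produces none. This is not the Stanford team’s code; the pressure range, maximum pulse rate, and function names are hypothetical values chosen purely for illustration.

```python
# Toy illustration of the pulse-frequency encoding described in the Stanford
# news release: the artificial skin converts static pressure into trains of
# short electrical pulses, with higher pressure producing more frequent pulses.
# All numbers here (pressure range, pulse rates) are hypothetical.

def pressure_to_pulse_rate(pressure_kpa, max_pressure_kpa=100.0, max_rate_hz=200.0):
    """Map a static pressure reading to a pulse frequency (Hz).

    Zero pressure produces no pulses; pressure at or above the assumed
    maximum saturates at the assumed maximum pulse rate.
    """
    if pressure_kpa <= 0:
        return 0.0
    fraction = min(pressure_kpa / max_pressure_kpa, 1.0)
    return fraction * max_rate_hz

def pulse_train(pressure_kpa, duration_s=1.0):
    """Return the times (in seconds) of digital pulses emitted over a window."""
    rate = pressure_to_pulse_rate(pressure_kpa)
    if rate == 0.0:
        return []
    period = 1.0 / rate
    return [i * period for i in range(int(duration_s * rate))]

if __name__ == "__main__":
    # Hypothetical readings from a light tap through a firm handshake.
    for p in (0.0, 5.0, 50.0, 100.0):
        print(f"{p:6.1f} kPa -> {pressure_to_pulse_rate(p):6.1f} Hz, "
              f"{len(pulse_train(p))} pulses per second")
```

In the real system, of course, the pulses are produced by the nanotube-filled plastic and delivered to neurons via optogenetics; the sketch only captures the ‘more pressure, more pulses’ relationship the release describes.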

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

Informal roundup of robot movies and television programmes and a glimpse into our robot future

David Bruggeman has written an informal series of posts about robot movies. The latest, a June 27, 2015 posting on his Pasco Phronesis blog, highlights the latest Terminator film and opines that the recent interest could be traced back to the rebooted Battlestar Galactica television series (Note: Links have been removed),

I suppose this could be traced back to the reboot of Battlestar Galactica over a decade ago, but robots and androids have become an increasing presence on film and television, particularly in the last 2 years.

In the movies, the new Terminator film comes out next week, and the previews suggest we will see a new generation of killer robots traveling through time and space.  Chappie is now out on your digital medium of choice (and I’ll post about any science fiction science policy/SciFiSciPol once I see it), so you can compare its robot police to those from either edition of Robocop or the 2013 series Almost Human.  Robots also have a role …

The new television series he mentions, Humans (click on About) debuted on the US tv channel, AMC, on Sunday, June 28, 2015 (yesterday).

HUMANS is set in a parallel present, where the latest must-have gadget for any busy family is a Synth – a highly-developed robotic servant, eerily similar to its live counterpart. In the hope of transforming the way his family lives, father Joe Hawkins (Tom Goodman-Hill) purchases a Synth (Gemma Chan) against the wishes of his wife (Katharine Parkinson), only to discover that sharing life with a machine has far-reaching and chilling consequences.

Here’s a bit more information from its Wikipedia entry,

Humans (styled as HUM∀NS) is a British-American science fiction television series, debuted in June 2015 on Channel 4 and AMC.[2] Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines. The series is produced jointly by AMC, Channel 4 and Kudos.[3] The series will consist of eight episodes.[4]

David also wrote about Ex Machina, a recent robot film with artistic ambitions, in an April 26, 2015 posting on his Pasco Phronesis blog,

I finally saw Ex Machina, which recently opened in the United States.  It’s a minimalist film, with few speaking roles and a plot revolving around an intelligence test.  Of the robot movies out this year, it has received the strongest reviews, and it may take home some trophies during the next awards season.  Shot in Norway, the film is both lovely to watch and tricky to engage.  I finished the film not quite sure what the characters were thinking, and perhaps that’s a lesson from the film.

Unlike Chappie and Automata, the intelligent robot at the center of Ex Machina is not out in the world. …

He started the series with a Feb. 8, 2015 posting which previews the movies in his later postings but also includes a couple of others not mentioned in either the April or June posting, Avengers: Age of Ultron and Spare Parts.

It’s interesting to me that these robots are mostly unlike the benign robots in the movie ‘Forbidden Planet’ (a reworking of Shakespeare’s The Tempest in outer space), in ‘Lost in Space’, a 1960s television programme, and in the Jetsons animated tv series of the same decade. As far as I can tell, not having seen the new movies in question, the only benign robot in the current crop would be ‘Chappie’. It should be mentioned that the ‘Terminator’, in the person of Arnold Schwarzenegger, has over the course of three or four movies evolved from a destructive robot bent on evil to a destructive robot working on behalf of good.

I’ll add one more television programme, and I’m not sure if the robot boy is good or evil, but there’s Extant, where Halle Berry’s robot son seems to be in a version of the Pinocchio story (an ersatz child who wants to become human); the show is enjoying its second season on US television as of July 1, 2015.

Regardless of one or two ‘sweet’ robots, there seems to be a trend toward ominous robots. Perhaps, in addition to Battlestar Galactica, the concerns being raised by prominent scientists such as Stephen Hawking and those associated with the Centre for the Study of Existential Risk at the University of Cambridge have something to do with this trend, and may partially explain why Chappie did not do as well at the box office as hoped. Thematically, it was swimming against the current.

As for a glimpse into the future, there’s this Children’s Hospital of Los Angeles June 29, 2015 news release,

Many hospitals lack the resources and patient volume to employ a round-the-clock, neonatal intensive care specialist to treat their youngest and sickest patients. Telemedicine–with real-time audio and video communication between a neonatal intensive care specialist and a patient–can provide access to this level of care.

A team of neonatologists at Children’s Hospital Los Angeles investigated the use of robot-assisted telemedicine in performing bedside rounds and directing daily care for infants with mild-to-moderate disease. They found no significant differences in patient outcomes when telemedicine was used and noted a high level of parent satisfaction. This is the first published report of using telemedicine for patient rounds in a neonatal intensive care unit (NICU). Results will be published online first on June 29 in the Journal of Telemedicine and Telecare.

Glimpse into the future?

The part I find most fascinating is that there was no difference in outcomes; moreover, the parents’ satisfaction rate was high when robots (telemedicine) were used. Finally, of the families who completed the after-care survey (45%), all indicated they would be comfortable with another telemedicine (robot) experience. My comment: should robots prove to be cheaper in the long run and the research results hold as more studies are done, I imagine that hospitals will introduce them as a means of cost cutting.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company, featuring a predictive interactive tool designed by NPR (US National Public Radio) based on data from Oxford University researchers, which estimates how likely it is that your job could be automated; no one knows for sure (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger if you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
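For readers curious about what an evolutionary-computation search of this kind looks like in practice, here is a deliberately tiny sketch: candidate models are mutated and selected until their predictions match a small database of ‘experiments’. The experiments, the model encoding, and the scoring function below are invented placeholders, not the actual Tufts formalism or planarian data.

```python
import random

# Minimal sketch of evolutionary search over candidate models, in the spirit
# of the approach described above: candidate models are mutated and selected
# until their simulated outcomes match a database of published experiments.
# The "experiments" and model encoding are invented placeholders.

# Hypothetical database: (experimental perturbation vector) -> observed outcome (0 or 1).
EXPERIMENTS = [((1, 0, 1), 1), ((0, 1, 1), 0), ((1, 1, 0), 1), ((0, 0, 1), 0)]

def predict(model, perturbation):
    """A stand-in 'simulation': a thresholded weighted sum of the perturbation."""
    score = sum(w * x for w, x in zip(model, perturbation))
    return 1 if score > 0 else 0

def fitness(model):
    """Count how many database experiments the candidate model reproduces."""
    return sum(predict(model, p) == outcome for p, outcome in EXPERIMENTS)

def mutate(model, scale=0.5):
    """Perturb one randomly chosen parameter of the candidate model."""
    i = random.randrange(len(model))
    new = list(model)
    new[i] += random.uniform(-scale, scale)
    return tuple(new)

def evolve(generations=200, population_size=20):
    population = [tuple(random.uniform(-1, 1) for _ in range(3))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]          # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    best = max(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    model, score = evolve()
    print(f"best candidate model {model} reproduces {score}/{len(EXPERIMENTS)} experiments")
```

The real work evolved interpretable regulatory networks rather than weight vectors, but the loop of propose, simulate, compare against published data, and keep the best candidates is the same.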

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ re-generation.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies, than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.
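As a rough illustration of what ‘coupling a virtual device model to a virtual heart model and checking it computationally’ might look like, here is a toy closed-loop simulation with an invented safety property. The heart model, pacemaker model, parameter values, and the safety bound below are stand-ins of my own, not the project’s actual models or methods.

```python
import random

# Toy closed-loop simulation in the spirit of the "Cyberheart" platform
# described above: a virtual heart model is coupled to a virtual device model,
# and simulation is used to flag device settings that violate a safety
# property before any animal or human testing. Everything here is invented.

def simulate(pacing_timeout_ms, beats=200, seed=0):
    """Couple a crude 'heart' (occasionally dropped beats) to a crude pacemaker.

    The pacemaker fires whenever no natural beat has occurred within
    pacing_timeout_ms. Returns the shortest interval between consecutive
    events (natural or paced), which the safety check inspects.
    """
    rng = random.Random(seed)
    t, last_event = 0.0, 0.0
    shortest_interval = float("inf")
    for _ in range(beats):
        natural_gap = rng.gauss(800, 50)          # ~75 bpm intrinsic rhythm
        if rng.random() < 0.1:
            natural_gap *= 2                      # occasionally a dropped beat
        next_natural = t + natural_gap
        next_paced = last_event + pacing_timeout_ms
        event = min(next_natural, next_paced)     # whichever fires first
        shortest_interval = min(shortest_interval, event - last_event)
        last_event = t = event
    return shortest_interval

def violates_safety(pacing_timeout_ms, min_safe_interval_ms=400):
    """Safety property: the device must never drive events faster than 150 bpm."""
    return simulate(pacing_timeout_ms) < min_safe_interval_ms

if __name__ == "__main__":
    for timeout in (300, 500, 700, 1000):         # hypothetical device settings (ms)
        flag = "UNSAFE" if violates_safety(timeout) else "ok"
        print(f"pacing timeout {timeout:4d} ms -> {flag}")
```

The actual project uses far more sophisticated, patient-specific heart dynamics and formal analysis techniques; the sketch only conveys the idea of sweeping device settings against a virtual heart and rejecting unsafe ones early.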

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body: microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. From my March 19, 2013 post about a poll and synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

3D printing soft robots and flexible electronics with metal alloys

This research comes from Purdue University (Indiana, US), which seems to be on a publishing binge these days. From an April 7, 2015 news item on Nanowerk,

New research shows how inkjet-printing technology can be used to mass-produce electronic circuits made of liquid-metal alloys for “soft robots” and flexible electronics.

Elastic technologies could make possible a new class of pliable robots and stretchable garments that people might wear to interact with computers or for therapeutic purposes. However, new manufacturing techniques must be developed before soft machines become commercially feasible, said Rebecca Kramer, an assistant professor of mechanical engineering at Purdue University.

“We want to create stretchable electronics that might be compatible with soft machines, such as robots that need to squeeze through small spaces, or wearable technologies that aren’t restrictive of motion,” she said. “Conductors made from liquid metal can stretch and deform without breaking.”

A new potential manufacturing approach focuses on harnessing inkjet printing to create devices made of liquid alloys.

“This process now allows us to print flexible and stretchable conductors onto anything, including elastic materials and fabrics,” Kramer said.

An April 7, 2015 Purdue University news release (also on EurekAlert) by Emil Venere, which originated the news item, expands on the theme,

A research paper about the method will appear on April 18 [2015] in the journal Advanced Materials. The paper generally introduces the method, called mechanically sintered gallium-indium nanoparticles, and describes research leading up to the project. It was authored by postdoctoral researcher John William Boley, graduate student Edward L. White and Kramer.

A printable ink is made by dispersing the liquid metal in a non-metallic solvent using ultrasound, which breaks up the bulk liquid metal into nanoparticles. This nanoparticle-filled ink is compatible with inkjet printing.

“Liquid metal in its native form is not inkjet-able,” Kramer said. “So what we do is create liquid metal nanoparticles that are small enough to pass through an inkjet nozzle. Sonicating liquid metal in a carrier solvent, such as ethanol, both creates the nanoparticles and disperses them in the solvent. Then we can print the ink onto any substrate. The ethanol evaporates away so we are just left with liquid metal nanoparticles on a surface.”

After printing, the nanoparticles must be rejoined by applying light pressure, which renders the material conductive. This step is necessary because the liquid-metal nanoparticles are initially coated with oxidized gallium, which acts as a skin that prevents electrical conductivity.

“But it’s a fragile skin, so when you apply pressure it breaks the skin and everything coalesces into one uniform film,” Kramer said. “We can do this either by stamping or by dragging something across the surface, such as the sharp edge of a silicon tip.”

The approach makes it possible to select which portions to activate depending on particular designs, suggesting that a blank film might be manufactured for a multitude of potential applications.

“We selectively activate what electronics we want to turn on by applying pressure to just those areas,” said Kramer, who this year was awarded an Early Career Development award from the National Science Foundation, which supports research to determine how to best develop the liquid-metal ink.

The process could make it possible to rapidly mass-produce large quantities of the film.

Future research will explore how the interaction between the ink and the surface being printed on might be conducive to the production of specific types of devices.

“For example, how do the nanoparticles orient themselves on hydrophobic versus hydrophilic surfaces? How can we formulate the ink and exploit its interaction with a surface to enable self-assembly of the particles?” she said.

The researchers also will study and model how individual particles rupture when pressure is applied, providing information that could allow the manufacture of ultrathin traces and new types of sensors.

Here’s a link to and a citation for the paper,

Nanoparticles: Mechanically Sintered Gallium–Indium Nanoparticles by John William Boley, Edward L. White and Rebecca K. Kramer. Advanced Materials Volume 27, Issue 14, page 2270, April 8, 2015 DOI: 10.1002/adma.201570094 Article first published online: 7 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

A bio-inspired robotic sock from Singapore’s National University

Should you ever be confined to a bed over a long period of time or find yourself unable to move your legs at will, this robotic sock could help you avoid blood clots according to a Feb. 10, 2015 National University of Singapore news release (also on EurekAlert but dated Feb. 13, 2015),

Patients who are bedridden or unable to move their legs are often at risk of developing Deep Vein Thrombosis (DVT), a potentially life-threatening condition caused by blood clots forming along the lower extremity veins of the legs. A team of researchers from the National University of Singapore’s (NUS) Yong Loo Lin School of Medicine and Faculty of Engineering has invented a novel sock that can help prevent DVT and improve survival rates of patients.

Equipped with soft actuators that mimic the tentacle movements of corals, the robotic sock emulates natural lower leg muscle contractions in the wearer’s leg, thereby promoting blood circulation throughout the wearer’s body. In addition, the novel device can potentially optimise therapy sessions and enable the patient’s lower leg movements to be monitored to improve therapy outcomes.

The invention is created by Assistant Professor Lim Jeong Hoon from the NUS Department of Medicine, as well as Assistant Professor Raye Yeow Chen Hua and first-year PhD candidate Mr Low Fanzhe of the NUS Department of Biomedical Engineering.

The news release goes on to contrast this new technique with the pharmacological and other methods currently in use,

Current approaches to prevent DVT include pharmacological methods which involve using anti-coagulation drugs to prevent blood from clotting, and mechanical methods that involve the use of compressive stimulations to assist blood flow.

While pharmacological methods are competent in preventing DVT, there is a primary detrimental side effect – there is higher risk of excessive bleeding which can lead to death, especially for patients who suffered hemorrhagic stroke. On the other hand, current mechanical methods such as the use of compression stockings have not demonstrated significant reduction in DVT risk.

In the course of exploring an effective solution that can prevent DVT, Asst Prof Lim, who is a rehabilitation clinician, was inspired by the natural role of the human ankle muscles in facilitating venous blood flow back to the heart. He worked with Asst Prof Yeow and Mr Low to derive a method that can perform this function for patients who are bedridden or unable to move their legs.

The team turned to nature for inspiration to develop a device that is akin to human ankle movements. They found similarities in the elegant structural design of the coral tentacle, which can extend to grab food and contract to bring the food closer for consumption, and invented soft actuators that mimic this “push and pull” mechanism.

By integrating the actuators with a sock and the use of a programmable pneumatic pump-valve control system, the invention is able to create the desired robot-assisted ankle joint motions to facilitate blood flow in the leg.

Explaining the choice of materials, Mr Low said, “We chose to use only soft components and actuators to increase patient comfort during use, hence minimising the risk of injury from excessive mechanical forces. Compression stockings are currently used in the hospital wards, so it makes sense to use a similar sock-based approach to provide comfort and minimise bulk on the ankle and foot.”

The sock complements conventional ankle therapy exercises that therapists perform on patients, thereby optimising therapy time and productivity. In addition, the sock can be worn for prolonged durations to provide robot-assisted therapy, on top of the therapist-assisted sessions. The sock is also embedded with sensors to track the ankle joint angle, allowing the patient’s ankle motion to be monitored for better treatment.

Said Asst Prof Yeow, “Given its compact size, modular design and ease of use, the soft robotic sock can be adopted in hospital wards and rehabilitation centres for on-bed applications to prevent DVT among stroke patients or even at home for bedridden patients. By reducing the risk of DVT using this device, we hope to improve survival rates of these patients.”
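Since the news release describes a programmable pneumatic pump-valve system driving the soft actuators while embedded sensors track the ankle joint angle, a control loop along the following lines seems plausible. To be clear, the NUS team has not published implementation details; the class names, timings, and angles below are hypothetical and the hardware is simulated.

```python
import time

# Hypothetical sketch of the kind of control loop a programmable pneumatic
# pump-valve system might run to drive the robotic sock's soft actuators
# through repeated ankle flexion/extension cycles while logging the joint
# angle. Hardware calls are simulated; all timings and angles are invented.

class SimulatedSockHardware:
    """Stand-in for the pump-valve system and ankle-angle sensor."""
    def __init__(self):
        self.angle_deg = 0.0        # neutral ankle position

    def inflate(self):
        self.angle_deg = 20.0       # pretend inflation dorsiflexes the ankle

    def deflate(self):
        self.angle_deg = -15.0      # pretend deflation plantarflexes it

    def read_angle(self):
        return self.angle_deg

def run_therapy_session(hardware, cycles=5, hold_s=0.1):
    """Run repeated actuation cycles and return the logged ankle angles."""
    log = []
    for cycle in range(cycles):
        hardware.inflate()
        time.sleep(hold_s)          # hold the flexed position briefly
        log.append((cycle, "inflated", hardware.read_angle()))
        hardware.deflate()
        time.sleep(hold_s)
        log.append((cycle, "deflated", hardware.read_angle()))
    return log

if __name__ == "__main__":
    for entry in run_therapy_session(SimulatedSockHardware()):
        print(entry)
```

The logged angles stand in for the monitoring the release mentions, which would let a therapist confirm the ankle is actually moving through the intended range.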

The team does not seem to have published any papers about this work although there are plans for clinical trials and commercialization (from the news release),

To further investigate the effectiveness of the robotic sock, Asst Prof Lim, Asst Prof Yeow and Mr Low will be conducting pilot clinical trials with about 30 patients at the National University Hospital over six months, starting March 2015. They hope that the pilot clinical trials will help them to obtain patient and clinical feedback to further improve the design and capabilities of the device.

The team intends to conduct trials across different local hospitals for better evaluation, and they also hope to commercialise the device in future.

The researchers have provided an image of the sock on a ‘patient’,


Caption: NUS researchers (from right to left) Assistant Professor Raye Yeow, Mr Low Fanzhe and Dr Liu Yuchun demonstrating the novel bio-inspired robotic sock.
Credit: National University of Singapore

‘Eve’ (robot/artificial intelligence) searches for new drugs

Following on today’s (Feb. 5, 2015) earlier post, The future of work during the age of robots and artificial intelligence, here’s a Feb. 3, 2015 news item on ScienceDaily featuring ‘Eve’, a scientist robot,

Eve, an artificially-intelligent ‘robot scientist’ could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.

A Feb. 4, 2015 University of Manchester press release (also on EurekAlert but dated Feb. 3, 2015), which originated the news item, gives a brief introduction to robot scientists,

Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to autonomously discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.

“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.

“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”

The press release goes on to describe how ‘Eve’ works,

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.

Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.

To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.

Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”

To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are currently routinely used to protect against malaria, and are given to over a million children; however, the emergence of strains of parasites resistant to existing drugs means that the search for new drugs is becoming increasingly more urgent.

“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor Oliver. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”
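The workflow described earlier in the release, screening a random subset, re-testing hits to weed out false positives, and then letting a learned model prioritise which untested compounds to assay, is essentially an active-learning loop. Here is a minimal sketch of that loop; the compound library, assay, and model are invented placeholders and bear no relation to Eve’s actual software.

```python
import random

# Minimal sketch of the screen-then-learn loop described in the press release:
# screen a random subset of the compound library, re-test hits to weed out
# false positives, then use a simple learned model to prioritise which
# untested compounds to assay next. Everything here is a toy stand-in.

random.seed(1)

# Hypothetical library: each compound is a feature vector; the hidden "assay"
# responds to the first feature plus noise.
LIBRARY = {f"cmpd_{i}": [random.random() for _ in range(3)] for i in range(500)}

def assay(name, noise=0.15):
    """Noisy stand-in for a wet-lab screen; returns True if the compound 'hits'."""
    activity = LIBRARY[name][0] + random.gauss(0, noise)
    return activity > 0.8

def confirm(name, repeats=3):
    """Re-test a hit several times to reduce the chance of a false positive."""
    return sum(assay(name) for _ in range(repeats)) >= 2

def model_score(name, confirmed_hits):
    """Crude 'machine learning': similarity to the average confirmed hit."""
    if not confirmed_hits:
        return 0.0
    centroid = [sum(LIBRARY[h][k] for h in confirmed_hits) / len(confirmed_hits)
                for k in range(3)]
    return -sum((a - b) ** 2 for a, b in zip(LIBRARY[name], centroid))

# Stage 1: random subset screen plus confirmation of hits.
subset = random.sample(list(LIBRARY), 50)
confirmed = [name for name in subset if assay(name) and confirm(name)]

# Stage 2: rank the untested compounds by the model and assay only the top few.
untested = [name for name in LIBRARY if name not in subset]
prioritised = sorted(untested, key=lambda n: model_score(n, confirmed), reverse=True)[:20]
new_hits = [name for name in prioritised if assay(name) and confirm(name)]

print(f"confirmed hits from random screen: {len(confirmed)}/50")
print(f"confirmed hits from model-guided screen: {len(new_hits)}/20")
```

The payoff the release describes is visible even in the toy version: the model-guided stage finds hits at a much higher rate than brute-force random screening, which is what makes the approach cheaper.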

Here’s a link to and a citation for the paper,

Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases by Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N. Soldatova, Kurt De Grave, Jan Ramon, Michaela de Clare, Worachart Sirawaraporn, Stephen G. Oliver, and Ross D. King. Journal of the Royal Society Interface March 2015 Volume: 12 Issue: 104 DOI: 10.1098/rsif.2014.1289 Published 4 February 2015

This paper is open access.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew research and the response to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, Manish Singh has written about some of his concerns as a writer who could be displaced in a Jan. 31, 2015 (?) article for Beta News (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software. Narrative Science, a Chicago-based company offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.
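
To give a sense of the general idea (though nothing like the scale or sophistication of the systems AP and Narrative Science actually use, which I don’t have access to), here’s a toy sketch of generating a news-style paragraph from structured data. Every name and figure in it is invented for illustration.

```python
# Toy sketch: generating a short earnings story from structured data.
# Real automated-writing systems (such as those used by AP or Narrative
# Science) are vastly more sophisticated; this only shows the basic idea
# of filling prose templates from a data record. All values are made up.

earnings = {
    "company": "Acme Corp.",
    "quarter": "Q4 2014",
    "revenue_m": 125.4,          # revenue in millions of dollars
    "revenue_change_pct": 8.2,   # year-over-year change
    "eps": 1.07,                 # earnings per share
    "eps_estimate": 0.98,        # analysts' consensus estimate
}

def earnings_story(d):
    direction = "rose" if d["revenue_change_pct"] >= 0 else "fell"
    verdict = "beating" if d["eps"] >= d["eps_estimate"] else "missing"
    return (
        f"{d['company']} reported {d['quarter']} revenue of "
        f"${d['revenue_m']:.1f} million, which {direction} "
        f"{abs(d['revenue_change_pct']):.1f} percent from a year earlier. "
        f"Earnings came in at ${d['eps']:.2f} per share, {verdict} the "
        f"analyst estimate of ${d['eps_estimate']:.2f}."
    )

print(earnings_story(earnings))
```

Feed a template like that thousands of data records and you get thousands of serviceable, if formulaic, stories, which is essentially how output volume can jump tenfold without hiring anyone.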

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists.  C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic that some are suggesting is superior to human art critics, described in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics‘ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.
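
For readers wondering how such a bot might be put together, here’s a minimal sketch of the general approach: hand an image to an off-the-shelf image-captioning (deep learning) model and present its literal description as a ‘review.’ This is not Plummer Fernandez’s actual code, and the library and model named below are my own assumptions, chosen only for illustration.

```python
# Minimal sketch of an art-describing bot built on a pretrained
# image-captioning model. Not the Novice Art Blogger's real code; the
# library (Hugging Face transformers) and the model are assumptions made
# purely for illustration.
# Requires: pip install transformers torch pillow
from transformers import pipeline

# Pretrained image-to-text (captioning) pipeline; any similar model would do.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def review_artwork(image_path: str) -> str:
    """Return a plain-spoken 'review': literally what the model sees."""
    caption = captioner(image_path)[0]["generated_text"].strip()
    return f"At first glance I see {caption}. It reminds me of something I can't quite place."

if __name__ == "__main__":
    print(review_artwork("artwork.jpg"))  # any local image file
```

The charm of the real project comes from exactly this literal-mindedness: the model reports what it sees, with no PR fluff or personal favouritism getting in the way.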

Art project (autonomous bot purchases illegal goods) seized by Swiss law enforcement

Having just attended a talk on Robotics and Rehabilitation which included a segment on Robo Ethics, I found the news of an art project where an autonomous bot (robot) is set loose on the darknet to purchase goods (not all of them illegal) fascinating in itself (it was part of an art exhibition which also displayed the proceeds of the darknet activity). But things got more interesting when the exhibit attracted legal scrutiny in the UK and occasioned legal action in Switzerland.

Here’s more from a Jan. 23, 2015 article by Mike Masnick for Techdirt (Note: A link has been removed),

… some London-based Swiss artists, !Mediengruppe Bitnik [(Carmen Weisskopf and Domagoj Smoljo)], presented an exhibition in Zurich of The Darknet: From Memes to Onionland. Specifically, they had programmed a bot with some Bitcoin to randomly buy $100 worth of things each week via a darknet market, like Silk Road (in this case, it was actually Agora). The artists’ focus was more about the nature of dark markets, and whether or not it makes sense to make them illegal:

The pair see parallels between copyright law and drug laws: “You can enforce laws, but what does that mean for society? Trading is something people have always done without regulation, but today it is regulated,” says ays [sic] Weiskopff.

“There have always been darkmarkets in cities, online or offline. These questions need to be explored. But what systems do we have to explore them in? Post Snowden, space for free-thinking online has become limited, and offline is not a lot better.”
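
Stripped of the darknet and the Bitcoin, the bot’s core behaviour as described is very simple: once a week, spend a fixed budget on one randomly chosen listing. Here’s a minimal, hypothetical sketch of just that selection logic, with a harmless stand-in catalogue and no connection to any real marketplace; it does not reflect !Mediengruppe Bitnik’s actual implementation.

```python
# Minimal sketch of the Random Darknet Shopper's selection logic only:
# a fixed weekly budget spent on one randomly chosen affordable listing.
# The catalogue is a harmless stand-in; nothing here talks to any market,
# and this is not the artists' actual code.
import random

WEEKLY_BUDGET_USD = 100

# Hypothetical listings: (title, price in USD).
catalogue = [
    ("baseball cap", 25),
    ("pair of trainers", 75),
    ("novelty stickers", 10),
    ("used paperback", 8),
]

def weekly_purchase(listings, budget=WEEKLY_BUDGET_USD):
    """Pick one affordable listing at random; return None if nothing fits."""
    affordable = [item for item in listings if item[1] <= budget]
    return random.choice(affordable) if affordable else None

item = weekly_purchase(catalogue)
if item:
    print(f"This week's order: {item[0]} (${item[1]})")
```

The legal and ethical questions, of course, come not from this trivial logic but from what the bot was pointed at and who answers for the parcels that arrive.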

Interestingly the bot got excellent service as Mike Power wrote in his Dec. 5, 2014 review of the show. Power also highlights some of the legal, ethical, and moral implications,

The gallery is next door to a police station, but the artists say they are not afraid of legal repercussions of their bot buying illegal goods.

“We are the legal owner of the drugs [the bot purchased 10 ecstasy pills along with a baseball cap and a pair of sneakers/runners/trainers, among other items] – we are responsible for everything the bot does, as we executed the code,” says Smoljo. “But our lawyer and the Swiss constitution says art in the public interest is allowed to be free.”

The project also aims to explore the ways that trust is built between anonymous participants in a commercial transaction for possibly illegal goods. Perhaps most surprisingly, not one of the 12 deals the robot has made has ended in a scam.

“The markets copied procedures from Amazon and eBay – their rating and feedback system is so interesting,” adds Smoljo. “With such simple tools you can gain trust. The service level was impressive – we had 12 items and everything arrived.”

“There has been no scam, no rip-off, nothing,” says Weiskopff. “One guy could not deliver a handbag the bot ordered, but he then returned the bitcoins to us.”

The exhibition, which ran from Oct. 18, 2014 to Jan. 11, 2015, enjoyed an uninterrupted run, but there were concerns in the UK (from the Power article),

A spokesman for the National Crime Agency, which incorporates the National Cyber Crime Unit, was less philosophical, acknowledging that the question of criminal culpability in the case of a randomised software agent making a purchase of an illegal drug was “very unusual”.

“If the purchase is made in Switzerland, then it’s of course potentially subject to Swiss law, on which we couldn’t comment,” said the NCA. “In the UK, it’s obviously illegal to purchase a prohibited drug (such as ecstasy), but any criminal liability would need to assessed on a case-by-case basis.”

Masnick describes the followup,

Apparently, that [case-by-case] assessment has concluded in this case, because right after the exhibit closed in Switzerland, law enforcement showed up to seize stuff …

!Mediengruppe Bitnik issued a Jan. 15, 2015 press release (Note: Links have been removed),

«Can a robot, or a piece of software, be jailed if it commits a crime? Where does legal culpability lie if code is criminal by design or default? What if a robot buys drugs, weapons, or hacking equipment and has them sent to you, and police intercept the package?» These are some of the questions Mike Power asked when he reviewed the work «Random Darknet Shopper» in The Guardian. The work was part of the exhibition «The Darknet – From Memes to Onionland. An Exploration» in the Kunst Halle St. Gallen, which closed on Sunday, January 11, 2015. For the duration of the exhibition, !Mediengruppe Bitnik sent a software bot on a shopping spree in the Deepweb. Random Darknet Shopper had a budget of $100 in Bitcoins weekly, which it spent on a randomly chosen item from the deepweb shop Agora. The work and the exhibition received wide attention from the public and the press. The exhibition was well-attended and was discussed in a wide range of local and international press from Saiten to Vice, Arte, Libération, CNN, Forbes. «There’s just one problem», The Washington Post wrote in January about the work, «recently, it bought 10 ecstasy pills».

What does it mean for a society, when there are robots which act autonomously? Who is liable, when a robot breaks the law on its own initiative? These were some of the main questions the work Random Darknet Shopper posed. Global questions, which will now be negotiated locally.

On the morning of January 12, the day after the three-month exhibition was closed, the public prosecutor’s office of St. Gallen seized and sealed our work. It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited by destroying them. This is what we know at present. We believe that the confiscation is an unjustified intervention into freedom of art. We’d also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced, that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.

This project brings to mind Isaac Asimov’s three laws of robotics and a question (from the Wikipedia entry; Note: Links have been removed),

The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Here’s my question: how do you programme a robot to know what would injure a human being? For example, if a human ingests an ecstasy pill the bot purchased, would that be covered by the first law?

Getting back to the robot ethics talk I recently attended: it was given by Ajung Moon, a Ph.D. student at the University of British Columbia (Vancouver, Canada) studying human-robot interaction and roboethics, and a mechatronics engineer with a sprinkle of philosophy background. She has a blog, Roboethic info DataBase, where you can read more on robots and ethics.

I strongly recommend reading both Masnick’s post (he positions this action in a larger context) and Power’s article (more details and images from the exhibit).