Category Archives: robots

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company featuring a predictive interactive tool, designed by NPR (US National Public Radio) and based on data from Oxford University researchers, which estimates how likely it is that your job will be automated; no one knows for sure (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger if you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
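
For readers who like to see the shape of an idea in code, here’s a deliberately tiny Python sketch of the general evolutionary-computation approach the release describes: candidate regulatory networks are scored on how well their simulated experiments match published outcomes, and the best candidates are mutated and re-selected. This is not the Lobo and Levin software; the gene names, published outcomes, and scoring rule below are invented purely for illustration.

```python
import random

# Hypothetical toy setup: a "network" is a set of signed interaction weights
# among three pattern genes; an "experiment" knocks out one gene and records
# whether the simulated worm regrows a head ("H") or a tail ("T").
GENES = ["anterior", "posterior", "wnt"]

# Invented stand-in for a database of published results: knockout -> outcome.
PUBLISHED = {"anterior": "T", "posterior": "H", "wnt": "H", None: "H"}

def random_network():
    return {(a, b): random.choice([-1, 0, 1]) for a in GENES for b in GENES}

def simulate(network, knockout):
    # Crude read-out: sum the influences on "anterior" with the knocked-out
    # gene silenced; a positive signal means a head regrows, otherwise a tail.
    signal = sum(network[(g, "anterior")] for g in GENES if g != knockout)
    return "H" if signal > 0 else "T"

def fitness(network):
    # How many published experiments does this candidate network reproduce?
    return sum(simulate(network, ko) == outcome for ko, outcome in PUBLISHED.items())

def mutate(network):
    child = dict(network)
    child[random.choice(list(child))] = random.choice([-1, 0, 1])
    return child

def evolve(generations=200, pop_size=30):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", len(PUBLISHED))
```

The real system evolves far richer regulatory models against a curated database of planarian experiments, but the score, select and mutate loop has the same basic shape.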

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies, than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.
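
To make the ‘virtual heart plus virtual device’ idea a little more concrete, here is a toy Python sketch of the kind of coupled simulation the release describes: a crude heart model that occasionally drops beats, a pacemaker model that fires when a beat is overdue, and a simple safety property checked over the simulated run. The numbers and models are invented for illustration and are not the project’s actual Cyberheart platform.

```python
import random

random.seed(1)

def heart_intervals(n=200):
    """Intrinsic beat-to-beat intervals (milliseconds) from a crude heart model."""
    for _ in range(n):
        if random.random() < 0.05:            # occasional dropped beat
            yield random.uniform(1600, 2400)
        else:
            yield random.gauss(800, 50)

def simulate(pacing_limit_ms=1000):
    """Couple the heart model with a simple pacemaker; return the longest gap seen."""
    longest_gap = 0.0
    for intrinsic in heart_intervals():
        # Device logic: if no intrinsic beat arrives within the limit,
        # the pacemaker fires at the limit, capping the gap between beats.
        gap = min(intrinsic, pacing_limit_ms)
        longest_gap = max(longest_gap, gap)
    return longest_gap

# Simulation-based check of a simple safety property: with the device active,
# no gap between beats should ever exceed the pacing limit.
worst = simulate(pacing_limit_ms=1000)
assert worst <= 1000, "safety property violated"
print(f"longest beat-to-beat gap with pacing: {worst:.0f} ms")
```

The actual research replaces both toy models with patient-specific heart dynamics and real device logic, and adds formal mathematical analysis on top of simulation, but the idea of exercising a device against a virtual heart before any animal or human trial is the same.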

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. From my March 19, 2013 post about a poll and synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

3D printing soft robots and flexible electronics with metal alloys

This research comes from Purdue University (Indiana, US), which seems to be on a publishing binge these days. From an April 7, 2015 news item on Nanowerk,

New research shows how inkjet-printing technology can be used to mass-produce electronic circuits made of liquid-metal alloys for “soft robots” and flexible electronics.

Elastic technologies could make possible a new class of pliable robots and stretchable garments that people might wear to interact with computers or for therapeutic purposes. However, new manufacturing techniques must be developed before soft machines become commercially feasible, said Rebecca Kramer, an assistant professor of mechanical engineering at Purdue University.

“We want to create stretchable electronics that might be compatible with soft machines, such as robots that need to squeeze through small spaces, or wearable technologies that aren’t restrictive of motion,” she said. “Conductors made from liquid metal can stretch and deform without breaking.”

A new potential manufacturing approach focuses on harnessing inkjet printing to create devices made of liquid alloys.

“This process now allows us to print flexible and stretchable conductors onto anything, including elastic materials and fabrics,” Kramer said.

An April 7, 2015 Purdue University news release (also on EurekAlert) by Emil Venere, which originated the news item, expands on the theme,

A research paper about the method will appear on April 18 [2015] in the journal Advanced Materials. The paper generally introduces the method, called mechanically sintered gallium-indium nanoparticles, and describes research leading up to the project. It was authored by postdoctoral researcher John William Boley, graduate student Edward L. White and Kramer.

A printable ink is made by dispersing the liquid metal in a non-metallic solvent using ultrasound, which breaks up the bulk liquid metal into nanoparticles. This nanoparticle-filled ink is compatible with inkjet printing.

“Liquid metal in its native form is not inkjet-able,” Kramer said. “So what we do is create liquid metal nanoparticles that are small enough to pass through an inkjet nozzle. Sonicating liquid metal in a carrier solvent, such as ethanol, both creates the nanoparticles and disperses them in the solvent. Then we can print the ink onto any substrate. The ethanol evaporates away so we are just left with liquid metal nanoparticles on a surface.”

After printing, the nanoparticles must be rejoined by applying light pressure, which renders the material conductive. This step is necessary because the liquid-metal nanoparticles are initially coated with oxidized gallium, which acts as a skin that prevents electrical conductivity.

“But it’s a fragile skin, so when you apply pressure it breaks the skin and everything coalesces into one uniform film,” Kramer said. “We can do this either by stamping or by dragging something across the surface, such as the sharp edge of a silicon tip.”

The approach makes it possible to select which portions to activate depending on particular designs, suggesting that a blank film might be manufactured for a multitude of potential applications.

“We selectively activate what electronics we want to turn on by applying pressure to just those areas,” said Kramer, who this year was awarded an Early Career Development award from the National Science Foundation, which supports research to determine how to best develop the liquid-metal ink.

The process could make it possible to rapidly mass-produce large quantities of the film.

Future research will explore how the interaction between the ink and the surface being printed on might be conducive to the production of specific types of devices.

“For example, how do the nanoparticles orient themselves on hydrophobic versus hydrophilic surfaces? How can we formulate the ink and exploit its interaction with a surface to enable self-assembly of the particles?” she said.

The researchers also will study and model how individual particles rupture when pressure is applied, providing information that could allow the manufacture of ultrathin traces and new types of sensors.

Here’s a link to and a citation for the paper,

Nanoparticles: Mechanically Sintered Gallium–Indium Nanoparticles by John William Boley, Edward L. White and Rebecca K. Kramer. Advanced Materials Volume 27, Issue 14, page 2270, April 8, 2015 DOI: 10.1002/adma.201570094 Article first published online: 7 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

A bio-inspired robotic sock from Singapore’s National University

Should you ever be confined to a bed over a long period of time or find yourself unable to move your legs at will, this robotic sock could help you avoid blood clots according to a Feb. 10, 2015 National University of Singapore news release (also on EurekAlert but dated Feb. 13, 2015),

Patients who are bedridden or unable to move their legs are often at risk of developing Deep Vein Thrombosis (DVT), a potentially life-threatening condition caused by blood clots forming along the lower extremity veins of the legs. A team of researchers from the National University of Singapore’s (NUS) Yong Loo Lin School of Medicine and Faculty of Engineering has invented a novel sock that can help prevent DVT and improve survival rates of patients.

Equipped with soft actuators that mimic the tentacle movements of corals, the robotic sock emulates natural lower leg muscle contractions in the wearer’s leg, thereby promoting blood circulation throughout the wearer’s body. In addition, the novel device can potentially optimise therapy sessions and enable the patient’s lower leg movements to be monitored to improve therapy outcomes.

The invention is created by Assistant Professor Lim Jeong Hoon from the NUS Department of Medicine, as well as Assistant Professor Raye Yeow Chen Hua and first-year PhD candidate Mr Low Fanzhe of the NUS Department of Biomedical Engineering.

The news release goes on to contrast this new technique with the pharmacological and other methods currently in use,

Current approaches to prevent DVT include pharmacological methods which involve using anti-coagulation drugs to prevent blood from clotting, and mechanical methods that involve the use of compressive stimulations to assist blood flow.

While pharmacological methods are competent in preventing DVT, there is a primary detrimental side effect – there is higher risk of excessive bleeding which can lead to death, especially for patients who suffered hemorrhagic stroke. On the other hand, current mechanical methods such as the use of compression stockings have not demonstrated significant reduction in DVT risk.

In the course of exploring an effective solution that can prevent DVT, Asst Prof Lim, who is a rehabilitation clinician, was inspired by the natural role of the human ankle muscles in facilitating venous blood flow back to the heart. He worked with Asst Prof Yeow and Mr Low to derive a method that can perform this function for patients who are bedridden or unable to move their legs.

The team turned to nature for inspiration to develop a device that is akin to human ankle movements. They found similarities in the elegant structural design of the coral tentacle, which can extend to grab food and contract to bring the food closer for consumption, and invented soft actuators that mimic this “push and pull” mechanism.

By integrating the actuators with a sock and the use of a programmable pneumatic pump-valve control system, the invention is able to create the desired robot-assisted ankle joint motions to facilitate blood flow in the leg.

Explaining the choice of materials, Mr Low said, “We chose to use only soft components and actuators to increase patient comfort during use, hence minimising the risk of injury from excessive mechanical forces. Compression stockings are currently used in the hospital wards, so it makes sense to use a similar sock-based approach to provide comfort and minimise bulk on the ankle and foot.”

The sock complements conventional ankle therapy exercises that therapists perform on patients, thereby optimising therapy time and productivity. In addition, the sock can be worn for prolonged durations to provide robot-assisted therapy, on top of the therapist-assisted sessions. The sock is also embedded with sensors to track the ankle joint angle, allowing the patient’s ankle motion to be monitored for better treatment.

Said Asst Prof Yeow, “Given its compact size, modular design and ease of use, the soft robotic sock can be adopted in hospital wards and rehabilitation centres for on-bed applications to prevent DVT among stroke patients or even at home for bedridden patients. By reducing the risk of DVT using this device, we hope to improve survival rates of these patients.”

The team does not seem to have published any papers about this work although there are plans for clinical trials and commercialization (from the news release),

To further investigate the effectiveness of the robotic sock, Asst Prof Lim, Asst Prof Yeow and Mr Low will be conducting pilot clinical trials with about 30 patients at the National University Hospital over six months, starting March 2015. They hope that the pilot clinical trials will help them to obtain patient and clinical feedback to further improve the design and capabilities of the device.

The team intends to conduct trials across different local hospitals for better evaluation, and they also hope to commercialise the device in future.

The researchers have provided an image of the sock on a ‘patient’,

Caption: NUS researchers (from right to left) Assistant Professor Raye Yeow, Mr Low Fanzhe and Dr Liu Yuchun demonstrating the novel bio-inspired robotic sock.
Credit: National University of Singapore

‘Eve’ (robot/artificial intelligence) searches for new drugs

Following on today’s (Feb. 5, 2015) earlier post, The future of work during the age of robots and artificial intelligence, here’s a Feb. 3, 2015 news item on ScienceDaily featuring ‘Eve’, a scientist robot,

Eve, an artificially-intelligent ‘robot scientist’ could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.

A Feb. 4, 2015 University of Manchester press release (also on EurekAlert but dated Feb. 3, 2015), which originated the news item, gives a brief introduction to robot scientists,

Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to autonomously discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.

“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.

“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”

The press release goes on to describe how ‘Eve’ works,

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.

Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.

To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.

Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”

To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are currently routinely used to protect against malaria, and are given to over a million children; however, the emergence of strains of parasites resistant to existing drugs means that the search for new drugs is becoming increasingly more urgent.

“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor Oliver. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”
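
For the curious, here is a toy Python sketch of the screen, confirm, learn and rank loop the release describes; it is not Eve’s actual software. The compound ‘fingerprints’, the simulated assay and the similarity-based ranking below are invented stand-ins for the real chemistry and machine learning.

```python
import random

random.seed(0)

# Hypothetical compound library: each compound gets a short binary "fingerprint".
def fingerprint():
    return tuple(random.randint(0, 1) for _ in range(8))

LIBRARY = {f"cpd{i}": fingerprint() for i in range(500)}

# Hidden ground truth used only by the simulated assay (unknown to the "robot").
TARGET = fingerprint()

def assay(fp, noise=0.05):
    # A compound is "active" when its fingerprint is close to the target;
    # the assay occasionally reports a false positive or false negative.
    active = sum(a == b for a, b in zip(fp, TARGET)) >= 7
    return active != (random.random() < noise)

def similarity(fp_a, fp_b):
    return sum(a == b for a, b in zip(fp_a, fp_b))

# 1) Screen a random subset of the library (the cheap first pass).
subset = random.sample(list(LIBRARY), 100)
hits = [name for name in subset if assay(LIBRARY[name])]

# 2) Re-test each hit several times to weed out false positives.
confirmed = [name for name in hits
             if sum(assay(LIBRARY[name]) for _ in range(3)) >= 2]

# 3) "Learn" from the confirmed hits and rank the untested compounds;
#    here, crudely, by similarity to the nearest confirmed hit.
untested = [name for name in LIBRARY if name not in subset]

def predicted_score(name):
    return max((similarity(LIBRARY[name], LIBRARY[h]) for h in confirmed), default=0)

ranked = sorted(untested, key=predicted_score, reverse=True)
print("confirmed hits:", confirmed)
print("next compounds to test:", ranked[:5])
```

Step 3 is where the intelligence comes in: instead of grinding through the whole library, what is learned from the early hits steers the next round of testing, which is the economy the researchers are after.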

Here’s a link to and a citation for the paper,

Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases by Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N. Soldatova, Kurt De Grave, Jan Ramon, Michaela de Clare, Worachart Sirawaraporn, Stephen G. Oliver, and Ross D. King. Journal of the Royal Society Interface March 2015 Volume: 12 Issue: 104 DOI: 10.1098/rsif.2014.1289 Published 4 February 2015

This paper is open access.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew Research report and the responses to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, in a Jan. 31, 2015 (?) article for Beta News, Manish Singh has written about some of his concerns as a writer who could be displaced (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software. Narrative Science, a Chicago-based company offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists.  C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic which some are suggesting is superior to human art critics, as described in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.

Art project (autonomous bot purchases illegal goods) seized by Swiss law enforcement

Having just attended a talk on Robotics and Rehabilitation which included a segment on Robo Ethics, news of an art project where an autonomous bot (robot) is set loose on the darknet to purchase goods (not all of them illegal) was fascinating in itself (it was part of an art exhibition which also displayed the proceeds of the darknet activity). But things got more interesting when the exhibit attracted legal scrutiny in the UK and occasioned legal action in Switzerland.

Here’s more from a Jan. 23, 2015 article by Mike Masnick for Techdirt (Note: A link has been removed),

… some London-based Swiss artists, !Mediengruppe Bitnik [(Carmen Weisskopf and Domagoj Smoljo)], presented an exhibition in Zurich of The Darknet: From Memes to Onionland. Specifically, they had programmed a bot with some Bitcoin to randomly buy $100 worth of things each week via a darknet market, like Silk Road (in this case, it was actually Agora). The artists’ focus was more about the nature of dark markets, and whether or not it makes sense to make them illegal:

The pair see parallels between copyright law and drug laws: “You can enforce laws, but what does that mean for society? Trading is something people have always done without regulation, but today it is regulated,” says ays [sic] Weiskopff.

“There have always been darkmarkets in cities, online or offline. These questions need to be explored. But what systems do we have to explore them in? Post Snowden, space for free-thinking online has become limited, and offline is not a lot better.”

Interestingly, the bot got excellent service, as Mike Power wrote in his Dec. 5, 2014 review of the show. Power also highlights some of the legal, ethical, and moral implications,

The gallery is next door to a police station, but the artists say they are not afraid of legal repercussions of their bot buying illegal goods.

“We are the legal owner of the drugs [the bot purchased 10 ecstasy pills along with a baseball cap and a pair of sneakers/runners/trainers, among other items] – we are responsible for everything the bot does, as we executed the code,” says Smoljo. “But our lawyer and the Swiss constitution says art in the public interest is allowed to be free.”

The project also aims to explore the ways that trust is built between anonymous participants in a commercial transaction for possibly illegal goods. Perhaps most surprisingly, not one of the 12 deals the robot has made has ended in a scam.

“The markets copied procedures from Amazon and eBay – their rating and feedback system is so interesting,” adds Smojlo. “With such simple tools you can gain trust. The service level was impressive – we had 12 items and everything arrived.”

“There has been no scam, no rip-off, nothing,” says Weiskopff. “One guy could not deliver a handbag the bot ordered, but he then returned the bitcoins to us.”

The exhibition, scheduled from Oct. 18, 2014 to Jan. 11, 2015, enjoyed an uninterrupted run, but there were concerns in the UK (from the Power article),

A spokesman for the National Crime Agency, which incorporates the National Cyber Crime Unit, was less philosophical, acknowledging that the question of criminal culpability in the case of a randomised software agent making a purchase of an illegal drug was “very unusual”.

“If the purchase is made in Switzerland, then it’s of course potentially subject to Swiss law, on which we couldn’t comment,” said the NCA. “In the UK, it’s obviously illegal to purchase a prohibited drug (such as ecstasy), but any criminal liability would need to assessed on a case-by-case basis.”

Masnick describes the followup,

Apparently, that [case-by-case] assessment has concluded in this case, because right after the exhibit closed in Switzerland, law enforcement showed up to seize stuff …

!Mediengruppe Bitnik  issued a Jan. 15, 2015 press release (Note: Links have been removed),

«Can a robot, or a piece of software, be jailed if it commits a crime? Where does legal culpability lie if code is criminal by design or default? What if a robot buys drugs, weapons, or hacking equipment and has them sent to you, and police intercept the package?» These are some of the questions Mike Power asked when he reviewed the work «Random Darknet Shopper» in The Guardian. The work was part of the exhibition «The Darknet – From Memes to Onionland. An Exploration» in the Kunst Halle St. Gallen, which closed on Sunday, January 11, 2015. For the duration of the exhibition, !Mediengruppe Bitnik sent a software bot on a shopping spree in the Deepweb. Random Darknet Shopper had a budget of $100 in Bitcoins weekly, which it spent on a randomly chosen item from the deepweb shop Agora. The work and the exhibition received wide attention from the public and the press. The exhibition was well-attended and was discussed in a wide range of local and international press from Saiten to Vice, Arte, Libération, CNN, Forbes. «There’s just one problem», The Washington Post wrote in January about the work, «recently, it bought 10 ecstasy pills».

What does it mean for a society, when there are robots which act autonomously? Who is liable, when a robot breaks the law on its own initiative? These were some of the main questions the work Random Darknet Shopper posed. Global questions, which will now be negotiated locally.

On the morning of January 12, the day after the three-month exhibition was closed, the public prosecutor’s office of St. Gallen seized and sealed our work. It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited by destroying them. This is what we know at present. We believe that the confiscation is an unjustified intervention into freedom of art. We’d also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced, that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.

This project brings to mind Isaac Asimov’s three laws of robotics and a question (from the Wikipedia entry; Note: Links have been removed),

The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Here’s my question: how do you programme a robot to know what would injure a human being? For example, if a human ingests an ecstasy pill the bot purchased, would that be covered by the First Law?

Getting back to the robot ethics talk I recently attended, it was given by Ajung Moon (Ph.D. student at the University of British Columbia [Vancouver, Canada] studying Human-Robot Interaction and Roboethics; mechatronics engineer with a sprinkle of Philosophy background). She has a blog, Roboethic info DataBase, where you can read more on robots and ethics.

I strongly recommend reading both Masnick’s post (he positions this action in a larger context) and Power’s article (more details and images from the exhibit).

Mind-controlled prostheses ready for real world activities

There’s some exciting news from Sweden’s Chalmers University of Technology about prosthetics. From an Oct. 8, 2014 news item on ScienceDaily,

For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.

In January 2013 a Swedish arm amputee was the first person in the world to receive a prosthesis with a direct connection to bone, nerves and muscles. …

An Oct. 8, 2014 Chalmers University press release (also on EurekAlert), which originated the news item, provides more details about the research and this ‘real world’ prosthetic device,

“Going beyond the lab to allow the patient to face real-world challenges is the main contribution of this work,” says Max Ortiz Catalan, research scientist at Chalmers University of Technology and leading author of the publication.

“We have used osseointegration to create a long-term stable fusion between man and machine, where we have integrated them at different levels. The artificial arm is directly attached to the skeleton, thus providing mechanical stability. Then the human’s biological control system, that is nerves and muscles, is also interfaced to the machine’s control system via neuromuscular electrodes. This creates an intimate union between the body and the machine; between biology and mechatronics.”

The direct skeletal attachment is created by what is known as osseointegration, a technology in limb prostheses pioneered by associate professor Rickard Brånemark and his colleagues at Sahlgrenska University Hospital. Rickard Brånemark led the surgical implantation and collaborated closely with Max Ortiz Catalan and Professor Bo Håkansson at Chalmers University of Technology on this project.

The patient’s arm was amputated over ten years ago. Before the surgery, his prosthesis was controlled via electrodes placed over the skin. Robotic prostheses can be very advanced, but such a control system makes them unreliable and limits their functionality, and patients commonly reject them as a result.

Now, the patient has been given a control system that is directly connected to his own. He has a physically challenging job as a truck driver in northern Sweden, and since the surgery he has experienced that he can cope with all the situations he faces; everything from clamping his trailer load and operating machinery, to unpacking eggs and tying his children’s skates, regardless of the environmental conditions (read more about the benefits of the new technology below).

The patient is also one of the first in the world to take part in an effort to achieve long-term sensation via the prosthesis. Because the implant is a bidirectional interface, it can also be used to send signals in the opposite direction – from the prosthetic arm to the brain. This is the researchers’ next step, to clinically implement their findings on sensory feedback.

“Reliable communication between the prosthesis and the body has been the missing link for the clinical implementation of neural control and sensory feedback, and this is now in place,” says Max Ortiz Catalan. “So far we have shown that the patient has a long-term stable ability to perceive touch in different locations in the missing hand. Intuitive sensory feedback and control are crucial for interacting with the environment, for example to reliably hold an object despite disturbances or uncertainty. Today, no patient walks around with a prosthesis that provides such information, but we are working towards changing that in the very short term.”

The researchers plan to treat more patients with the novel technology later this year.

“We see this technology as an important step towards more natural control of artificial limbs,” says Max Ortiz Catalan. “It is the missing link for allowing sophisticated neural interfaces to control sophisticated prostheses. So far, this has only been possible in short experiments within controlled environments.”

The researchers have provided an image of the patient using his prosthetic arm in the context of his work as a truck driver,

[downloaded from http://www.chalmers.se/en/news/Pages/Mind-controlled-prosthetic-arms-that-work-in-daily-life-are-now-a-reality.aspx]

The news release offers some additional information about the device,

The new technology is based on the OPRA treatment (osseointegrated prosthesis for the rehabilitation of amputees), where a titanium implant is surgically inserted into the bone and becomes fixated to it by a process known as osseointegration (Osseo = bone). A percutaneous component (abutment) is then attached to the titanium implant to serve as a metallic bone extension, where the prosthesis is then fixated. Electrodes are implanted in nerves and muscles as the interfaces to the biological control system. These electrodes record signals which are transmitted via the osseointegrated implant to the prostheses, where the signals are finally decoded and translated into motions.

There are also some videos of the patient demonstrating various aspects of this device available here (keep scrolling) along with more details about what makes this device so special.

Here’s a link to and a citation for the research paper,

An osseointegrated human-machine gateway for long-term sensory feedback and motor control of artificial limbs by Max Ortiz-Catalan, Bo Håkansson, and Rickard Brånemark. Sci. Transl. Med. 8 October 2014: Vol. 6, Issue 257, p. 257re6 DOI: 10.1126/scitranslmed.3008933

This article is behind a paywall and it appears to be part of a special issue or a special section in an issue, so keep scrolling down the linked-to page to find more articles on this topic.

I have written about similar research in the past. Notably, there’s a July 19, 2011 post about work on Intraosseous Transcutaneous Amputation Prosthesis (ITAP) and a May 17, 2012 post featuring a video of a woman reaching with a robotic arm for a cup of coffee using her thoughts alone to control the arm.

Robo Brain: a new robot learning project

Having covered the RoboEarth project (a European Union-funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and most recently in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
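
As a rough illustration of the nodes-and-edges description above, here is a tiny Python sketch of a knowledge graph a robot could query, with a confidence value attached to each edge. The entries and confidences are invented for illustration; the real Robo Brain is a vastly larger probabilistic graph built from images, video and text.

```python
# Hypothetical miniature knowledge graph: each edge is
# (subject, relation, object, confidence), loosely echoing the description above.
EDGES = [
    ("coffee_mug", "is_a", "container", 0.97),
    ("coffee_mug", "grasp_by", "handle", 0.92),
    ("coffee_mug", "carry", "upright_when_full", 0.88),
    ("easy_chair", "is_a", "chair", 0.95),
    ("chair", "is_a", "furniture", 0.99),
    ("sitting", "can_be_done_on", "chair", 0.96),
    ("sitting", "can_be_done_on", "lawn", 0.70),
]

def query(subject, min_confidence=0.8):
    """Return every fact about `subject` above the confidence threshold."""
    return [(rel, obj, conf) for subj, rel, obj, conf in EDGES
            if subj == subject and conf >= min_confidence]

def is_a_chain(subject, target, min_confidence=0.8):
    """Follow 'is_a' edges upward, e.g. easy_chair -> chair -> furniture."""
    seen, frontier = set(), {subject}
    while frontier:
        if target in frontier:
            return True
        seen |= frontier
        frontier = {obj for subj, rel, obj, conf in EDGES
                    if subj in frontier and rel == "is_a"
                    and conf >= min_confidence and obj not in seen}
    return False

print(query("coffee_mug"))                    # what the robot "knows" about mugs
print(is_a_chain("easy_chair", "furniture"))  # True: climb the class hierarchy
```

The confidence values stand in for the probabilities the news release mentions: a query succeeds only when the chain of facts it needs stays within the allowed limits.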

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language, but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of those times when the funding sources (the US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Mothbots (cyborg moths)

Apparently the big picture could involve search and rescue applications; meanwhile, the smaller picture shows attempts to create a cyborg moth (mothbot). From an Aug. 20, 2014 news item on ScienceDaily,

North Carolina State University [US] researchers have developed methods for electronically manipulating the flight muscles of moths and for monitoring the electrical signals moths use to control those muscles. The work opens the door to the development of remotely-controlled moths, or “biobots,” for use in emergency response.

“In the big picture, we want to know whether we can control the movement of moths for use in applications such as search and rescue operations,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “The idea would be to attach sensors to moths in order to create a flexible, aerial sensor network that can identify survivors or public health hazards in the wake of a disaster.”

An Aug. 20, 2014 North Carolina State University news release (also on EurekAlert), which originated the news item, provides more details,

The paper presents a technique Bozkurt developed for attaching electrodes to a moth during its pupal stage, when the caterpillar is in a cocoon undergoing metamorphosis into its winged adult stage. This aspect of the work was done in conjunction with Dr. Amit Lal of Cornell University.

But the new findings in the paper involve methods developed by Bozkurt’s research team for improving our understanding of precisely how a moth coordinates its muscles during flight.

By attaching electrodes to the muscle groups responsible for a moth’s flight, Bozkurt’s team is able to monitor electromyographic signals – the electric signals the moth uses during flight to tell those muscles what to do.

The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets. A short video describing the work is available at http://www.youtube.com/watch?v=jR325RHPK8o.

“By watching how the moth uses its wings to steer while in flight, and matching those movements with their corresponding electromyographic signals, we’re getting a much better understanding of how moths maneuver through the air,” Bozkurt says.

“We’re optimistic that this information will help us develop technologies to remotely control the movements of moths in flight,” Bozkurt says. “That’s essential to the overarching goal of creating biobots that can be part of a cyberphysical sensor network.”

But Bozkurt stresses that there’s a lot of work yet to be done to make moth biobots a viable tool.

“We now have a platform for collecting data about flight coordination,” Bozkurt says. “Next steps include developing an automated system to explore and fine-tune parameters for controlling moth flight, further miniaturizing the technology, and testing the technology in free-flying moths.”

Here’s an image illustrating the researchers’ work,

Caption: The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets.
Credit: Alper Bozkurt

I was expecting to find this research had been funded by the US military but that doesn’t seem to be the case according to the university news release,

… The research was supported by the National Science Foundation, under grant CNS-1239243. The researchers also used transmitters and receivers developed by Triangle Biosystems International and thank them for their contribution to the work.

For the curious, here’s a link to and a citation for the text and the full video,

Early Metamorphic Insertion Technology for Insect Flight Behavior Monitoring by Alexander Verderber, Michael McKnight, and Alper Bozkurt. J. Vis. Exp. (89), e50901, doi:10.3791/50901 (2014)

This material is behind a paywall.