Category Archives: robots

Are they just computer games or are we in a race with technology?

This story poses some interesting questions that touch on the uneasiness being felt as computers get ‘smarter’. From an April 13, 2016 news item on ScienceDaily,

Philosopher René Descartes’ saying about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s algorithm AlphaGo beat a professional player in the game Go–an achievement demonstrating the explosive speed of development in machine capabilities.

An April 13, 2016 Aarhus University press release (also on EurekAlert) by Rasmus Rørbæk, which originated the news item, further develops the point,

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the journal Nature.

“It may sound dramatic, but we are currently in a race with technology — and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine,” explains Jacob Sherson.

At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer’s enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers works to transfer some human traits to the way computer algorithms work.

Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER — combining the processing power of computers with human ingenuity — becomes clear.

Our common intuition

Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game “Quantum Moves.” Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics.

“The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions,” explains Jacob Sherson.

The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers — the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm.

“The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players’ solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense we are downloading our common intuition to the computer,” says Jacob Sherson.

And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.

After the buildup, the press release focuses on citizen science and computer games,

Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materialising.

In recent years, a new phenomenon has appeared–citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science.

“Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research,” explains Jacob Sherson.

The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We ‘guess’ based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition.

“We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition,” says Jacob Sherson and continues:

“Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years.”
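As an aside for the technically inclined: the hybrid strategy described above, seeding a numerical optimizer with human players’ solutions instead of random guesses, can be sketched in a few lines of Python. Everything here is an invented stand-in (the toy cost function is a rugged landscape, not the actual Quantum Moves physics), but it shows why a smooth, ‘intuitive’ starting path can land a local optimizer in a better basin than a random one:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a quantum-control cost: penalize abrupt moves
# (which would 'spill' the atom) and missing the target position.
# The cosine term adds many local minima, so the starting guess matters.
def cost(path, target=1.0):
    smoothness = np.sum(np.diff(path) ** 2)
    ruggedness = np.sum(np.cos(12 * np.pi * path))
    miss = (path[-1] - target) ** 2
    return smoothness + 0.1 * ruggedness + 10.0 * miss

n_steps = 50
rng = np.random.default_rng(0)

# Machine-only strategy: start the local optimizer from a random path.
random_seed = rng.uniform(0.0, 1.0, n_steps)

# Hybrid strategy: start from a smooth, player-style path
# (hypothetical; the real seeds came from Quantum Moves solutions).
player_seed = np.linspace(0.0, 1.0, n_steps)

for name, seed in [("random seed", random_seed),
                   ("player seed", player_seed)]:
    result = minimize(cost, seed, method="L-BFGS-B")
    print(f"{name}: optimized cost = {result.fun:.3f}")
```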

Here’s a link to and a citation for the paper,

Exploring the quantum speed limit with computer games by Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, & Jacob F. Sherson. Nature 532, 210–213 (14 April 2016). DOI: 10.1038/nature17620 Published online 13 April 2016

This paper is behind a paywall.

What robots and humans?

I have two robot news bits for this posting. The first probes the unease currently being expressed (pop culture movies, Stephen Hawking, the Cambridge Centre for Existential Risk, etc.) about robots and their increasing intelligence and increased use in all types of labour formerly and currently performed by humans. The second item is about a research project where ‘artificial agents’ (robots) are being taught human values with stories.

Human labour obsolete?

‘When machines can do any job, what will humans do?’ is the question being asked in a presentation by Rice University computer scientist, Moshe Vardi, for the American Association for the Advancement of Science (AAAS) annual meeting held in Washington, D.C. from Feb. 11 – 15, 2016.

Here’s more about Dr. Vardi’s provocative question from a Feb. 14, 2016 Rice University news release (also on EurekAlert),

Rice University computer scientist Moshe Vardi expects that within 30 years, machines will be capable of doing almost any job that a human can. In anticipation, he is asking his colleagues to consider the societal implications. Can the global economy adapt to greater than 50 percent unemployment? Will those out of work be content to live a life of leisure?

“We are approaching a time when machines will be able to outperform humans at almost any task,” Vardi said. “I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?”

Vardi addressed this issue Sunday [Feb. 14, 2016] in a presentation titled “Smart Robots and Their Impact on Society” at one of the world’s largest and most prestigious scientific meetings — the annual meeting of the American Association for the Advancement of Science in Washington, D.C.

“The question I want to put forward is, Does the technology we are developing ultimately benefit mankind?” Vardi said. He asked the question after presenting a body of evidence suggesting that the pace of advancement in the field of artificial intelligence (AI) is increasing, even as existing robotic and AI technologies are eliminating a growing number of middle-class jobs and thereby driving up income inequality.

Vardi, a member of both the National Academy of Engineering and the National Academy of Science, is a Distinguished Service Professor and the Karen Ostrum George Professor of Computational Engineering at Rice, where he also directs Rice’s Ken Kennedy Institute for Information Technology. Since 2008 he has served as the editor-in-chief of Communications of the ACM, the flagship publication of the Association for Computing Machinery (ACM), one of the world’s largest computational professional societies.

Vardi said some people believe that future advances in automation will ultimately benefit humans, just as automation has benefited society since the dawn of the industrial age.

“A typical answer is that if machines will do all our work, we will be free to pursue leisure activities,” Vardi said. But even if the world economic system could be restructured to enable billions of people to live lives of leisure, Vardi questioned whether it would benefit humanity.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being,” he said.

“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘In the sweat of thy face shalt thou eat bread,’” Vardi said. “We need to rise to the occasion and meet this challenge” before human labor becomes obsolete, he said.

In addition to dual membership in the National Academies, Vardi is a Guggenheim fellow and a member of the American Academy of Arts and Sciences, the European Academy of Sciences and the Academia Europaea. He is a fellow of the ACM, the American Association for Artificial Intelligence and the Institute of Electrical and Electronics Engineers (IEEE). His numerous honors include the Southeastern Universities Research Association’s 2013 Distinguished Scientist Award, the 2011 IEEE Computer Society Harry H. Goode Award, the 2008 ACM Presidential Award, the 2008 Blaise Pascal Medal for Computer Science from the European Academy of Sciences and the 2000 Gödel Prize for outstanding papers in the area of theoretical computer science.

Vardi joined Rice’s faculty in 1993. His research centers upon the application of logic to computer science, database systems, complexity theory, multi-agent systems and specification and verification of hardware and software. He is the author or co-author of more than 500 technical articles and of two books, “Reasoning About Knowledge” and “Finite Model Theory and Its Applications.”

In a Feb. 5, 2015 post, I rounded up a number of articles about our robot future. It provides a still useful overview of the thinking on the topic.

Teaching human values with stories

A Feb. 12, 2016 Georgia (US) Institute of Technology (Georgia Tech) news release (also on EurekAlert) describes the research,

The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI [Association for the Advancement of Artificial Intelligence]-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research — the Scheherazade system — which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.

For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists; or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.

Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced: uncover all possible steps in a given scenario, map them into a plot trajectory tree, and let the robotic agent use that tree to make “plot choices” (akin to what humans might remember as a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on its choices.

The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”
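For readers who want to see the mechanics, the reward-shaping idea can be reduced to a small sketch: a plot graph of socially acceptable event orderings, and a reward function that pays the agent for protagonist-like transitions and punishes everything else. This is a minimal, hypothetical illustration (the event names and graph are invented; in the real system the orderings come from Scheherazade’s crowdsourced stories):

```python
# Toy 'plot graph': socially acceptable orderings of events. In the
# real system these orderings are learned from crowdsourced stories;
# all event names here are invented for illustration.
PLOT_GRAPH = {
    "start":            {"enter_pharmacy"},
    "enter_pharmacy":   {"wait_in_line"},
    "wait_in_line":     {"pay_for_medicine"},
    "pay_for_medicine": {"leave"},
}

def reward(previous_event: str, event: str) -> float:
    """Reward protagonist-like transitions, punish everything else."""
    return 1.0 if event in PLOT_GRAPH.get(previous_event, set()) else -1.0

def score_trajectory(events) -> float:
    total, prev = 0.0, "start"
    for event in events:
        total += reward(prev, event)
        prev = event
    return total

# The polite plan out-scores the fast-but-antisocial one.
print(score_trajectory(["enter_pharmacy", "wait_in_line",
                        "pay_for_medicine", "leave"]))    # 4.0
print(score_trajectory(["enter_pharmacy", "grab_medicine",
                        "run"]))                          # -1.0
```

In the actual system this signal feeds trial-and-error (reinforcement) learning rather than a one-shot score, but the shaping principle is the same.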

So there you have it, some food for thought.

Science events (Einstein, getting research to patients, sleep, and art/science) in Vancouver (Canada), Jan. 23 – 28, 2016

There are five upcoming science events in seven days (Jan. 23 – 28, 2016) in the Vancouver area.

Einstein Centenary Series

The first is a Saturday morning, Jan. 23, 2016 lecture, the first for 2016 in a joint TRIUMF (Canada’s national laboratory for particle and nuclear physics), UBC (University of British Columbia), and SFU (Simon Fraser University) series featuring Einstein’s work and its implications. From the event brochure (pdf), which lists the entire series,

TRIUMF, UBC and SFU are proud to present the 2015-2016 Saturday morning lecture series on the frontiers of modern physics. These free lectures are at a level appropriate for high school students and members of the general public.

Parallel lecture series will be held at TRIUMF on the UBC South Campus, and at SFU Surrey Campus.

Lectures start at 10:00 am and 11:10 am. Parking is available.

For information, registration and directions, see:
http://www.triumf.ca/saturday-lectures

January 23, 2016 TRIUMF Auditorium (UBC, Vancouver)
1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)

January 30, 2016 SFU Surrey Room 2740 (SFU, Surrey Campus)

1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)

I believe these lectures are free. One more note, they will be capping off this series with a special lecture by Kip Thorne (astrophysicist and consultant for the movie Interstellar) at Science World, on Thursday, April 14, 2016. More about that closer to the date.

Café Scientifique

On Tuesday, January 26, 2016 at 7:30 pm in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), Café Scientifique will be hosting a talk about science and serving patients (from the Jan. 5, 2016 announcement),

Our speakers for the evening will be Dr. Millan Patel and Dr. Shirin Kalyan.  The title of their talk is:

Helping Science to Serve Patients

Science in general and biotechnology in particular are auto-catalytic. That is, they catalyze their own evolution and so generate breakthroughs at an exponentially increasing rate.  The experience of patients is not exponentially getting better, however.  This talk, with a medical geneticist and an immunologist who believe science can deliver far more for patients, will focus on structural and cultural impediments in our system and ways they and others have developed to either lower or leapfrog the barriers. We hope to engage the audience in a highly interactive discussion to share thoughts and perspectives on this important issue.

There is additional information about Dr. Millan Patel here and Dr. Shirin Kalyan here. It would appear both speakers are researchers and academics. While I welcome the emphasis on the patient and the acknowledgement that medical research benefits are not being delivered to patients in quantity or quality, it seems odd that they don’t have a clinician (a doctor who deals almost exclusively with patients, as opposed to two researchers) to add to their perspective.

You may want to take a look at my Jan. 22, 2016 ‘open science’ and Montreal Neurological Institute posting for a look at how researchers there are responding to the issue.

Curiosity Collider

This is an art/science event from an organization that sprang into existence sometime during summer 2015 (see my July 7, 2015 posting featuring Curiosity Collider).

When: 8:00pm on Wednesday, January 27, 2016. Door opens at 7:30pm.
Where: Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).
Cost: $5.00 cover (sliding scale) at the door. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events.

Part I. Speakers

Part II. Open Mic

  • 90 seconds to share your art-science ideas. Think they are “ridiculous”? Well, we think they could be ridiculously awesome – we are looking for creative ideas!
  • Don’t have an idea (yet)? Contribute by sharing your expertise.
  • Chat with other art-science enthusiasts, strike up a conversation to collaborate, all disciplines/backgrounds welcome.
  • Want to showcase your project in the future? Participate in our fall art-science competition (more to come)!

Follow updates on twitter via @ccollider or #CollideConquer

Good luck on the open mic (should you have a project)!

Brain Talks

This particular Brain Talk event is taking place at Vancouver General Hospital (VGH; there is also another Brain Talks series which takes place at the University of British Columbia). Yes, members of the public can attend the VGH version; they didn’t throw me out the last time I was there. Here’s more about the next VGH Brain Talks,

Sleep: biological & pathological perspectives

Thursday, Jan 28, 6:00pm @ Paetzold Auditorium, Vancouver General Hospital

Speakers:

Peter Hamilton, Sleep technician ~ Sleep Architecture

Dr. Robert Comey, MD ~ Sleep Disorders

Dr. Maia Love, MD ~ Circadian Rhythms

Panel discussion and wine and cheese reception to follow!

Please RSVP here

You may want to keep in mind that the event is organized by people who don’t organize events often. Nice people but you may need to search for crackers for your cheese and your wine comes out of a box (and I think it might have been self-serve the time I attended).

What a fabulous week we have ahead of us—Happy Weekend!

Exceeding the sensitivity of skin with a graphene elastomer

A Jan. 14, 2016 news item on Nanowerk announces the latest in ‘sensitive’ skin,

A new sponge-like material, discovered by Monash [Monash University in Australia] researchers, could have diverse and valuable real-life applications. The new elastomer could be used to create soft, tactile robots to help care for elderly people, perform remote surgical procedures or build highly sensitive prosthetic hands.

Graphene-based cellular elastomer, or G-elastomer, is highly sensitive to pressure and vibrations. Unlike other viscoelastic substances such as polyurethane foam or rubber, G-elastomer bounces back extremely quickly under pressure, despite its exceptionally soft nature. This unique, dynamic response has never been found in existing soft materials, and has excited and intrigued researchers Professor Dan Li and Dr Ling Qiu from the Monash Centre for Atomically Thin Materials (MCATM).

A Jan. 14, 2016 Monash University media release, which originated the news item, offers some insights from the researchers,

According to Dr Qiu, “This graphene elastomer is a flexible, ultra-light material which can detect pressures and vibrations across a broad bandwidth of frequencies. It far exceeds the response range of our skin, and it also has a very fast response time, much faster than conventional polymer elastomer.

“Although we often take it for granted, the pressure sensors in our skin allow us to do things like hold a cup without dropping it, crushing it, or spilling the contents. The sensitivity and response time of G-elastomer could allow a prosthetic hand or a robot to be even more dexterous than a human, while the flexibility could allow us to create next generation flexible electronic devices,” he said.

Professor Li, a director of MCATM, said, ‘Although we are still in the early stages of discovering graphene’s potential, this research is an excellent breakthrough. What we do know is that graphene could have a huge impact on Australia’s economy, both from a resources and innovation perspective, and we’re aiming to be at the forefront of that research and development.’

Dr Qiu’s research has been published in the latest edition of the prestigious journal Advanced Materials and is protected by a suite of patents.

Are they trying to protect the work from competition or from wholesale theft?

After all, the idea behind patents and copyrights was to encourage innovation and competition by ensuring that inventors and creators would benefit from their work. An example that comes to mind is the Xerox company which for many years had a monopoly on photocopy machines by virtue of their patent. Once the patent ran out (patents and copyrights were originally intended to be in place for finite time periods) and Xerox had made much, much money, competitors were free to create and market their own photocopy machines, which they did quite promptly. Since those days, companies have worked to extend patent and copyright time periods in efforts to stifle competition.

Getting back to Monash, I do hope the researchers are able to benefit from their work and wish them well. I also hope that they enjoy plenty of healthy competition spurring them onto greater innovation.

Here’s a link to and a citation for their paper,

Ultrafast Dynamic Piezoresistive Response of Graphene-Based Cellular Elastomers by Ling Qiu, M. Bulut Coskun, Yue Tang, Jefferson Z. Liu, Tuncay Alan, Jie Ding, Van-Tan Truong, and Dan Li. Advanced Materials, Volume 28, Issue 1, January 6, 2016, Pages 194–200. DOI: 10.1002/adma.201503957 First published: 2 November 2015

This paper appears to be open access.

Spermbot alternative for infertility issues

A German team that’s been working with sperm to develop a biological motor has announced it may have an alternative treatment for infertility, according to a Jan. 13, 2016 news item on Nanowerk,

Sperm that don’t swim well [also known as low motility] rank high among the main causes of infertility. To give these cells a boost, women trying to conceive can turn to artificial insemination or other assisted reproduction techniques, but success can be elusive. In an attempt to improve these odds, scientists have developed motorized “spermbots” that can deliver poor swimmers — that are otherwise healthy — to an egg. …

A Jan. 13, 2016 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, expands on the theme,

Artificial insemination is a relatively inexpensive and simple technique that involves introducing sperm to a woman’s uterus with a medical instrument. Overall, the success rate is on average under 30 percent, according to the Human Fertilisation & Embryology Authority of the United Kingdom. In vitro fertilization can be more effective, but it’s a complicated and expensive process. It requires removing eggs from a woman’s ovaries with a needle, fertilizing them outside the body and then transferring the embryos to her uterus or a surrogate’s a few days later. Each step comes with a risk for failure. Mariana Medina-Sánchez, Lukas Schwarz, Oliver G. Schmidt and colleagues from the Institute for Integrative Nanosciences at IFW Dresden in Germany wanted to see if they could come up with a better option than the existing methods.

Building on previous work on micromotors, the researchers constructed tiny metal helices just large enough to fit around the tail of a sperm. Their movements can be controlled by a rotating magnetic field. Lab testing showed that the motors can be directed to slip around a sperm cell, drive it to an egg for potential fertilization and then release it. The researchers say that although much more work needs to be done before their technique can reach clinical testing, the success of their initial demonstration is a promising start.

For those who prefer to watch their news, there’s this,


This team got a flurry of interest in 2014 when they first announced their research on using sperm as a biological motor. Tracy Staedter, in a Jan. 15, 2014 article for Discovery.com, describes their results at the time,

To create these tiny robots, the scientists first had to catch a few. First, they designed microtubes, which are essentially thin sheets of titanium and iron — which have a magnetic property — rolled into conical tubes, with one end wider than the other. Next, they put the microtubes into a solution in a Petri dish and added bovine sperm cells, which are similar in size to human sperm. When a live sperm entered the wider end of the tube, it became trapped down near the narrow end. The scientists also closed the wider end, so the sperm wouldn’t swim out. And because sperm are so determined, the trapped cell pushed against the tube, moving it forward.

Next, the scientists used a magnetic field to guide the tube in the direction they wanted it to go, relying on the sperm for the propulsion.

The quick-swimming spermbots could be controlled from outside a person’s body to deliver payloads of drugs, and even sperm itself, to parts of the body where they’re needed, whether that’s a cancer tumor or an egg.

This work isn’t nanotechnology per se but it has been published in the ACS journal Nano Letters. Here’s a link to and a citation for the paper,

Cellular Cargo Delivery: Toward Assisted Fertilization by Sperm-Carrying Micromotors by Mariana Medina-Sánchez, Lukas Schwarz, Anne K. Meyer, Franziska Hebenstreit, and Oliver G. Schmidt. Nano Lett., 2016, 16 (1), pp 555–561 DOI: 10.1021/acs.nanolett.5b04221 Publication Date (Web): December 21, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

KAIST (Korea Advanced Institute of Science and Technology) will lead an Ideas Lab at 2016 World Economic Forum

The theme for the 2016 World Economic Forum (WEF) is ‘Mastering the Fourth Industrial Revolution’. I’m losing track of how many industrial revolutions we’ve had and this seems like a vague theme. However, there is enlightenment to be had in this Nov. 17, 2015 Korea Advanced Institute of Science and Technology (KAIST) news release on EurekAlert,

KAIST researchers will lead an IdeasLab on biotechnology for an aging society while HUBO, the winner of the 2015 DARPA Robotics Challenge, will interact with the forum participants, offering an experience of state-of-the-art robotics technology

Moving on from the news release’s subtitle, there’s more enlightenment,

Representatives from the Korea Advanced Institute of Science and Technology (KAIST) will attend the 2016 Annual Meeting of the World Economic Forum to run an IdeasLab and showcase its humanoid robot.

With over 2,500 leaders from business, government, international organizations, civil society, academia, media, and the arts expected to participate, the 2016 Annual Meeting will take place on Jan. 20-23, 2016 in Davos-Klosters, Switzerland. Under the theme of ‘Mastering the Fourth Industrial Revolution,’ [emphasis mine] global leaders will discuss the period of digital transformation [emphasis mine] that will have profound effects on economies, societies, and human behavior.

President Sung-Mo Steve Kang of KAIST will join the Global University Leaders Forum (GULF), a high-level academic meeting to foster collaboration among experts on issues of global concern for the future of higher education and the role of science in society. He will discuss how the emerging revolution in technology will affect the way universities operate and serve society. KAIST is the only Korean university participating in GULF, which is composed of prestigious universities invited from around the world.

Four KAIST professors, including Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department, will lead an IdeasLab on ‘Biotechnology for an Aging Society.’

Professor Lee said, “In recent decades, much attention has been paid to the potential effect of the growth of an aging population and problems posed by it. At our IdeasLab, we will introduce some of our research breakthroughs in biotechnology to address the challenges of an aging society.”

In particular, he will present his latest research in systems biotechnology and metabolic engineering. His research has explained the mechanisms of how traditional Oriental medicine works in our bodies by identifying structural similarities between effective compounds in traditional medicine and human metabolites, and has proposed more effective treatments by employing such compounds.

KAIST will also display its networked mobile medical service system, ‘Dr. M.’ Built upon a ubiquitous and mobile Internet, such as the Internet of Things, wearable electronics, and smart homes and vehicles, Dr. M will provide patients with a more affordable and accessible healthcare service.

In addition, Professor Jun-Ho Oh of the Mechanical Engineering Department will showcase his humanoid robot, ‘HUBO,’ during the Annual Meeting. His research team won the International Humanoid Robotics Challenge hosted by the United States Defense Advanced Research Projects Agency (DARPA), which was held in Pomona, California, on June 5-6, 2015. With 24 international teams participating in the finals, HUBO completed all eight tasks in 44 minutes and 28 seconds, 6 minutes earlier than the runner-up, and almost 11 minutes earlier than the third-place team. Team KAIST walked away with the grand prize of USD 2 million.

Professor Oh said, “Robotics technology will grow exponentially in this century, becoming a real driving force to expedite the Fourth Industrial Revolution. I hope HUBO will offer an opportunity to learn about the current advances in robotics technology.”

President Kang pointed out, “KAIST has participated in the Annual Meeting of the World Economic Forum since 2011 and has engaged with a broad spectrum of global leaders through numerous presentations and demonstrations of our excellence in education and research. Next year, we will choreograph our first robotics exhibition on HUBO and present high-tech research results in biotechnology, which, I believe, epitomizes how science and technology breakthroughs in the Fourth Industrial Revolution will shape our future in an unprecedented way.”

Based on what I’m reading in the KAIST news release, I think the conversation about the ‘Fourth revolution’ may veer toward robotics and artificial intelligence (referred to in code as “digital transformation”) as developments in these fields are likely to affect various economies.  Before proceeding with that thought, take a look at this video showcasing HUBO at the DARPA challenge,


I’m quite impressed with how the robot can recalibrate its grasp so it can pick things up and plug an electrical cord into an outlet, and how it knows whether wheels or legs will be needed to complete a task, all due to algorithms which give the robot a type of artificial intelligence. While it may seem more like a machine than anything else, there’s also this version of HUBO,

[Image] Photo by David Hanson, 26 October 2006 (original upload date). Source: Transferred from en.wikipedia to Commons by Mac. Author: Dayofid at English Wikipedia

It’ll be interesting to see whether the researchers make HUBO seem more humanoid by giving it a face for its interactions with WEF attendees. It would be more engaging but also more threatening, since there is increasing concern over robots taking work away from humans, with implications for various economies. There’s more about HUBO in its Wikipedia entry.

As for the IdeasLab, that’s been in place at the WEF since 2009 according to this WEF July 19, 2011 news release announcing an IdeasLab hub (Note: A link has been removed),

The World Economic Forum is publicly launching its biannual interactive IdeasLab hub on 19 July [2011] at 10.00 CEST. The unique IdeasLab hub features short documentary-style, high-definition (HD) videos of preeminent 21st century ideas and critical insights. The hub also provides dynamic Pecha Kucha presentations and visual IdeaScribes that trace and package complex strategic thinking into engaging and powerful images. All videos are HD broadcast quality.

To share the knowledge captured by the IdeasLab sessions, which have been running since 2009, the Forum is publishing 23 of the latest sessions, seen as the global benchmark of collaborative learning and development.

So while you might not be able to visit an IdeasLab presentation at the WEF meetings, you could get a chance to see them later.

Getting back to the robotics and artificial intelligence aspect of the 2016 WEF’s ‘digital’ theme, I noticed some reluctance to discuss how the field of robotics is affecting work and jobs in a broadcast of the Canadian television show, ‘Conversations with Conrad’.

For those unfamiliar with the interviewer, Conrad Black is somewhat infamous in Canada for a number of reasons (from the Conrad Black Wikipedia entry), Note: Links have been removed,

Conrad Moffat Black, Baron Black of Crossharbour, KSG (born 25 August 1944) is a Canadian-born British former newspaper publisher and author. He is a non-affiliated life peer, and a convicted felon in the United States for fraud.[n 1] Black controlled Hollinger International, once the world’s third-largest English-language newspaper empire,[3] which published The Daily Telegraph (UK), Chicago Sun Times (U.S.), The Jerusalem Post (Israel), National Post (Canada), and hundreds of community newspapers in North America, before he was fired by the board of Hollinger in 2004.[4]

In 2004, a shareholder-initiated prosecution of Black began in the United States. Over $80 million in assets were claimed to have been improperly taken or inappropriately spent by Black.[5] He was convicted of three counts of fraud and one count of obstruction of justice in a U.S. court in 2007 and sentenced to six and a half years’ imprisonment. In 2011 two of the charges were overturned on appeal and he was re-sentenced to 42 months in prison on one count of mail fraud and one count of obstruction of justice.[6] Black was released on 4 May 2012.[7]

Despite or perhaps because of his chequered past, he is often a good interviewer and he definitely attracts interesting guests. In an Oct. 26, 2015 programme, he interviewed both former Canadian astronaut, Chris Hadfield, and Canadian-American David Frum who’s currently editor of Atlantic Monthly and a former speechwriter for George W. Bush.

It was Black’s conversation with Frum which surprised me. They discuss robotics without ever once using the word. In a section where Frum notes that manufacturing is returning to the US, he also notes that it doesn’t mean more jobs and cites a newly commissioned plant in the eastern US employing about 40 people where before it would have employed hundreds or thousands. Unfortunately, the video has not been made available as I write this (Nov. 20, 2015) but that situation may change. You can check here.

Final thought, my guess is that economic conditions are fragile and I don’t think anyone wants to set off panic by mentioning robotics and disappearing jobs.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand and include activities reliant on the sense of touch, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenetic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”
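The pulse-frequency coding described above (more pressure, more current, faster pulses) is simple enough to mimic in software. Here’s a minimal sketch with invented constants; the real sensor’s pressure range and response curve are not reproduced here:

```python
import numpy as np

# Toy model of the skin's pressure-to-pulses coding: more pressure
# squeezes the nanotubes together, more current flows, and the signal
# is emitted as a faster train of short pulses. Constants are invented.
def pulse_frequency(pressure_kpa: float, f_max: float = 200.0,
                    p_max: float = 100.0) -> float:
    """Return a pulse rate in Hz for a given static pressure."""
    if pressure_kpa <= 0:
        return 0.0                       # no pressure, no pulses
    return f_max * min(pressure_kpa / p_max, 1.0)

def pulse_train(pressure_kpa: float, duration_s: float = 1.0):
    """Timestamps of evenly spaced pulses over the duration."""
    f = pulse_frequency(pressure_kpa)
    if f == 0.0:
        return np.array([])
    return np.arange(0.0, duration_s, 1.0 / f)

for p in [0, 5, 50, 100]:                # light tap ... firm handshake
    print(f"{p:3d} kPa -> {pulse_frequency(p):5.1f} Hz, "
          f"{len(pulse_train(p))} pulses in 1 s")
```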

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science, 16 October 2015, Vol. 350, no. 6258, pp. 313–316. DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

Informal roundup of robot movies and television programmes and a glimpse into our robot future

David Bruggeman has written an informal series of posts about robot movies. The latest, a June 27, 2015 posting on his Pasco Phronesis blog, highlights the latest Terminator film and opines that the recent interest could be traced back to the rebooted Battlestar Galactica television series (Note: Links have been removed),

I suppose this could be traced back to the reboot of Battlestar Galactica over a decade ago, but robots and androids have become an increasing presence on film and television, particularly in the last 2 years.

In the movies, the new Terminator film comes out next week, and the previews suggest we will see a new generation of killer robots traveling through time and space.  Chappie is now out on your digital medium of choice (and I’ll post about any science fiction science policy/SciFiSciPol once I see it), so you can compare its robot police to those from either edition of Robocop or the 2013 series Almost Human.  Robots also have a role …

The new television series he mentions, Humans (click on About) debuted on the US tv channel, AMC, on Sunday, June 28, 2015 (yesterday).

HUMANS is set in a parallel present, where the latest must-have gadget for any busy family is a Synth – a highly-developed robotic servant, eerily similar to its live counterpart. In the hope of transforming the way his family lives, father Joe Hawkins (Tom Goodman-Hill) purchases a Synth (Gemma Chan) against the wishes of his wife (Katharine Parkinson), only to discover that sharing life with a machine has far-reaching and chilling consequences.

Here’s a bit more information from its Wikipedia entry,

Humans (styled as HUM∀NS) is a British-American science fiction television series, debuted in June 2015 on Channel 4 and AMC.[2] Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines. The series is produced jointly by AMC, Channel 4 and Kudos.[3] The series will consist of eight episodes.[4]

David also wrote about Ex Machina, a recent robot film with artistic ambitions, in an April 26, 2015 posting on his Pasco Phronesis blog,

I finally saw Ex Machina, which recently opened in the United States.  It’s a minimalist film, with few speaking roles and a plot revolving around an intelligence test.  Of the robot movies out this year, it has received the strongest reviews, and it may take home some trophies during the next awards season.  Shot in Norway, the film is both lovely to watch and tricky to engage.  I finished the film not quite sure what the characters were thinking, and perhaps that’s a lesson from the film.

Unlike Chappie and Automata, the intelligent robot at the center of Ex Machina is not out in the world. …

He started the series with a Feb. 8, 2015 posting which previews the movies in his later postings but also includes a couple of others not mentioned in either the April or June posting, Avengers: Age of Ultron and Spare Parts.

It’s interesting to me that these robots are mostly not related to the benign robots in the movie ‘Forbidden Planet’ (a reworking of Shakespeare’s The Tempest in outer space), in ‘Lost in Space’, a 1960s television programme, and in the Jetsons animated tv series of the 1960s. As far as I can tell, not having seen the new movies in question, the only benign robot in the current crop would be ‘Chappie’. It should be mentioned that the ‘Terminator’, in the person of Arnold Schwarzenegger, has, over the course of three or four movies, evolved from a destructive robot bent on evil to a destructive robot working on behalf of good.

I’ll add one more television programme, and I’m not sure if the robot boy is good or evil, but there’s Extant, where Halle Berry’s robot son seems to be in a version of the Pinocchio story (an ersatz child wants to become human), which is enjoying its second season on US television as of July 1, 2015.

Regardless of one or two ‘sweet’ robots, there seems to be a trend toward ominous robots and perhaps, in addition to Battlestar Galactica, the concerns being raised by prominent scientists such as Stephen Hawking and those associated with the Centre for Existential Risk at the University of Cambridge have something to do with this trend and may partially explain why Chappie did not do as well at the box office as hoped. Thematically, it was swimming against the current.

As for a glimpse into the future, there’s this Children’s Hospital of Los Angeles June 29, 2015 news release,

Many hospitals lack the resources and patient volume to employ a round-the-clock, neonatal intensive care specialist to treat their youngest and sickest patients. Telemedicine–with real-time audio and video communication between a neonatal intensive care specialist and a patient–can provide access to this level of care.

A team of neonatologists at Children’s Hospital Los Angeles investigated the use of robot-assisted telemedicine in performing bedside rounds and directing daily care for infants with mild-to-moderate disease. They found no significant differences in patient outcomes when telemedicine was used and noted a high level of parent satisfaction. This is the first published report of using telemedicine for patient rounds in a neonatal intensive care unit (NICU). Results will be published online first on June 29 in the Journal of Telemedicine and Telecare.

Glimpse into the future?

The part I find most fascinating is that there was no difference in outcomes; moreover, the parents’ satisfaction rate was high when robots (telemedicine) were used. Finally, of the families who completed the after care survey (45%), all indicated they would be comfortable with another telemedicine (robot) experience. My comment: should robots prove to be cheaper in the long run and the research results hold as more studies are done, I imagine that hospitals will introduce them as a means of cost cutting.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company, featuring a predictive interactive tool designed by NPR (US National Public Radio), based on data from Oxford University researchers, which estimates how likely it is that your job will be automated (no one knows for sure) (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger if you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
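The news release doesn’t spell out the algorithm, but the general shape of the approach, evolving candidate networks and keeping the ones that better reproduce a database of published experiment outcomes, can be sketched in miniature. This toy uses a plain weight vector as the “network” and four invented experiments; the actual system evolved far richer regulatory models:

```python
import random

random.seed(1)

# Invented stand-ins for published experiments: a 'treatment' vector
# and the head/tail regeneration outcome it produced.
EXPERIMENTS = [([1, 0, 1], "head"), ([0, 1, 1], "tail"),
               ([1, 1, 0], "head"), ([0, 0, 1], "tail")]

def predict(network, treatment):
    signal = sum(w * t for w, t in zip(network, treatment))
    return "head" if signal > 0 else "tail"

def fitness(network):
    """Fraction of published outcomes the candidate reproduces."""
    hits = sum(predict(network, t) == outcome
               for t, outcome in EXPERIMENTS)
    return hits / len(EXPERIMENTS)

def mutate(network):
    child = list(network)
    child[random.randrange(len(child))] += random.gauss(0, 0.5)
    return child

# A bare-bones (1+1) evolution strategy: mutate, keep if no worse.
best = [random.gauss(0, 1) for _ in range(3)]
for _ in range(200):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(f"best network {[round(w, 2) for w in best]} "
      f"explains {fitness(best):.0%} of the experiments")
```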

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology. DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT has come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. Cyberheart can also be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models that can be coupled with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.
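The closed-loop idea at the heart of the project, a virtual device model reacting to a virtual heart model while the simulation checks a safety property, can be sketched in a few lines of Python. Every detail below (the timings, the timeout, the safety bound) is invented for illustration; the project’s actual cardiac models and verification techniques are far more sophisticated:

```python
# Highly simplified sketch of a virtual heart coupled to a virtual
# pacemaker, with a simulation-time check of one safety property.
# All timings and thresholds are invented for illustration only.
HEART_PERIOD_MS = 1200      # toy heart: one intrinsic beat every 1.2 s
PACE_TIMEOUT_MS = 1000      # toy device: pace if no beat within 1 s
SAFE_MIN_INTERVAL_MS = 400  # property: never pace faster than this

def simulate(duration_ms: int) -> list:
    """Run the coupled heart/device loop; return pacing timestamps."""
    last_beat, paces = 0, []
    for t in range(1, duration_ms):
        if t % HEART_PERIOD_MS == 0:             # intrinsic beat arrives
            last_beat = t
        elif t - last_beat >= PACE_TIMEOUT_MS:   # device fires a pulse
            paces.append(t)
            last_beat = t
    return paces

paces = simulate(10_000)
intervals = [b - a for a, b in zip(paces, paces[1:])]
assert all(i >= SAFE_MIN_INTERVAL_MS for i in intervals), "safety violated"
print(f"{len(paces)} paced beats; shortest interval {min(intervals)} ms")
```

In the project’s terms, the payoff is that a flaw (say, a timeout that lets the device pace dangerously fast) can be caught by analyzing the coupled models during the design phase, before any animal or human trial.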

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. Here’s more from my March 19, 2013 post about a poll and synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.