Scientific and technological breakthroughs are more important than ever as key drivers of social, economic, and political change in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms for addressing issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a. the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.
Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.
Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.
In the first session, entitled “What If Drugs Are Printed from the Internet?,” Professor Lee will discuss how advances in biotechnology and 3D printing technology are shaping the future of medicine with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, Director of Strategy at the Wellcome Trust in the United Kingdom. The discussants will note recent developments in the way patients receive their medicine, for example, downloading drugs directly from the internet and producing yeast strains that make opioids for pain treatment through systems metabolic engineering, and predict how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.
In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.
During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying various cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing the control sticks and guiding an airplane from takeoff to landing.
In addition, the two professors will join Professor Lee, who will also serve as moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields, from machine learning to autonomous robotics, including unmanned vehicles and drones.
Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.
KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”
I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possibly other people’s) confusion as to what the Fourth Industrial Revolution might be in my Dec. 3, 2015 posting.
Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),
With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.
“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”
The rapid pace of artificial intelligence has stirred fears among some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease those concerns by having Quixote serve as a “human user manual,” teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.
For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.
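The pharmacy example can be sketched in a few lines of Python. This is a toy illustration of the value-alignment concept only, not the actual Quixote code; the action names and reward numbers here are invented:

```python
# Toy illustration: how a story-derived reward changes which plan a
# task-driven agent prefers. All names and numbers are hypothetical.

# Each action's time cost. A naive agent minimizes time alone.
ACTIONS = {
    "grab_and_leave": 1,   # fastest, but socially unacceptable
    "wait_in_line":   5,
    "chat_politely":  7,
}

# Story-derived "value alignment": protagonist-like behaviour
# (waiting, paying) is rewarded; antagonist-like behaviour is punished.
ALIGNMENT_REWARD = {
    "grab_and_leave": -100,
    "wait_in_line":    +10,
    "chat_politely":    +5,
}

def best_action(use_alignment: bool) -> str:
    def score(action: str) -> float:
        s = -ACTIONS[action]          # less time = higher score
        if use_alignment:
            s += ALIGNMENT_REWARD[action]
        return s
    return max(ACTIONS, key=score)

print(best_action(use_alignment=False))  # grab_and_leave
print(best_action(use_alignment=True))   # wait_in_line
```

Without the alignment term the agent "robs" the pharmacy; with it, the same optimization picks waiting in line.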
For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.
The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.
Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.
“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”
Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.
Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.
“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”
This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals, and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.
The Victoria & Albert Museum (V&A) in London, UK, opened its Engineering Season show on May 18, 2016 (it runs until Nov. 6, 2016) featuring a robot installation and an exhibition putting the spotlight on Ove Arup, “the most significant engineer of the 20th century” according to the V&A’s May ??, 2016 press release,
The first major retrospective of the most influential engineer of the 20th century and a site specific installation inspired by nature and fabricated by robots will be the highlights of the V&A’s first ever Engineering Season, complemented by displays, events and digital initiatives dedicated to global engineering design. The V&A Engineering Season will highlight the importance of engineering in our daily lives and consider engineers as the ‘unsung heroes’ of design, who play a vital and creative role in the creation of our built environment.
Before launching into the robot/biomimicry part of this story, here’s a very brief description of why Ove Arup is considered so significant and influential,
Engineering the World: Ove Arup and the Philosophy of Total Design will explore the work and legacy of Ove Arup (1895-1988), … . Ove pioneered a multidisciplinary approach to design that has defined the way engineering is understood and practiced today. Spanning 100 years of engineering and architectural design, the exhibition will be guided by Ove’s writings about design and include his early projects, such as the Penguin Pool at London Zoo, as well as renowned projects by the firm including Sydney Opera House [Australia] and the Centre Pompidou in Paris. Arup’s collaborations with major architects of the 20th century pioneered new approaches to design and construction that remain influential today, with the firm’s legacy visible in many buildings across London and around the world. It will also showcase recent work by Arup, from major infrastructure projects like Crossrail and novel technologies for acoustics and crowd flow analysis, to engineering solutions for open source housing design.
Robots, biomimicry and the Elytra Filament Pavilion
A May 18, 2016 article by Tim Master for BBC (British Broadcasting Corporation) news online describes the pavilion installation,
A robot has taken up residence at the Victoria & Albert Museum to construct a new installation in its London gardens.
The robot – which resembles something from a car assembly line – will build new sections of the Elytra Filament Pavilion over the coming months.
The futuristic structure will grow and change shape using data based on how visitors interact with it.
Elytra’s canopy is made up of 40 hexagonal cells – made from strips of carbon and glass fibre – which have been tightly wound into shape by the computer-controlled Kuka robot.
Each cell takes about three hours to build. On certain days, visitors to the V&A will be able to watch the robot create new cells that will be added to the canopy.
Elytra Filament Pavilion has been created by experimental German architect Achim Menges with Moritz Dörstelmann, structural engineer Jan Knippers and climate engineer Thomas Auer.
Menges and Knippers are leaders of research institutes at the University of Stuttgart that are pioneering the integration of biomimicry, robotic fabrication and new materials research in architecture. This installation emerges from their ongoing research projects and is their first-ever major commission in the UK.
The pavilion explores the impact of emerging robotic technologies on architectural design, engineering and making.
Its design is inspired by lightweight construction principles found in nature: the filament structures of the forewing shells of flying beetles, known as elytra. Made of glass and carbon fibre, each component of the undulating canopy is produced using an innovative robotic winding technique developed by the designers. Like beetle elytra, the pavilion’s filament structure is both very strong and very light – spanning over 200 square metres, it weighs less than 2.5 tonnes.
Elytra is a responsive shelter that will grow over the course of the V&A Engineering Season. Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion and monitor the structure’s behaviour, ultimately informing how and where the canopy grows. During a series of special events as part of the Engineering Season, visitors will have the opportunity to witness the pavilion’s construction live, as new components are fabricated on-site by a Kuka robot.
Unfortunately, I haven’t been able to find more technical detail, particularly about the materials being used in the construction of the pavilion, on the V&A website.
One observation: I’m a little uncomfortable with how they’re gathering data – “Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion … .” It sounds like surveillance to me.
Nonetheless, the Engineering Season offers the promise of a very intriguing approach to fulfilling the V&A’s mandate as a museum dedicated to decorative arts and design.
For decades, fiction has explored, more or less seriously, the idea of tiny, ingestible robots that can enter the human body (Fantastic Voyage and Innerspace are two movie examples). The concept is coming closer to being realized, as per a May 12, 2016 news item on phys.org,
In experiments involving a simulation of the human esophagus and stomach, researchers at MIT [Massachusetts Institute of Technology], the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.
The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.
“It’s really exciting to see our small origami robots doing something with potentially important applications to health care,” says Rus, who also directs MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “For applications inside the body, we need a small, controllable, untethered robot system. It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”
Although the new robot is a successor to one reported at the same conference last year, the design of its body is significantly different. Like its predecessor, it can propel itself using what’s called a “stick-slip” motion, in which its appendages stick to a surface through friction when it executes a move, but slip free again when its body flexes to change its weight distribution.
Also like its predecessor — and like several other origami robots from the Rus group — the new robot consists of two layers of structural material sandwiching a material that shrinks when heated. A pattern of slits in the outer layers determines how the robot will fold when the middle layer contracts.
The robot’s envisioned use also dictated a host of structural modifications. “Stick-slip only works when, one, the robot is small enough and, two, the robot is stiff enough,” says Guitron [Steven Guitron, a graduate student in mechanical engineering]. “With the original Mylar design, it was much stiffer than the new design, which is based on a biocompatible material.”
To compensate for the biocompatible material’s relative malleability, the researchers had to come up with a design that required fewer slits. At the same time, the robot’s folds increase its stiffness along certain axes.
But because the stomach is filled with fluids, the robot doesn’t rely entirely on stick-slip motion. “In our calculation, 20 percent of forward motion is by propelling water — thrust — and 80 percent is by stick-slip motion,” says Miyashita [Shuhei Miyashita, who was a postdoc at CSAIL when the work was done and is now a lecturer in electronics at the University of York, England]. “In this regard, we actively introduced and applied the concept and characteristics of the fin to the body design, which you can see in the relatively flat design.”
It also had to be possible to compress the robot enough that it could fit inside a capsule for swallowing; similarly, when the capsule dissolved, the forces acting on the robot had to be strong enough to cause it to fully unfold. Through a design process that Guitron describes as “mostly trial and error,” the researchers arrived at a rectangular robot with accordion folds perpendicular to its long axis and pinched corners that act as points of traction.
In the center of one of the forward accordion folds is a permanent magnet that responds to changing magnetic fields outside the body, which control the robot’s motion. The forces applied to the robot are principally rotational. A quick rotation will make it spin in place, but a slower rotation will cause it to pivot around one of its fixed feet. In the researchers’ experiments, the robot uses the same magnet to pick up the button battery.
The researchers tested about a dozen different possibilities for the structural material before settling on the type of dried pig intestine used in sausage casings. “We spent a lot of time at Asian markets and the Chinatown market looking for materials,” Li [Shuguang Li, a CSAIL postdoc] says. The shrinking layer is a biodegradable shrink wrap called Biolefin.
To design their synthetic stomach, the researchers bought a pig stomach and tested its mechanical properties. Their model is an open cross-section of the stomach and esophagus, molded from a silicone rubber with the same mechanical profile. A mixture of water and lemon juice simulates the acidic fluids in the stomach.
Every year, 3,500 swallowed button batteries are reported in the U.S. alone. Frequently, the batteries are digested normally, but if they come into prolonged contact with the tissue of the esophagus or stomach, they can cause an electric current that produces hydroxide, which burns the tissue. Miyashita employed a clever strategy to convince Rus that the removal of swallowed button batteries and the treatment of consequent wounds was a compelling application of their origami robot.
“Shuhei bought a piece of ham, and he put the battery on the ham,” Rus says. [emphasis mine] “Within half an hour, the battery was fully submerged in the ham. So that made me realize that, yes, this is important. If you have a battery in your body, you really want it out as soon as possible.”
“This concept is both highly creative and highly practical, and it addresses a clinical need in an elegant way,” says Bradley Nelson, a professor of robotics at the Swiss Federal Institute of Technology Zurich. “It is one of the most convincing applications of origami robots that I have seen.”
I wonder if they ate the ham afterwards.
Happily, MIT has produced a video featuring this ingestible, origami robot,
Finally, this team has a couple more members than the previously mentioned Rus, Miyashita, and Li,
… Kazuhiro Yoshida of Tokyo Institute of Technology, who was visiting MIT on sabbatical when the work was done; and Dana Damian of the University of Sheffield, in England.
As Rus notes in the video, the next step will be in vivo (animal) studies.
This story poses some interesting questions that touch on the uneasiness being felt as computers get ‘smarter’. From an April 13, 2016 news item on ScienceDaily,
Philosopher René Descartes’ saying about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.
Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s algorithm AlphaGo beat a professional player in the game Go–an achievement demonstrating the explosive speed of development in machine capabilities.
But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the journal Nature.
“It may sound dramatic, but we are currently in a race with technology — and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine,” explains Jacob Sherson.
At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer’s enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers work to transfer some human traits to the way computer algorithms work.
Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER–combining the processing power of computers with human ingenuity — becomes clear.
Our common intuition
Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game “Quantum Moves.” Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics.
“The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions,” explains Jacob Sherson.
The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers — the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm.
“The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players’ solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense we are downloading our common intuition to the computer,” says Jacob Sherson.
And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.
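The hybrid idea (letting a computer's local optimizer polish a human player's rough, intuitive solution) can be caricatured in a few lines of Python. This is a toy sketch only: the cost function below is an invented stand-in for a real quantum-control landscape, not anything from the Quantum Moves project:

```python
import math

def cost(x: float) -> float:
    """A bumpy 1-D landscape standing in for a hard quantum-control problem:
    many local minima trap a naive, unseeded search."""
    return (x - 3.7) ** 2 + math.sin(8 * x)

def refine(x: float, steps: int = 5000, lr: float = 0.01) -> float:
    """Local optimizer: it can only polish whatever starting guess it is
    handed, taking a small step whenever the step lowers the cost."""
    for _ in range(steps):
        for dx in (-lr, lr):
            if cost(x + dx) < cost(x):
                x += dx
    return x

human_seed = 3.5            # a player's rough, intuitive solution
polished = refine(human_seed)

# The optimizer never makes the seed worse, only better:
assert cost(polished) <= cost(human_seed)
```

The division of labour mirrors the article: the human supplies a good region of the search space; the machine supplies the fine-grained, exhaustive polishing.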
After the buildup, the press release focuses on citizen science and computer games,
Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materialising.
In recent years, a new phenomenon has appeared–citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science.
“Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research,” explains Jacob Sherson.
The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We ‘guess’ based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition.
“We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition,” says Jacob Sherson and continues:
“Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years.”
Here’s a link to and a citation for the paper,
Exploring the quantum speed limit with computer games by Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, & Jacob F. Sherson. Nature 532, 210–213 (14 April 2016) doi:10.1038/nature17620 Published online 13 April 2016
I have two robot news bits for this posting. The first probes the unease currently being expressed (pop culture movies, Stephen Hawking, the Cambridge Centre for Existential Risk, etc.) about robots and their increasing intelligence and increased use in all types of labour formerly and currently performed by humans. The second item is about a research project where ‘artificial agents’ (robots) are being taught human values with stories.
Human labour obsolete?
‘When machines can do any job, what will humans do?’ is the question being asked in a presentation by Rice University computer scientist Moshe Vardi at the American Association for the Advancement of Science (AAAS) annual meeting held in Washington, D.C., Feb. 11-15, 2016.
Rice University computer scientist Moshe Vardi expects that within 30 years, machines will be capable of doing almost any job that a human can. In anticipation, he is asking his colleagues to consider the societal implications. Can the global economy adapt to greater than 50 percent unemployment? Will those out of work be content to live a life of leisure?
“We are approaching a time when machines will be able to outperform humans at almost any task,” Vardi said. “I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?”
Vardi addressed this issue Sunday [Feb. 14, 2016] in a presentation titled “Smart Robots and Their Impact on Society” at one of the world’s largest and most prestigious scientific meetings — the annual meeting of the American Association for the Advancement of Science in Washington, D.C.
“The question I want to put forward is, Does the technology we are developing ultimately benefit mankind?” Vardi said. He asked the question after presenting a body of evidence suggesting that the pace of advancement in the field of artificial intelligence (AI) is increasing, even as existing robotic and AI technologies are eliminating a growing number of middle-class jobs and thereby driving up income inequality.
Vardi, a member of both the National Academy of Engineering and the National Academy of Science, is a Distinguished Service Professor and the Karen Ostrum George Professor of Computational Engineering at Rice, where he also directs Rice’s Ken Kennedy Institute for Information Technology. Since 2008 he has served as the editor-in-chief of Communications of the ACM, the flagship publication of the Association for Computing Machinery (ACM), one of the world’s largest computational professional societies.
Vardi said some people believe that future advances in automation will ultimately benefit humans, just as automation has benefited society since the dawn of the industrial age.
“A typical answer is that if machines will do all our work, we will be free to pursue leisure activities,” Vardi said. But even if the world economic system could be restructured to enable billions of people to live lives of leisure, Vardi questioned whether it would benefit humanity.
“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being,” he said.
“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘In the sweat of thy face shalt thou eat bread,’” Vardi said. “We need to rise to the occasion and meet this challenge” before human labor becomes obsolete, he said.
In addition to dual membership in the National Academies, Vardi is a Guggenheim fellow and a member of the American Academy of Arts and Sciences, the European Academy of Sciences and the Academia Europaea. He is a fellow of the ACM, the American Association for Artificial Intelligence and the Institute for Electrical and Electronics Engineers (IEEE). His numerous honors include the Southeastern Universities Research Association’s 2013 Distinguished Scientist Award, the 2011 IEEE Computer Society Harry H. Goode Award, the 2008 ACM Presidential Award, the 2008 Blaise Pascal Medal for Computer Science from the European Academy of Sciences and the 2000 Gödel Prize for outstanding papers in the area of theoretical computer science.
Vardi joined Rice’s faculty in 1993. His research centers upon the application of logic to computer science, database systems, complexity theory, multi-agent systems and specification and verification of hardware and software. He is the author or co-author of more than 500 technical articles and of two books, “Reasoning About Knowledge” and “Finite Model Theory and Its Applications.”
In a Feb. 5, 2015 post, I rounded up a number of articles about our robot future. It still provides a useful overview of the thinking on the topic.
The rapid pace of artificial intelligence (AI) development has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?
Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI [Association for the Advancement of Artificial Intelligence]-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research — the Scheherazade system — which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.
Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.
For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists; or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.
Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced by uncovering all possible steps in a given scenario and mapping them into a plot trajectory tree, which the robotic agent then uses to make “plot choices” (akin to what humans might remember as a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on its choices.
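The reward-shaping idea behind Quixote can be sketched in a few lines of Python. To be clear, this is a toy illustration and not the researchers’ code: the action names, sequences, and scoring weights are all invented for the example. The only point is that a story-derived bonus can flip which behavior a trial-and-error learner prefers.

```python
# Toy sketch of Quixote-style value alignment (hypothetical names/weights):
# a base reward favors speed, while a "story" bonus derived from an
# acceptable plot sequence rewards protagonist-like behavior.

# Two hypothetical action sequences for the pharmacy errand
ROB = ["enter", "grab_medicine", "flee"]
POLITE = ["enter", "wait_in_line", "pay", "leave"]

# Plot sequence learned from stories: the socially acceptable one
STORY_PLOT = POLITE

def base_reward(sequence):
    # Fewer steps = faster task completion = higher base reward
    return 10 - len(sequence)

def story_bonus(sequence, plot, weight=5):
    # Reward each action that appears in the protagonist's plot sequence
    return weight * sum(1 for step in sequence if step in plot)

def total_reward(sequence, aligned=True):
    bonus = story_bonus(sequence, STORY_PLOT) if aligned else 0
    return base_reward(sequence) + bonus

# Without alignment, robbing (the shorter sequence) scores higher...
assert total_reward(ROB, aligned=False) > total_reward(POLITE, aligned=False)
# ...with the story-derived signal, polite behavior wins.
assert total_reward(POLITE, aligned=True) > total_reward(ROB, aligned=True)
```

In the actual system the reward signal would shape trial-and-error (reinforcement) learning over many episodes rather than score two fixed sequences, but the asymmetry shown here is the core mechanism.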
The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.
“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”
There are five upcoming science events in seven days (Jan. 23 – 28, 2016) in the Vancouver area.
Einstein Centenary Series
The first is a Saturday morning, Jan. 23, 2016 lecture, the first for 2016 in a joint TRIUMF (Canada’s national laboratory for particle and nuclear physics), UBC (University of British Columbia), and SFU (Simon Fraser University) series featuring Einstein’s work and its implications. From the event brochure (pdf), which lists the entire series,
TRIUMF, UBC and SFU are proud to present the 2015-2016 Saturday morning lecture series on the frontiers of modern physics. These free lectures are at a level appropriate for high school students and members of the general public.
Parallel lecture series will be held at TRIUMF on the UBC South Campus, and at SFU Surrey Campus.
Lectures start at 10:00 am and 11:10 am. Parking is available.
For information, registration and directions, see:
January 23, 2016 TRIUMF Auditorium (UBC, Vancouver)
1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)
January 30, 2016 SFU Surrey Room 2740 (SFU, Surrey Campus)
1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)
I believe these lectures are free. One more note, they will be capping off this series with a special lecture by Kip Thorne (astrophysicist and consultant for the movie Interstellar) at Science World, on Thursday, April 14, 2016. More about that * at a closer date.
On Tuesday, January 26, 2016 at 7:30 pm in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), Café Scientifique will be hosting a talk about science and serving patients (from the Jan. 5, 2016 announcement),
Our speakers for the evening will be Dr. Millan Patel and Dr. Shirin Kalyan. The title of their talk is:
Helping Science to Serve Patients
Science in general and biotechnology in particular are auto-catalytic. That is, they catalyze their own evolution and so generate breakthroughs at an exponentially increasing rate. The experience of patients is not exponentially getting better, however. This talk, with a medical geneticist and an immunologist who believe science can deliver far more for patients, will focus on structural and cultural impediments in our system and ways they and others have developed to either lower or leapfrog the barriers. We hope to engage the audience in a highly interactive discussion to share thoughts and perspectives on this important issue.
There is additional information about Dr. Millan Patel here and Dr. Shirin Kalyan here. It would appear both speakers are researchers and academics, and while I welcome the emphasis on the patient and the acknowledgement that medical research benefits are not being delivered in quantity or quality to patients, it seems odd that they don’t have a clinician (a doctor who deals almost exclusively with patients, as opposed to two researchers) to add to their perspective.
This is an art/science event from an organization that sprang into existence sometime during summer 2015 (see my July 7, 2015 posting featuring Curiosity Collider).
When: 8:00pm on Wednesday, January 27, 2016. Door opens at 7:30pm. Where: Café Deux Soleils, 2096 Commercial Drive, Vancouver, BC (Google Map). Cost: $5.00 cover (sliding scale) at the door. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events.
90 seconds to share your art-science ideas. Think they are “ridiculous”? Well, we think it could be ridiculously awesome – we are looking for creative ideas!
Don’t have an idea (yet)? Contribute by sharing your expertise.
Chat with other art-science enthusiasts, strike up a conversation to collaborate, all disciplines/backgrounds welcome.
Want to showcase your project in the future? Participate in our fall art-science competition (more to come)!
Follow updates on twitter via @ccollider or #CollideConquer
Good luck on the open mic (should you have a project)!
This particular Brain Talk event is taking place at Vancouver General Hospital (VGH; there is also another Brain Talks series which takes place at the University of British Columbia). Yes, members of the public can attend the VGH version; they didn’t throw me out the last time I was there. Here’s more about the next VGH Brain Talks,
Sleep: biological & pathological perspectives
Thursday, Jan 28, 6:00pm @ Paetzold Auditorium, Vancouver General Hospital
Peter Hamilton, Sleep technician ~ Sleep Architecture
You may want to keep in mind that the event is organized by people who don’t organize events often. Nice people but you may need to search for crackers for your cheese and your wine comes out of a box (and I think it might have been self-serve the time I attended).
What a fabulous week we have ahead of us—Happy Weekend!
*’when’ removed from the sentence on March 28, 2016.
A new sponge-like material, discovered by Monash [Monash University in Australia] researchers, could have diverse and valuable real-life applications. The new elastomer could be used to create soft, tactile robots to help care for elderly people, perform remote surgical procedures or build highly sensitive prosthetic hands.
Graphene-based cellular elastomer, or G-elastomer, is highly sensitive to pressure and vibrations. Unlike other viscoelastic substances such as polyurethane foam or rubber, G-elastomer bounces back extremely quickly under pressure, despite its exceptionally soft nature. This unique, dynamic response has never been found in existing soft materials, and has excited and intrigued researchers Professor Dan Li and Dr Ling Qiu from the Monash Centre for Atomically Thin Materials (MCATM).
According to Dr Qiu, “This graphene elastomer is a flexible, ultra-light material which can detect pressures and vibrations across a broad bandwidth of frequencies. It far exceeds the response range of our skin, and it also has a very fast response time, much faster than conventional polymer elastomer.
“Although we often take it for granted, the pressure sensors in our skin allow us to do things like hold a cup without dropping it, crushing it, or spilling the contents. The sensitivity and response time of G-elastomer could allow a prosthetic hand or a robot to be even more dexterous than a human, while the flexibility could allow us to create next generation flexible electronic devices,” he said.
Professor Li, a director of MCATM, said, ‘Although we are still in the early stages of discovering graphene’s potential, this research is an excellent breakthrough. What we do know is that graphene could have a huge impact on Australia’s economy, both from a resources and innovation perspective, and we’re aiming to be at the forefront of that research and development.’
Dr Qiu’s research has been published in the latest edition of the prestigious journal Advanced Materials and is protected by a suite of patents.
Are they trying to protect the work from competition or wholesale theft of their work?
After all, the idea behind patents and copyrights was to encourage innovation and competition by ensuring that inventors and creators would benefit from their work. An example that comes to mind is the Xerox company which for many years had a monopoly on photocopy machines by virtue of their patent. Once the patent ran out (patents and copyrights were originally intended to be in place for finite time periods) and Xerox had made much, much money, competitors were free to create and market their own photocopy machines, which they did quite promptly. Since those days, companies have worked to extend patent and copyright time periods in efforts to stifle competition.
Getting back to Monash, I do hope the researchers are able to benefit from their work and wish them well. I also hope that they enjoy plenty of healthy competition spurring them onto greater innovation.
A German team that’s been working with sperm to develop a biological motor has announced it may have an alternative treatment for infertility, according to a Jan. 13, 2016 news item on Nanowerk,
Sperm that don’t swim well [also known as low motility] rank high among the main causes of infertility. To give these cells a boost, women trying to conceive can turn to artificial insemination or other assisted reproduction techniques, but success can be elusive. In an attempt to improve these odds, scientists have developed motorized “spermbots” that can deliver poor swimmers — that are otherwise healthy — to an egg. …
Artificial insemination is a relatively inexpensive and simple technique that involves introducing sperm to a woman’s uterus with a medical instrument. Overall, the success rate is on average under 30 percent, according to the Human Fertilisation & Embryology Authority of the United Kingdom. In vitro fertilization can be more effective, but it’s a complicated and expensive process. It requires removing eggs from a woman’s ovaries with a needle, fertilizing them outside the body and then transferring the embryos to her uterus or a surrogate’s a few days later. Each step comes with a risk for failure. Mariana Medina-Sánchez, Lukas Schwarz, Oliver G. Schmidt and colleagues from the Institute for Integrative Nanosciences at IFW Dresden in Germany wanted to see if they could come up with a better option than the existing methods.
Building on previous work on micromotors, the researchers constructed tiny metal helices just large enough to fit around the tail of a sperm. Their movements can be controlled by a rotating magnetic field. Lab testing showed that the motors can be directed to slip around a sperm cell, drive it to an egg for potential fertilization and then release it. The researchers say that although much more work needs to be done before their technique can reach clinical testing, the success of their initial demonstration is a promising start.
For those who prefer to watch their news, there’s this,
This team got a flurry of interest in 2014 when they first announced their research on using sperm as a biological motor. Tracy Staedter in a Jan. 15, 2014 article for Discovery.com describes their then results,
To create these tiny robots, the scientists first had to catch a few. First, they designed microtubes, which are essentially thin sheets of titanium and iron — which have a magnetic property — rolled into conical tubes, with one end wider than the other. Next, they put the microtubes into a solution in a Petri dish and added bovine sperm cells, which are similar in size to human sperm. When a live sperm entered the wider end of the tube, it became trapped down near the narrow end. The scientists also closed the wider end, so the sperm wouldn’t swim out. And because sperm are so determined, the trapped cell pushed against the tube, moving it forward.
Next, the scientists used a magnetic field to guide the tube in the direction they wanted it to go, relying on the sperm for the propulsion.
The quick-swimming spermbots could be controlled from outside a person’s body to deliver payloads of drugs, and even sperm itself, to parts of the body where they’re needed, whether that’s a cancer tumor or an egg.
This work isn’t nanotechnology per se but it has been published in the American Chemical Society’s journal Nano Letters. Here’s a link to and a citation for the paper,
KAIST researchers will lead an IdeasLab on biotechnology for an aging society while HUBO, the winner of the 2015 DARPA Robotics Challenge, will interact with the forum participants, offering an experience of state-of-the-art robotics technology
Moving on from the news release’s subtitle, there’s more enlightenment,
Representatives from the Korea Advanced Institute of Science and Technology (KAIST) will attend the 2016 Annual Meeting of the World Economic Forum to run an IdeasLab and showcase its humanoid robot.
With over 2,500 leaders from business, government, international organizations, civil society, academia, media, and the arts expected to participate, the 2016 Annual Meeting will take place on Jan. 20-23, 2016 in Davos-Klosters, Switzerland. Under the theme of ‘Mastering the Fourth Industrial Revolution,’ [emphasis mine] global leaders will discuss the period of digital transformation [emphasis mine] that will have profound effects on economies, societies, and human behavior.
President Sung-Mo Steve Kang of KAIST will join the Global University Leaders Forum (GULF), a high-level academic meeting to foster collaboration among experts on issues of global concern for the future of higher education and the role of science in society. He will discuss how the emerging revolution in technology will affect the way universities operate and serve society. KAIST is the only Korean university participating in GULF, which is composed of prestigious universities invited from around the world.
Four KAIST professors, including Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department, will lead an IdeasLab on ‘Biotechnology for an Aging Society.’
Professor Lee said, “In recent decades, much attention has been paid to the potential effect of the growth of an aging population and problems posed by it. At our IdeasLab, we will introduce some of our research breakthroughs in biotechnology to address the challenges of an aging society.”
In particular, he will present his latest research in systems biotechnology and metabolic engineering. His research has explained the mechanisms of how traditional Oriental medicine works in our bodies by identifying structural similarities between effective compounds in traditional medicine and human metabolites, and has proposed more effective treatments by employing such compounds.
KAIST will also display its networked mobile medical service system, ‘Dr. M.’ Built upon a ubiquitous and mobile Internet, such as the Internet of Things, wearable electronics, and smart homes and vehicles, Dr. M will provide patients with a more affordable and accessible healthcare service.
In addition, Professor Jun-Ho Oh of the Mechanical Engineering Department will showcase his humanoid robot, ‘HUBO,’ during the Annual Meeting. His research team won the International Humanoid Robotics Challenge hosted by the United States Defense Advanced Research Projects Agency (DARPA), which was held in Pomona, California, on June 5-6, 2015. With 24 international teams participating in the finals, HUBO completed all eight tasks in 44 minutes and 28 seconds, 6 minutes earlier than the runner-up, and almost 11 minutes earlier than the third-place team. Team KAIST walked away with the grand prize of USD 2 million.
Professor Oh said, “Robotics technology will grow exponentially in this century, becoming a real driving force to expedite the Fourth Industrial Revolution. I hope HUBO will offer an opportunity to learn about the current advances in robotics technology.”
President Kang pointed out, “KAIST has participated in the Annual Meeting of the World Economic Forum since 2011 and has engaged with a broad spectrum of global leaders through numerous presentations and demonstrations of our excellence in education and research. Next year, we will choreograph our first robotics exhibition on HUBO and present high-tech research results in biotechnology, which, I believe, epitomizes how science and technology breakthroughs in the Fourth Industrial Revolution will shape our future in an unprecedented way.”
Based on what I’m reading in the KAIST news release, I think the conversation about the ‘Fourth revolution’ may veer toward robotics and artificial intelligence (referred to in code as “digital transformation”) as developments in these fields are likely to affect various economies. Before proceeding with that thought, take a look at this video showcasing HUBO at the DARPA challenge,
I’m quite impressed with how the robot can recalibrate its grasp so it can pick things up and plug an electrical cord into an outlet, and with how it knows whether wheels or legs will be needed to complete a task, all thanks to algorithms that give the robot a type of artificial intelligence. While it may seem more like a machine than anything else, there’s also this version of a HUBO,
Photo by David Hanson, 26 October 2006 (original upload date). Source: transferred from en.wikipedia to Wikimedia Commons by Mac. Author: Dayofid at English Wikipedia.
It’ll be interesting to note if the researchers make the HUBO seem more humanoid by giving it a face for its interactions with WEF attendees. It would be more engaging but also more threatening since there is increasing concern over robots taking work away from humans with implications for various economies. There’s more about HUBO in its Wikipedia entry.
As for the IdeasLab, that’s been in place at the WEF since 2009 according to this WEF July 19, 2011 news release announcing an ideasLab hub (Note: A link has been removed),
The World Economic Forum is publicly launching its biannual interactive IdeasLab hub on 19 July at 10.00 CEST. The unique IdeasLab hub features short documentary-style, high-definition (HD) videos of preeminent 21st century ideas and critical insights. The hub also provides dynamic Pecha Kucha presentations and visual IdeaScribes that trace and package complex strategic thinking into engaging and powerful images. All videos are HD broadcast quality.
To share the knowledge captured by the IdeasLab sessions, which have been running since 2009, the Forum is publishing 23 of the latest sessions, seen as the global benchmark of collaborative learning and development.
So while you might not be able to visit an IdeasLab presentation at the WEF meetings, you could get a chance to see them later.
Getting back to the robotics and artificial intelligence aspect of the 2016 WEF’s ‘digital’ theme, I noticed some reluctance to discuss how the field of robotics is affecting work and jobs in a broadcast of the Canadian television show ‘Conversations with Conrad’.
For those unfamiliar with the interviewer, Conrad Black is somewhat infamous in Canada for a number of reasons (from the Conrad Black Wikipedia entry), Note: Links have been removed,
Conrad Moffat Black, Baron Black of Crossharbour, KSG (born 25 August 1944) is a Canadian-born British former newspaper publisher and author. He is a non-affiliated life peer, and a convicted felon in the United States for fraud.[n 1] Black controlled Hollinger International, once the world’s third-largest English-language newspaper empire, which published The Daily Telegraph (UK), Chicago Sun Times (U.S.), The Jerusalem Post (Israel), National Post (Canada), and hundreds of community newspapers in North America, before he was fired by the board of Hollinger in 2004.
In 2004, a shareholder-initiated prosecution of Black began in the United States. Over $80 million in assets was claimed to have been improperly taken or inappropriately spent by Black. He was convicted of three counts of fraud and one count of obstruction of justice in a U.S. court in 2007 and sentenced to six and a half years’ imprisonment. In 2011 two of the charges were overturned on appeal and he was re-sentenced to 42 months in prison on one count of mail fraud and one count of obstruction of justice. Black was released on 4 May 2012.
Despite or perhaps because of his chequered past, he is often a good interviewer and he definitely attracts interesting guests. In an Oct. 26, 2015 programme, he interviewed both former Canadian astronaut Chris Hadfield and Canadian-American David Frum, who’s currently an editor of Atlantic Monthly and a former speechwriter for George W. Bush.
It was Black’s conversation with Frum which surprised me. They discuss robotics without ever once using the word. In a section where Frum notes that manufacturing is returning to the US, he also notes that it doesn’t mean more jobs and cites a newly commissioned plant in the eastern US employing about 40 people where before it would have employed hundreds or thousands. Unfortunately, the video has not been made available as I write this (Nov. 20, 2015) but that situation may change. You can check here.
Final thought: my guess is that economic conditions are fragile, and I don’t think anyone wants to set off a panic by mentioning robotics and disappearing jobs.