Flubber (flying rubber) is an imaginary material that provided a plot point for two Disney science fiction comedies: The Absent-Minded Professor (1961) and its 1997 remake, Flubber. By contrast, ‘thubber’ (thermally conductive rubber) is a real new material developed at Carnegie Mellon University (US).
Carmel Majidi and Jonathan Malen of Carnegie Mellon University have developed a thermally conductive rubber material that represents a breakthrough for creating soft, stretchable machines and electronics. The findings were published in Proceedings of the National Academy of Sciences this week.
The new material, nicknamed “thubber,” is an electrically insulating composite that exhibits an unprecedented combination of metal-like thermal conductivity and elasticity similar to soft biological tissue; it can stretch to over six times its initial length.
“Our combination of high thermal conductivity and elasticity is especially critical for rapid heat dissipation in applications such as wearable computing and soft robotics, which require mechanical compliance and stretchable functionality,” said Majidi, an associate professor of mechanical engineering.
Applications could extend to industries like athletic wear and sports medicine—think of lighted clothing for runners and heated garments for injury therapy. Advanced manufacturing, energy, and transportation are other areas where stretchable electronic material could have an impact.
“Until now, high power devices have had to be affixed to rigid, inflexible mounts that were the only technology able to dissipate heat efficiently,” said Malen, an associate professor of mechanical engineering. “Now, we can create stretchable mounts for LED lights or computer processors that enable high performance without overheating in applications that demand flexibility, such as light-up fabrics and iPads that fold into your wallet.”
The key ingredient in “thubber” is a suspension of non-toxic liquid metal microdroplets. The liquid state allows the metal to deform with the surrounding rubber at room temperature. When the rubber is pre-stretched, the droplets form elongated pathways that conduct heat efficiently. Despite the high metal content, the material remains electrically insulating.
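To get a feel for why droplet loading matters, here is a minimal sketch (not the researchers' own model) using the textbook Maxwell-Garnett effective-medium formula for spherical inclusions. The conductivity values and volume fractions are illustrative assumptions, and the paper's elongated droplets would conduct better than a spherical-inclusion formula predicts.

```python
def maxwell_garnett(k_matrix, k_filler, phi):
    """Effective thermal conductivity (W/m·K) of a suspension of
    spherical droplets (volume fraction phi) in a matrix.
    Textbook Maxwell-Garnett estimate; illustrative only."""
    num = k_filler + 2 * k_matrix + 2 * phi * (k_filler - k_matrix)
    den = k_filler + 2 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# Illustrative values: silicone rubber ~0.2 W/m·K, a gallium-based
# liquid metal ~26 W/m·K (assumed, not from the paper).
for phi in (0.0, 0.25, 0.5):
    print(phi, round(maxwell_garnett(0.2, 26.0, phi), 3))
```

Even this simple model shows conductivity climbing several-fold with filler fraction; the elongation of the droplets under pre-stretch is what pushes the real composite well beyond the spherical prediction.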
To demonstrate these findings, the team mounted an LED light onto a strip of the material to create a safety lamp worn around a jogger’s leg. The “thubber” dissipated the heat from the LED, which would have otherwise burned the jogger. The researchers also created a soft robotic fish that swims with a “thubber” tail, without using conventional motors or gears.
“As the field of flexible electronics grows, there will be a greater need for materials like ours,” said Majidi. “We can also see it used for artificial muscles that power bio-inspired robots.”
Majidi and Malen acknowledge the efforts of lead authors Michael Bartlett, Navid Kazem, and Matthew Powell-Palm in performing this multidisciplinary work. They also acknowledge funding from the Air Force, NASA, and the Army Research Office.
For almost a month I’ve been meaning to get to this Feb. 1, 2017 essay by Andrew Maynard (director of Risk Innovation Lab at Arizona State University) and Jack Stilgoe (science policy lecturer at University College London [UCL]) on the topic of artificial intelligence and principles (Note: Links have been removed). First, a walk down memory lane,
Today [Feb. 1, 2017] in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.
In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.
The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head off top-down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.
The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.
Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than its 1975 equivalent, but there were some notable absences – no members of the public or their concerns, no journalists, and few experts in the responsible development of new technologies.
Maynard and Stilgoe offer a critique of the latest principles,
The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.
This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.
I encourage you to read this thoughtful essay in its entirety although I do have one nit to pick: Why only US and UK scientists? I imagine the answer may lie in funding and logistics issues but I find it surprising that the critique makes no mention of the international community as a nod to inclusion.
For anyone interested in the Asilomar AI principles (2017), you can find them here. You can also find videos of the two-day workshop (Jan. 31 – Feb. 1, 2017), titled The Frontiers of Machine Learning (a Raymond and Beverly Sackler USA-UK Scientific Forum hosted by the US National Academy of Sciences), here (videos for each session are available on YouTube).
8 February – 3 September 2017, Science Museum, London
Admission: £15 adults, £13 concessions (Free entry for under 7s; family tickets available)
Tickets available in the Museum or via sciencemuseum.org.uk/robots
Supported by the Heritage Lottery Fund
Throughout history, artists and scientists have sought to understand what it means to be human. The Science Museum’s new Robots exhibition, opening in February 2017, will explore this very human obsession to recreate ourselves, revealing the remarkable 500-year story of humanoid robots.
Featuring a unique collection of over 100 robots, from a 16th-century mechanical monk to robots from science fiction and modern-day research labs, this exhibition will enable visitors to discover the cultural, historical and technological context of humanoid robots. Visitors will be able to interact with some of the 12 working robots on display. Among many other highlights will be an articulated iron manikin from the 1500s, Cygan, a 2.4m tall 1950s robot with a glamorous past, and one of the first walking bipedal robots.
Robots have been at the heart of popular culture since the word ‘robot’ was first used in 1920, but their fascinating story dates back many centuries. Set in five different periods and places, this exhibition will explore how robots and society have been shaped by religious belief, the industrial revolution, 20th century popular culture and dreams about the future.
The quest to build ever more complex robots has transformed our understanding of the human body, and today robots are becoming increasingly human, learning from mistakes and expressing emotions. In the exhibition, visitors will go behind the scenes to glimpse recent developments from robotics research, exploring how roboticists are building robots that resemble us and interact in human-like ways. The exhibition will end by asking visitors to imagine what a shared future with robots might be like. Robots has been generously supported by the Heritage Lottery Fund, with a £100,000 grant from the Collecting Cultures programme.
Ian Blatchford, Director of the Science Museum Group said: ‘This exhibition explores the uniquely human obsession of recreating ourselves, not through paint or marble but in metal. Seeing robots through the eyes of those who built or gazed in awe at them reveals much about humanity’s hopes, fears and dreams.’
‘The latest in our series of ambitious, blockbuster exhibitions, Robots explores the wondrously rich culture, history and technology of humanoid robotics. Last year we moved gigantic spacecraft from Moscow to the Museum, but this year we will bring a robot back to life.’
Today [May ?, 2016] the Science Museum launched a Kickstarter campaign to rebuild Eric, the UK’s first robot. Originally built in 1928 by Captain Richards & A.H. Reffell, Eric was one of the world’s first robots. Built less than a decade after the word robot was first used, he travelled the globe with his makers and amazed crowds in the UK, US and Europe, before disappearing forever.
Getting back to the exhibition, the Guardian’s Ian Sample has written up a Feb. 7, 2017 preview (Note: Links have been removed),
Eric the robot wowed the crowds. He stood and bowed and answered questions as blue sparks shot from his metallic teeth. The British creation was such a hit he went on tour around the world. When he arrived in New York, in 1929, a theatre nightwatchman was so alarmed he pulled out a gun and shot at him.
The curators at London’s Science Museum hope for a less extreme reaction when they open Robots, their latest exhibition, on Wednesday [Feb. 8, 2017]. The collection of more than 100 objects is a treasure trove of delights: a miniature iron man with moving joints; a robotic swan that enthralled Mark Twain; a tiny metal woman with a wager cup who is propelled by a mechanism hidden up her skirt.
The pieces are striking and must have dazzled in their day. Ben Russell, the lead curator, points out that most people would not have seen a clock when they first clapped eyes on one exhibit, a 16th century automaton of a monk [emphasis mine], who trundled along, moved his lips, and beat his chest in contrition. It was surely mesmerising to the audiences of 1560. “Arthur C Clarke once said that any sufficiently advanced technology is indistinguishable from magic,” Russell says. “Well, this is where it all started.”
In every chapter of the 500-year story, robots have held a mirror to human society. Some of the earliest devices brought the Bible to life. One model of Christ on the cross rolls his head and oozes wooden blood from his side as four figures reach up. The mechanisation of faith must have drawn the congregations as much as any sermon.
But faith was not the only focus. Through clockwork animals and human figurines, model makers explored whether humans were simply conscious machines. They brought order to the universe with orreries and astrolabes. The machines became more lighthearted in the enlightened 18th century, when automatons of a flute player, a writer, and a defecating duck all made an appearance. A century later, the style was downright rowdy, with drunken aristocrats, preening dandies and the disturbing life of a sausage from farm to mouth all being recreated as automata.
That reference to an automaton of a monk reminded me of a July 22, 2009 posting where I excerpted a passage (from another blog) about a robot priest and a robot monk,
Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.
Sample’s preview takes the reader up to our own age and contemporary robots. And, there is another Guardian article offering a behind-the-scenes look at the then upcoming exhibition, a Jan. 28, 2016 piece by Jonathan Jones,
An android toddler lies on a pallet, its doll-like face staring at the ceiling. On a shelf rests a much more grisly creation that mixes imitation human bones and muscles, with wires instead of arteries and microchips in place of organs. It has no lower body, and a single Cyclopean eye. This store room is an eerie place, and it gets creepier as I glimpse behind the anatomical robot a hulking thing staring at me with glowing red eyes. Its plastic skin has been burned off to reveal a metal skeleton with pistons and plates of merciless strength. It is the Terminator, sent back in time by the machines who will rule the future to ensure humanity’s doom.
Backstage at the Science Museum, London, where these real experiments and a full-scale model from the Terminator films are gathered to be installed in the exhibition Robots, it occurs to me that our fascination with mechanical replacements for ourselves is so intense that science struggles to match it. We think of robots as artificial humans that can not only walk and talk but possess digital personalities, even a moral code. In short we accord them agency. Today, the real age of robots is coming, and yet even as these machines promise to transform work or make it obsolete, few possess anything like the charisma of the androids of our dreams and nightmares.
That’s why, although the robotic toddler sleeping in the store room is an impressive piece of tech, my heart leaps in another way at the sight of the Terminator. For this is a bad robot, a scary robot, a robot of remorseless malevolence. It has character, in other words. Its programmed persona (which in later films becomes much more helpful and supportive) is just one of those frightening, funny or touching personalities that science fiction has imagined for robots.
Can the real life – well, real simulated life – robots in the Science Museum’s new exhibition live up to these characters? The most impressively interactive robot in the show will be RoboThespian, who acts as compere for its final gallery displaying the latest advances in robotics. He stands at human height, with a white plastic face and metal arms and legs, and can answer questions about the value of pi and the nature of free will. “I’m a very clever robot,” RoboThespian claims, plausibly, if a little obnoxiously.
Except not quite as clever as all that. A human operator at a computer screen connected with RoboThespian by wifi is looking through its video camera eyes and speaking with its digital voice. The result is huge fun – the droid moves in very lifelike ways as it speaks, and its interactions don’t need a live operator as they can be preprogrammed. But a freethinking, free-acting robot with a mind and personality of its own, RoboThespian is not.
Our fascination with synthetic humans goes back to the human urge to recreate life itself – to reproduce the mystery of our origins. Artists have aspired to simulate human life since ancient times. The ancient Greek myth of Pygmalion, who made a statue so beautiful he fell in love with it and prayed for it to come to life, is a mythic version of Greek artists such as Pheidias and Praxiteles whose statues, with their superb imitation of muscles and movement, seem vividly alive. The sculptures of centaurs carved for the Parthenon in Athens still possess that uncanny lifelike power.
Most of the finest Greek statues were bronze, and mythology tells of metal robots that sound very much like statues come to life, including the bronze giant Talos, who was to become one of cinema’s greatest robotic monsters thanks to the special effects genius of Ray Harryhausen in Jason and the Argonauts.
Renaissance art took the quest to simulate life to new heights, with awed admirers of Michelangelo’s David claiming it even seemed to breathe (as it really does almost appear to when soft daylight casts mobile shadow on superbly sculpted ribs). So it is oddly inevitable that one of the first recorded inventors of robots was Leonardo da Vinci, consummate artist and pioneering engineer. Leonardo apparently made, or at least designed, a robot knight to amuse the court of Milan. It worked with pulleys and was capable of simple movements. Documents of this invention are frustratingly sparse, but there is a reliable eyewitness account of another of Leonardo’s automata. In 1515 he delighted Francois I, king of France, with a robot lion that walked forward towards the monarch, then released a bunch of lilies, the royal flower, from a panel that opened in its back.
One of the most uncanny androids in the Science Museum show is from Japan, a freakily lifelike female robot called Kodomoroid, the world’s first robot newscaster. With her modest downcast gaze and fine artificial complexion, she has the same fetishised femininity you might see in a Manga comic and appears to reflect a specific social construction of gender. Whether you read that as vulnerability or subservience, presumably the idea is to make us feel we are encountering a robot with real personhood. Here is a robot that combines engineering and art just as Da Vinci dreamed – it has the mechanical genius of his knight and the synthetic humanity of his perfect portrait.
A team of researchers led by Biomedical Engineering Professor Sam Sia has developed a way to manufacture microscale machines from biomaterials that can safely be implanted in the body. Working with hydrogels, which are biocompatible materials that engineers have been studying for decades, Sia has invented a new technique that stacks the soft material in layers to make devices that have three-dimensional, freely moving parts. The study, published online January 4, 2017, in Science Robotics, demonstrates a fast manufacturing method Sia calls “implantable microelectromechanical systems” (iMEMS).
By exploiting the unique mechanical properties of hydrogels, the researchers developed a “locking mechanism” for precise actuation and movement of freely moving parts, which can provide functions such as valves, manifolds, rotors, pumps, and drug delivery. They were able to tune the biomaterials within a wide range of mechanical and diffusive properties and to control them after implantation without a sustained power supply such as a toxic battery. They then tested the “payload” delivery in a bone cancer model and found that the triggering of release of doxorubicin from the device over 10 days showed high treatment efficacy and low toxicity, at 1/10 of the standard systemic chemotherapy dose.
“Overall, our iMEMS platform enables development of biocompatible implantable microdevices with a wide range of intricate moving components that can be wirelessly controlled on demand and solves issues of device powering and biocompatibility,” says Sia, also a member of the Data Science Institute. “We’re really excited about this because we’ve been able to connect the world of biomaterials with that of complex, elaborate medical devices. Our platform has a large number of potential applications, including the drug delivery system demonstrated in our paper which is linked to providing tailored drug doses for precision medicine.”
I particularly like this bit about hydrogels being a challenge to work with and the difficulties of integrating both rigid and soft materials,
Most current implantable microdevices have static components rather than moving parts and, because they require batteries or other toxic electronics, have limited biocompatibility. Sia’s team spent more than eight years working on how to solve this problem. “Hydrogels are difficult to work with, as they are soft and not compatible with traditional machining techniques,” says Sau Yin Chin, lead author of the study who worked with Sia. “We have tuned the mechanical properties and carefully matched the stiffness of structures that come in contact with each other within the device. Gears that interlock have to be stiff in order to allow for force transmission and to withstand repeated actuation. Conversely, structures that form locking mechanisms have to be soft and flexible to allow for the gears to slip by them during actuation, while at the same time they have to be stiff enough to hold the gears in place when the device is not actuated. We also studied the diffusive properties of the hydrogels to ensure that the loaded drugs do not easily diffuse through the hydrogel layers.”
The team used light to polymerize sheets of gel and incorporated a stepper mechanism to control the z-axis and pattern the sheets layer by layer, giving them three-dimensionality. Controlling the z-axis enabled the researchers to create composite structures within one layer of the hydrogel while managing the thickness of each layer throughout the fabrication process. They were able to stack multiple layers that are precisely aligned and, because they could polymerize a layer at a time, one right after the other, the complex structure was built in under 30 minutes.
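The layer-by-layer process lends itself to a simple back-of-the-envelope sketch. This toy loop (all parameter values below are invented for illustration, not taken from the paper) shows how per-layer exposure and z-stepping times add up across a stack:

```python
def build_stack(n_layers, layer_thickness_um, exposure_s, step_s):
    """Toy simulation of stacking photopolymerized hydrogel layers.
    Returns (total height in µm, total fabrication time in seconds)."""
    height = 0.0
    elapsed = 0.0
    for _ in range(n_layers):
        elapsed += exposure_s          # UV exposure cures one layer
        height += layer_thickness_um   # the stack grows by one layer
        elapsed += step_s              # z-stage steps up for the next layer
    return height, elapsed

# Hypothetical numbers: 15 layers of 100 µm, 60 s exposure, 5 s step.
h, t = build_stack(n_layers=15, layer_thickness_um=100, exposure_s=60, step_s=5)
print(h, t / 60)  # 1500.0 µm tall in about 16 minutes
```

With these assumed numbers a multilayer device finishes comfortably inside the "under 30 minutes" the release describes, which is what makes sequential polymerization practical.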
Sia’s iMEMS technique addresses several fundamental considerations in building biocompatible microdevices, micromachines, and microrobots: how to power small robotic devices without using toxic batteries, how to make small biocompatible movable components that are not made of silicon, which has limited biocompatibility, and how to communicate wirelessly once implanted (radio frequency microelectronics require power, are relatively large, and are not biocompatible). The researchers were able to trigger the iMEMS device to release additional payloads over days to weeks after implantation. They were also able to achieve precise actuation by using magnetic forces to induce gear movements that, in turn, bend structural beams made of hydrogels with highly tunable properties. (Magnetic iron particles are commonly used and FDA-approved for human use as contrast agents.)
In collaboration with Francis Lee, an orthopedic surgeon at Columbia University Medical Center at the time of the study, the team tested the drug delivery system on mice with bone cancer. The iMEMS system delivered chemotherapy adjacent to the cancer, and limited tumor growth while showing less toxicity than chemotherapy administered throughout the body.
“These microscale components can be used for microelectromechanical systems, for larger devices ranging from drug delivery to catheters to cardiac pacemakers, and soft robotics,” notes Sia. “People are already making replacement tissues and now we can make small implantable devices, sensors, or robots that we can talk to wirelessly. Our iMEMS system could bring the field a step closer in developing soft miniaturized robots that can safely interact with humans and other living systems.”
The researchers have provided a video demonstrating their work (you may want to read the caption below before watching),
Magnetic actuation of the Geneva drive device. A magnet is placed about 1 cm below the device, without contacting it. The rotating magnet results in the rotational movement of the smaller driving gear. With each full rotation of this driving gear, the larger driven gear is engaged and rotates by 60°, exposing the next reservoir to the aperture on the top layer of the device.
—Video courtesy of Sau Yin Chin/Columbia Engineering
You can hear some background conversation but it doesn’t seem to have been included for informational purposes.
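The caption describes a classic Geneva mechanism: a 60° index per driver rotation implies six stations. Here is a minimal toy model of that indexing logic (the six-reservoir count is inferred from the 60° step and may not match the actual device):

```python
class GenevaDispenser:
    """Toy model of the Geneva-drive dosing mechanism: each full turn
    of the magnetically driven gear indexes the driven gear by 60°,
    aligning the next reservoir with the aperture."""
    STEP_DEG = 60

    def __init__(self):
        self.angle = 0    # orientation of the driven gear, degrees
        self.exposed = 0  # index of the reservoir under the aperture

    def rotate_driver_once(self):
        """One full rotation of the driving gear advances the driven gear."""
        self.angle = (self.angle + self.STEP_DEG) % 360
        self.exposed = self.angle // self.STEP_DEG

dev = GenevaDispenser()
for _ in range(4):
    dev.rotate_driver_once()
print(dev.angle, dev.exposed)  # 240 4
```

The appeal of a Geneva drive here is that continuous magnetic rotation outside the body translates into discrete, repeatable steps inside it, so each actuation exposes exactly one new reservoir.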
This is a take on artificial intelligence that I haven’t encountered before. Sean Captain’s Nov. 15, 2016 article for Fast Company profiles industry giant GE (General Electric) and its foray into that world (Note: Links have been removed),
When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.
The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February [2016?], but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”
Today [Nov. 15, 2016] GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.
The second acquisition, Wise.io, is less obvious. Founded by astrophysics and AI experts using machine learning to study the heavens, the company reapplied the tech to streamlining a company’s customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.
Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.
One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.
“I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.
I recommend reading the article in its entirety if you have the time. And, for those of us in British Columbia, Canada, it was a surprise to see BC Hydro on the list of customers for one of GE’s new acquisitions. As well, that business about the pipelines hits home hard given the current debates (Enbridge Northern Gateway Pipelines) here. *ETA Dec. 27, 2016: This was originally edited just prior to publication to include information about the announcement by the Trudeau cabinet approving two pipelines for TransMountain and Enbridge respectively while rejecting the Northern Gateway pipeline (Canadian Broadcasting Corporation [CBC] online news Nov. 29, 2016). I trust this second edit will stick.*
It seems GE is splashing out in a big way. There’s a second piece on Fast Company, a Nov. 16, 2016 article by Sean Captain (again) this time featuring a chat between an engineer and a robotic power plant,
We are entering the era of talking machines—and it’s about more than just asking Amazon’s Alexa to turn down the music. General Electric has built a digital assistant into its cloud service for managing power plants, jet engines, locomotives, and the other heavy equipment it builds. Over the internet, an engineer can ask a machine—even one hundreds of miles away—how it’s doing and what it needs. …
Voice controls are built on top of GE’s Digital Twin program, which uses sensor readings from machinery to create virtual models in cyberspace. “That model is constantly getting a stream of data, both operational and environmental,” says Colin Parris, VP at GE Software Research. “So it’s adapting itself to that type of data.” The machines live virtual lives online, allowing engineers to see how efficiently each is running and if they are wearing down.
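For readers curious what a digital twin looks like in code, here is a hypothetical sketch of the idea: a virtual model object that ingests a stream of operational and environmental readings and tracks estimated wear. GE's actual Predix/Digital Twin implementation is proprietary, so every name, formula, and threshold below is invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TurbineTwin:
    """Invented sketch of a digital twin: a virtual stand-in for one
    physical machine, updated from its sensor stream."""
    serial: str
    hours: float = 0.0
    wear: float = 0.0  # 0 = new, 1 = end of service life (toy scale)
    readings: list = field(default_factory=list)

    def ingest(self, temp_c, vibration_mm_s, hours):
        """Update the virtual model from one operational snapshot."""
        self.readings.append((temp_c, vibration_mm_s))
        self.hours += hours
        # Toy wear model: hot, high-vibration hours age the part faster.
        stress = max(temp_c - 600, 0) / 400 + vibration_mm_s / 10
        self.wear = min(1.0, self.wear + stress * hours / 10_000)

    def needs_maintenance(self):
        return self.wear > 0.8

twin = TurbineTwin(serial="GT-042")
twin.ingest(temp_c=750, vibration_mm_s=4.0, hours=500)
print(round(twin.wear, 3), twin.needs_maintenance())
```

The design point is that the twin, not the physical machine, answers the engineer's questions: because the model "adapts itself" to the incoming data, an efficiency or wear query becomes a cheap lookup against the virtual state.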
GE partnered with Microsoft on the interface, using the Bing Speech API (the same tech powering the Cortana digital assistant), with special training on key terms like “rotor.” The twin had little trouble understanding the Mandarin Chinese accent of Bo Yu, one of the researchers who built the system; nor did it stumble on Parris’s Trinidad accent. Digital Twin will also work with Microsoft’s HoloLens mixed reality goggles, allowing someone to step into a 3D image of the equipment.
I can’t help wondering if there are some jobs that were eliminated with this technology.
Long a science fiction trope, ‘morphing’ (in this case, of an airplane wing) is closer to reality with this work from the Massachusetts Institute of Technology (MIT). From a Nov. 3, 2016 MIT news release (also on EurekAlert),
When the Wright brothers accomplished their first powered flight more than a century ago, they controlled the motion of their Flyer 1 aircraft using wires and pulleys that bent and twisted the wood-and-canvas wings. This system was quite different than the separate, hinged flaps and ailerons that have performed those functions on most aircraft ever since. But now, thanks to some high-tech wizardry developed by engineers at MIT and NASA, some aircraft may be returning to their roots, with a new kind of bendable, “morphing” wing.
The new wing architecture, which could greatly simplify the manufacturing process and reduce fuel consumption by improving the wing’s aerodynamics, as well as improving its agility, is based on a system of tiny, lightweight subunits that could be assembled by a team of small specialized robots, and ultimately could be used to build the entire airframe. The wing would be covered by a “skin” made of overlapping pieces that might resemble scales or feathers.
The new concept is described in the journal Soft Robotics, in a paper by Neil Gershenfeld, director of MIT’s Center for Bits and Atoms (CBA); Benjamin Jenett, a CBA graduate student; Kenneth Cheung PhD ’12, a CBA alumnus and NASA research scientist; and four others.
Researchers have been trying for many years to achieve a reliable way of deforming wings as a substitute for the conventional, separate, moving surfaces, but all those efforts “have had little practical impact,” Gershenfeld says. The biggest problem was that most of these attempts relied on deforming the wing through the use of mechanical control structures within the wing, but these structures tended to be so heavy that they canceled out any efficiency advantages produced by the smoother aerodynamic surfaces. They also added complexity and reliability issues.
By contrast, Gershenfeld says, “We make the whole wing the mechanism. It’s not something we put into the wing.” In the team’s new approach, the whole shape of the wing can be changed, and twisted uniformly along its length, by activating two small motors that apply a twisting pressure to each wingtip.
Like building with blocks
The basic principle behind the new concept is the use of an array of tiny, lightweight structural pieces, which Gershenfeld calls “digital materials,” that can be assembled into a virtually infinite variety of shapes, much like assembling a structure from Lego blocks. The assembly, performed by hand for this initial experiment, could be done by simple miniature robots that would crawl along or inside the structure as it took shape. The team has already developed prototypes of such robots.
The individual pieces are strong and stiff, but the exact choice of the dimensions and materials used for the pieces, and the geometry of how they are assembled, allow for a precise tuning of the flexibility of the final shape. For the initial test structure, the goal was to allow the wing to twist in a precise way that would substitute for the motion of separate structural pieces (such as the small ailerons at the trailing edges of conventional wings), while providing a single, smooth aerodynamic surface.
Building up a large and complex structure from an array of small, identical building blocks, which have an exceptional combination of strength, light weight, and flexibility, greatly simplifies the manufacturing process, Gershenfeld explains. While the construction of light composite wings for today’s aircraft requires large, specialized equipment for layering and hardening the material, the new modular structures could be rapidly manufactured in mass quantities and then assembled robotically in place.
Gershenfeld and his team have been pursuing this approach to building complex structures for years, with many potential applications for robotic devices of various kinds. For example, this method could lead to robotic arms and legs whose shapes could bend continuously along their entire length, rather than just having a fixed number of joints.
This research, says Cheung, “presents a general strategy for increasing the performance of highly compliant — that is, ‘soft’ — robots and mechanisms,” by replacing conventional flexible materials with new cellular materials “that are much lower weight, more tunable, and can be made to dissipate energy at much lower rates” while having equivalent stiffness.
Saving fuel, cutting emissions
While exploring possible applications of this nascent technology, Gershenfeld and his team consulted with NASA engineers and others seeking ways to improve the efficiency of aircraft manufacturing and flight. They learned that “the idea that you could continuously deform a wing shape to do pure lift and roll has been a holy grail in the field, for both efficiency and agility,” he says. Given the importance of fuel costs in both the economics of the airline industry and that sector’s contribution to greenhouse gas emissions, even small improvements in fuel efficiency could have a significant impact.
Wind-tunnel tests of this structure showed that it at least matches the aerodynamic properties of a conventional wing, at about one-tenth the weight.
The “skin” of the wing also enhances the structure’s performance. It’s made from overlapping strips of flexible material, layered somewhat like feathers or fish scales, allowing for the pieces to move across each other as the wing flexes, while still providing a smooth outer surface.
The modular structure also provides greater ease of both assembly and disassembly: One of this system’s big advantages, in principle, Gershenfeld says, is that when it’s no longer needed, the whole structure can be taken apart into its component parts, which can then be reassembled into something completely different. Similarly, repairs could be made by simply replacing an area of damaged subunits.
“An inspection robot could just find where the broken part is and replace it, and keep the aircraft 100 percent healthy at all times,” says Jenett.
Following up on the successful wind tunnel tests, the team is now extending the work to tests of a flyable unpiloted aircraft, and initial tests have shown great promise, Jenett says. “The first tests were done by a certified test pilot, and he found it so responsive that he decided to do some aerobatics.”
Some of the first uses of the technology may be to make small, robotic aircraft — “super-efficient long-range drones,” Gershenfeld says, that could be used in developing countries as a way of delivering medicines to remote areas.
Liz Alexander in an Oct. 20, 2016 article for Fast Company describes a ‘futures’ game designed by Toronto, Canada-based Idea Couture,
Other than a brief chat with a college career counselor, or that time a family member asked what you wanted to be when you grew up, has anyone encouraged you to look into the future? Were you ever formally taught how to develop your capacity for foresight? Me neither.
A new game called IMPACT, by the innovation and design firm Idea Couture, wants to change that. Given how rapidly the workforce is evolving—not to mention life’s inherent uncertainty—IMPACT’s creators felt it might be useful to help people sharpen their ability to anticipate and respond to unexpected change, especially when it comes to their careers.
It’s designed for groups of three to five players (though up to six can play), and it’s arguably best suited to people ages 16 and older.
To begin, each player chooses a card that outlines their persona for the duration of the game. All are meant to represent a knowledge worker from the future workforce—someone who helps customize prescriptions for patients; uses social-media mining and systems thinking to assemble distributed teams; or develops living spaces, transportation solutions, and health innovations to make space travel more feasible for humans. And each persona card includes a set of optimal conditions for exercising their skill sets.
In each round, a player draws an “impact card” describing a technological breakthrough that may shake up their career prospects—for good or ill. Every player then has to react to its impact by adding or subtracting “influence cubes” to the game board, which covers 10 “domains” (agriculture, energy, transportation, etc.), only three of which are relevant to each character’s “preferred future.”
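The round mechanics described above (draw an impact card, adjust influence cubes across ten domains, score only the three domains in your persona’s preferred future) can be sketched as a few lines of code. The personas, card, and scoring rule here are invented for illustration and are not the game’s actual rules:

```python
# Illustrative sketch of an IMPACT-style round, based only on the
# mechanics described in the article; domain names, the persona, the
# impact card, and the scoring are all hypothetical.
DOMAINS = ["agriculture", "energy", "transportation", "health", "housing",
           "education", "finance", "media", "space", "manufacturing"]

class Player:
    def __init__(self, persona, preferred):
        self.persona = persona
        self.preferred = set(preferred)          # 3 of the 10 domains
        self.influence = {d: 0 for d in DOMAINS}

    def react(self, impact_card):
        """Add or subtract influence cubes in response to an impact card."""
        for domain, delta in impact_card.items():
            self.influence[domain] += delta

    def score(self):
        """Only the player's three preferred-future domains count."""
        return sum(self.influence[d] for d in self.preferred)

p = Player("pharmacogenomics customizer", ["health", "agriculture", "energy"])
p.react({"health": 2, "transportation": 1, "energy": -1})  # hypothetical card
print(p.score())  # 1
```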
As Elaine Cameron, resident futurist and senior director of the FUTURE Perspective Group at the public relations firm Burson-Marsteler, explains, “One of the things futurists learn to be comfortable with is a degree of uncertainty. What we are equipped to do is to track signals of change, anticipate the direction of travel, and imagine possible scenarios that could evolve”—all skills that IMPACT is meant to sharpen in players. “That way you have some kind of plan in place should any of those possibilities become reality.”
Cameron recently contributed to IMPACT’s Kickstarter campaign, which exceeded its target of $15,000 CAD earlier this month. It’s now fully funded and entering production, with Idea Couture taking preorders (at $65 USD) and pledging to donate 25 free copies to educators.
I [Liz Alexander] recently asked some volunteers to give IMPACT a spin. One was Debra France, a corporate educator at a global innovation company. In one round, France was faced with four cards describing real-life technological breakthroughs in green jet fuel, cheap spray-on solar cells, fuel-producing plants, and biomedical implants that can bond with human cells. While only some of these eco-friendly innovations worked to her persona’s strengths, she won the bonus round at the end of the game by coming up with the “era” headline: “It’s now easy being green.”
Afterward, France said she had a strong sense of the game’s real-world applications, including “for young people in STEM programs,” because it pushes players to consider “a broad range of possible future jobs that could help them decide which science or technology to pursue.”
If you have the time, it’s an interesting article.
Here’s a video Idea Couture produced touting their game,
Were you just as surprised to find out the Government of Canada has an innovation lab (Policy Horizons Canada)?
Here’s a little more about the government’s innovation lab from a Sept. 19, 2016 Idea Couture news release announcing their IMPACT Kickstarter campaign (closed on Oct. 1, 2016) on PRWeb (Note: A link has been removed),
The game was originally designed in collaboration with Policy Horizons Canada, an innovation lab within the Government of Canada, whose work explores how disruptive technologies may shape the economy and society. Players learn about developments in fields like nanotechnology, artificial intelligence, Internet of Things, biotechnology, and robotics; and are prompted to consider their industry, environment, and policy implications.
IMPACT is currently used by public servants within the Government of Canada to introduce and teach the discipline of strategic foresight. Now, through the launch of a Kickstarter campaign, Idea Couture is on a mission to make it available to anyone who wants to get better at futures thinking.
Robert Bolton, Head of Foresight Studio at Idea Couture, says, “When people play IMPACT, they practice the creative and critical thinking skills that foresight strategists like us use in our work with Fortune 500 companies and governments. We want to make those skills broadly accessible, so a more diverse population of citizens is empowered to participate in determining the shape of the future.”
I’m glad to see this game as it seems designed to raise awareness about science and future applications. It’s especially good to see the Canadian government and its policy makers using these tools. However, after watching the video, it seems that this game is not for everybody. You may have noticed the players are aged roughly 20 to 40 (at the most). What about those of us who don’t fit into the demographic (employed 20- to 40-year-olds) shown in the video? Plus, I have a strong suspicion that this game is oriented to urbanites in the Canadian south.
If the game is intended to have a broader appeal than what is seen in the video, Idea Couture needs to do a better job of telling the story.
These ‘robomussels’ are not voting but they are being used to monitor mussel bed habitats according to an Oct. 17, 2016 news item on ScienceDaily,
Tiny robots have been helping researchers study how climate change affects biodiversity. Developed by Northeastern University scientist Brian Helmuth, the “robomussels” have the shape, size, and color of actual mussels, with miniature built-in sensors that track temperatures inside the mussel beds.
Caption: This is a robomussel, seen among living mussels and other sea creatures. Credit: Allison Matzelle
For the past 18 years, every 10 to 15 minutes, Helmuth and a global research team of 48 scientists have used robomussels to track internal body temperature, which is determined by the temperature of the surrounding air or water and the amount of solar radiation the devices absorb. They place the robots inside mussel beds in oceans around the globe and record temperatures. The researchers have built a database of nearly two decades’ worth of data, enabling scientists to pinpoint areas of unusual warming, intervene to help curb damage to vital marine ecosystems, and develop strategies that could prevent extinction of certain species.
Housed at Northeastern’s Marine Science Center in Nahant, Massachusetts, this largest-ever database is not only a remarkable way to track the effects of climate change; the findings can also reveal emerging hotspots so policymakers and scientists can step in and relieve stressors such as erosion and water acidification before it’s too late.
“They look exactly like mussels but they have little green blinking lights in them,” says Helmuth. “You basically pluck out a mussel and then glue the device to the rock right inside the mussel bed. They enable us to link our field observations with the physiological impact of global climate change on these ecologically and economically important animals.”
For ecological forecasters such as Helmuth, mussels act as a barometer of climate change. That’s because they rely on external sources of heat such as air temperature and sun exposure for their body heat and thrive, or not, depending on those conditions. Using fieldwork along with mathematical and computational models, Helmuth forecasts the patterns of growth, reproduction, and survival of mussels in intertidal zones.
Over the years, he and his colleagues have found surprises: “Our expectations of where to look for the effects of climate change in nature are more complex than anticipated,” says Helmuth. For example, in an earlier paper in the journal Science, his team found that hotspots existed not only at the southern end of the species’ distribution, in this case, southern California; they also existed at sites up north, in Oregon and Washington state.
“These datasets tell us when and where to look for the effects of climate change,” he says. “Without them we could miss early warning signs of trouble.”
The robomussels’ near-continuous measurements serve as an early warning system. “If we start to see sites where the animals are regularly getting to temperatures that are right below what kills them, we know that any slight increase is likely to send them over the edge, and we can act,” says Helmuth.
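The early-warning logic Helmuth describes, flagging sites where readings regularly land just below the lethal limit, amounts to a simple threshold-and-frequency check. A minimal sketch, with an assumed lethal temperature, margin, and invented site data (the study’s actual values would differ):

```python
# Sketch of the early-warning check described above: flag sites where
# a sizable fraction of internal-temperature readings fall right below
# the lethal limit. Thresholds and readings are illustrative only.
LETHAL_TEMP_C = 38.0   # assumed lethal internal temperature
MARGIN_C = 2.0         # "right below what kills them"

def at_risk(site_readings, frequency=0.25):
    """A site is flagged when at least `frequency` of its readings
    fall within MARGIN_C of the lethal limit."""
    near_lethal = [t for t in site_readings if t >= LETHAL_TEMP_C - MARGIN_C]
    return len(near_lethal) / len(site_readings) >= frequency

sites = {
    "oregon_site": [34.1, 36.5, 37.2, 35.8],      # two readings near the limit
    "california_site": [30.2, 31.0, 29.8, 30.5],  # comfortably cool
}
flagged = [name for name, temps in sites.items() if at_risk(temps)]
print(flagged)  # ['oregon_site']
```

With near-continuous measurements every 10 to 15 minutes, a check like this is what lets researchers act before a slight warming trend pushes a bed over the edge.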
It’s not only the mussels that may be pulled back from the brink. The advance notice could inform everything from maintaining the biodiversity of coastal systems to determining the best–and worst–places to locate mussel farms.
“Losing mussel beds is essentially like clearing a forest,” says Helmuth. “If they go, everything that’s living in them will go. They are a major food supply for many species, including lobsters and crabs. They also function as filters along near-shore waters, clearing huge amounts of particulates. So losing them can affect everything from the growth of species we care about because we want to eat them to water clarity to biodiversity of all the tiny animals that live on the insides of the beds.”
Scientists from the Moscow Institute of Physics and Technology (MIPT)’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases, together with the Inria research center in Grenoble, France, have developed a software package called Knodle to determine an atom’s hybridization, bond orders, and functional group annotation in molecules. The program streamlines one of the stages of developing new drugs.
Imagine that you were to develop a new drug. Designing a drug with predetermined properties is called drug design. Once a drug has entered the human body, it needs to act on the cause of a disease, which on a molecular level is a malfunction of certain proteins and their encoding genes. In drug design these are called targets. If a drug is antiviral, it must somehow prevent the incorporation of viral DNA into human DNA; in this case the target is a viral protein. The structure of the incorporating protein is known, and we even know which area is the most important: the active site. If we insert a molecular “plug,” the viral protein will not be able to incorporate itself into the human genome and the virus will die. It boils down to this: find the “plug” and you have your drug.
But how can we find the molecules required? Researchers use enormous databases of substances for this. There are special programs capable of finding a needle in a haystack; they use quantum chemistry approximations to predict the position and strength of attraction between a molecular “plug” and a protein. However, databases only store the shape of a substance; information about atom and bond states is also needed for an accurate prediction. Determining these states is what Knodle does. With the help of the new technology, the search area can be reduced from hundreds of thousands of candidates to just a hundred. These one hundred can then be tested to find drugs such as Raltegravir, which has been actively used for HIV prevention since 2011.
From school science lessons, everyone is used to seeing organic substances drawn as letters joined by sticks (the structural formula), knowing that in actual fact there are no sticks: every stick is a bond between electrons that obeys the laws of quantum chemistry. For a simple molecule, like the one in the diagram [diagram follows], an experienced chemist intuitively knows the hybridization of every atom (the number of neighboring atoms it is connected to) and, after a few hours with reference books, can re-establish all the bonds. They can do this because they have seen hundreds and hundreds of similar substances and know that if oxygen is “sticking out like this,” it almost certainly has a double bond. In their research, Maria Kadukova, a MIPT student, and Sergei Grudinin, a researcher at the Inria research center in Grenoble, France, decided to pass this intuition on to a computer by using machine learning.
Compare “A solid hollow object with a handle, an opening at the top, and an elongation at the side with another opening at its end” and “A vessel for the preparation of tea.” Both describe a teapot rather well, but the latter is simpler and more believable. The same is true for machine learning: the best algorithm is the simplest one that works. This is why the researchers chose a nonlinear support vector machine (SVM), a method that has proven itself in recognizing handwritten text and images. It takes the positions of neighboring atoms as input and produces the hybridization as output.
Good learning needs a lot of examples, and the scientists provided them using 7,605 substances with known structures and atom states. “This is the key advantage of the program we have developed: learning from a larger database gives better predictions. Knodle is now one step ahead of similar programs: it has a margin of error of 3.9%, while for the closest competitor this figure is 4.7%,” explains Maria Kadukova. And that is not the only benefit. The software package can easily be modified for a specific problem. For example, Knodle does not currently work with substances containing metals, because such substances are rather rare. But if it turns out that a drug for Alzheimer’s is much more effective when it contains a metal, the only thing needed to adapt the program is a database of metal-containing substances. We are now left to wonder what new drug will be found to treat a previously incurable disease.
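To give a feel for the kind of learning problem Knodle solves, here is a toy nonlinear SVM that maps simple geometric features of a carbon atom to a hybridization label, using scikit-learn. The features (neighbor count and mean bond length), the tiny synthetic training set, and the model settings are all invented for illustration; Knodle’s real features, training data, and implementation differ:

```python
# Toy illustration of Knodle's approach: a nonlinear SVM predicting an
# atom's hybridization from geometric features. Features and data here
# are invented; real carbon-carbon bond lengths roughly motivate them
# (single ~1.54 A, aromatic ~1.40 A, triple ~1.20 A).
from sklearn.svm import SVC

# synthetic training set: (number_of_neighbors, mean_neighbor_distance_A)
X = [
    (4, 1.54), (4, 1.52), (4, 1.55),  # sp3 carbons (single bonds)
    (3, 1.40), (3, 1.39), (3, 1.42),  # sp2 carbons (aromatic/double)
    (2, 1.21), (2, 1.20), (2, 1.22),  # sp carbons (triple bonds)
]
y = ["sp3"] * 3 + ["sp2"] * 3 + ["sp"] * 3

model = SVC(kernel="rbf", gamma="scale")  # nonlinear decision boundary
model.fit(X, y)

# classify two unseen atoms by their local geometry
print(model.predict([(4, 1.53), (2, 1.21)]))
```

The real program learned from 7,605 substances rather than nine hand-made points, which is exactly why its error rate could drop below that of its competitors.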
Caption: Knodle, developed by scientists from MIPT’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with the Inria research center in Grenoble, France, determines an atom’s hybridization, bond orders, and functional group annotation in molecules. Credit: MIPT Press Office
The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),
Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.
Westworld was the first theatrical feature directed by Michael Crichton. It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view. The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.
The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016, and, as part of the publicity effort, the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,
As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.
“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …
Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.
Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?
That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),
… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.
The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …
“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.
The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),
“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.
“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.
“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.
Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make for a more effective robotic tutor, for instance: “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”
Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.
Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,
… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.
Captain delves further into a thorny issue,
“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”
While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”
AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …
As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.
For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.
Points to anyone who recognized Jonah (Jonathan) Nolan as a producer of the US television series Person of Interest, a programme based on the concept of a supercomputer with intelligence, personality, and the ability to monitor the population continuously.