Patients who are bedridden or unable to move their legs are often at risk of developing Deep Vein Thrombosis (DVT), a potentially life-threatening condition caused by blood clots forming in the deep veins of the legs. A team of researchers from the National University of Singapore’s (NUS) Yong Loo Lin School of Medicine and Faculty of Engineering has invented a novel sock that can help prevent DVT and improve survival rates of patients.
Equipped with soft actuators that mimic the tentacle movements of corals, the robotic sock emulates the natural muscle contractions of the wearer’s lower leg, thereby promoting blood circulation throughout the wearer’s body. In addition, the novel device can potentially optimise therapy sessions and enable the patient’s lower leg movements to be monitored to improve therapy outcomes.
The invention was created by Assistant Professor Lim Jeong Hoon from the NUS Department of Medicine, together with Assistant Professor Raye Yeow Chen Hua and first-year PhD candidate Mr Low Fanzhe of the NUS Department of Biomedical Engineering.
The news release goes on to contrast this new technique with the pharmacological and other methods currently in use,
Current approaches to prevent DVT include pharmacological methods which involve using anti-coagulation drugs to prevent blood from clotting, and mechanical methods that involve the use of compressive stimulations to assist blood flow.
While pharmacological methods are effective in preventing DVT, they carry a serious side effect: a higher risk of excessive bleeding, which can lead to death, especially for patients who have suffered a hemorrhagic stroke. On the other hand, current mechanical methods such as the use of compression stockings have not demonstrated significant reduction in DVT risk.
In the course of exploring an effective solution that can prevent DVT, Asst Prof Lim, who is a rehabilitation clinician, was inspired by the natural role of the human ankle muscles in facilitating venous blood flow back to the heart. He worked with Asst Prof Yeow and Mr Low to derive a method that can perform this function for patients who are bedridden or unable to move their legs.
The team turned to nature for inspiration to develop a device that reproduces human ankle movements. They found a model in the elegant structural design of the coral tentacle, which can extend to grab food and contract to bring the food closer for consumption, and invented soft actuators that mimic this “push and pull” mechanism.
By integrating the actuators with a sock and the use of a programmable pneumatic pump-valve control system, the invention is able to create the desired robot-assisted ankle joint motions to facilitate blood flow in the leg.
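As a rough illustration of what such a pump-valve control cycle might look like, here is a minimal sketch; the class names, phase structure (inflate, hold, vent) and timings below are my own assumptions, not details from the news release:

```python
# Hypothetical sketch of one pneumatic actuation cycle for the robotic sock.
# Pressurising the soft actuators flexes the ankle (the coral tentacle's
# "pull"); venting lets it return (the "push"). All names/timings invented.

from dataclasses import dataclass

@dataclass
class ValveCommand:
    t: float          # seconds from cycle start
    pump_on: bool     # pressurise the actuators
    vent_open: bool   # release pressure

def actuation_cycle(inflate_s: float = 2.0, hold_s: float = 1.0,
                    vent_s: float = 2.0) -> list[ValveCommand]:
    """Return one inflate-hold-vent cycle as a timed valve schedule."""
    return [
        ValveCommand(0.0, pump_on=True, vent_open=False),                   # inflate: flex ankle
        ValveCommand(inflate_s, pump_on=False, vent_open=False),            # hold pressure
        ValveCommand(inflate_s + hold_s, pump_on=False, vent_open=True),    # vent: ankle returns
        ValveCommand(inflate_s + hold_s + vent_s, pump_on=False,
                     vent_open=False),                                      # rest until next cycle
    ]

schedule = actuation_cycle()
for cmd in schedule:
    state = "pump on" if cmd.pump_on else ("venting" if cmd.vent_open else "holding")
    print(f"t={cmd.t:.1f}s: {state}")
```

A real controller would, of course, drive physical valves on this schedule and repeat the cycle continuously while the sock is worn.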
Explaining the choice of materials, Mr Low said, “We chose to use only soft components and actuators to increase patient comfort during use, hence minimising the risk of injury from excessive mechanical forces. Compression stockings are currently used in the hospital wards, so it makes sense to use a similar sock-based approach to provide comfort and minimise bulk on the ankle and foot.”
The sock complements conventional ankle therapy exercises that therapists perform on patients, thereby optimising therapy time and productivity. In addition, the sock can be worn for prolonged durations to provide robot-assisted therapy, on top of the therapist-assisted sessions. The sock is also embedded with sensors to track the ankle joint angle, allowing the patient’s ankle motion to be monitored for better treatment.
Said Asst Prof Yeow, “Given its compact size, modular design and ease of use, the soft robotic sock can be adopted in hospital wards and rehabilitation centres for on-bed applications to prevent DVT among stroke patients or even at home for bedridden patients. By reducing the risk of DVT using this device, we hope to improve survival rates of these patients.”
The team does not seem to have published any papers about this work, although there are plans for clinical trials and commercialisation (from the news release),
To further investigate the effectiveness of the robotic sock, Asst Prof Lim, Asst Prof Yeow and Mr Low will be conducting pilot clinical trials with about 30 patients at the National University Hospital over six months, starting March 2015. They hope that the pilot clinical trials will help them to obtain patient and clinical feedback to further improve the design and capabilities of the device.
The team intends to conduct trials across different local hospitals for better evaluation, and they also hope to commercialise the device in future.
The researchers have provided an image of the sock on a ‘patient’,
Caption: NUS researchers (from right to left) Assistant Professor Raye Yeow, Mr Low Fanzhe and Dr Liu Yuchun demonstrating the novel bio-inspired robotic sock. Credit: National University of Singapore
Eve, an artificially intelligent ‘robot scientist’, could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.
Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.
In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to autonomously discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.
“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.
“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”
The press release goes on to describe how ‘Eve’ works,
Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.
Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.
To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.
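The loop described above, random subset screening, repeated confirmation of hits, then model-guided ranking of the rest of the library, can be sketched in miniature. Everything here is an invented stand-in: the compound "features", the noisy assay, and the similarity scoring are illustrative placeholders for Eve's actual chemistry and machine learning:

```python
# Toy sketch of Eve's intelligent screening loop (all data invented).
import random

random.seed(0)

# Each compound is an (id, feature-vector) pair; "activity" secretly
# depends on the first feature, standing in for real chemistry.
library = [(i, [random.random() for _ in range(3)]) for i in range(500)]

def assay(features):
    """Noisy first-pass assay: active compounds occasionally fail."""
    return features[0] > 0.8 and random.random() > 0.1

# 1. Screen a random subset of the library (cheaper than brute force).
subset = random.sample(library, 100)
hits = [c for c in subset if assay(c[1])]

# 2. Re-test each hit three times to weed out false positives.
confirmed = [c for c in hits if sum(assay(c[1]) for _ in range(3)) >= 2]

# 3. Rank untested compounds by similarity to confirmed hits --
#    a crude stand-in for Eve's statistical / machine-learning step.
def score(features):
    if not confirmed:
        return 0.0
    return max(-sum((a - b) ** 2 for a, b in zip(features, h[1]))
               for h in confirmed)

untested = [c for c in library if c not in subset]
ranked = sorted(untested, key=lambda c: score(c[1]), reverse=True)
print(f"{len(confirmed)} confirmed hits; top candidate: compound {ranked[0][0]}")
```

The point of step 3 is the one the press release makes: unlike brute-force mass screening, each round of results informs which compounds get tested next.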
Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”
To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are currently routinely used to protect against malaria, and are given to over a million children; however, the emergence of strains of parasites resistant to existing drugs means that the search for new drugs is becoming increasingly more urgent.
“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor Oliver. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”
2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.
It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew Research report and the responses to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,
The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.
We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …
I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,
As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.
The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.
Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,
A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:
Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?
Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.
Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …
Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.
How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]
Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,
Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.
The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …
But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.
The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.
One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.
“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”
Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,
… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.
Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.
The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.
The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.
More recently, Manish Singh has written about some of his concerns as a writer who could be displaced in a Jan. 31, 2015 (?) article for Beta News (Note: A link has been removed),
Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, nor did its existing headcount start writing more; the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.
The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).
There has been a growth in the number of such software tools. Narrative Science, a Chicago-based company, offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.
Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,
This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists. C. G. P. Grey has a website here and is profiled here on Wikipedia.
One final bit: there’s a robot art critic, which some are suggesting is superior to human art critics, in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),
… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.
The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …
Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.
Having just attended a talk on Robotics and Rehabilitation which included a segment on Robo Ethics, I found it fascinating to read about an art project in which an autonomous bot (robot) was set loose on the darknet to purchase goods (not all of them illegal), with the proceeds of the darknet activity displayed as part of an art exhibition. Things got more interesting when the exhibit attracted legal scrutiny in the UK and occasioned legal action in Switzerland.
… some London-based Swiss artists, !Mediengruppe Bitnik [(Carmen Weisskopf and Domagoj Smoljo)], presented an exhibition in Zurich of The Darknet: From Memes to Onionland. Specifically, they had programmed a bot with some Bitcoin to randomly buy $100 worth of things each week via a darknet market, like Silk Road (in this case, it was actually Agora). The artists’ focus was more about the nature of dark markets, and whether or not it makes sense to make them illegal:
The pair see parallels between copyright law and drug laws: “You can enforce laws, but what does that mean for society? Trading is something people have always done without regulation, but today it is regulated,” says Weisskopf.
“There have always been darkmarkets in cities, online or offline. These questions need to be explored. But what systems do we have to explore them in? Post Snowden, space for free-thinking online has become limited, and offline is not a lot better.”
Interestingly the bot got excellent service as Mike Power wrote in his Dec. 5, 2014 review of the show. Power also highlights some of the legal, ethical, and moral implications,
The gallery is next door to a police station, but the artists say they are not afraid of legal repercussions of their bot buying illegal goods.
“We are the legal owner of the drugs [the bot purchased 10 ecstasy pills along with a baseball cap and a pair of sneakers, among other items] – we are responsible for everything the bot does, as we executed the code,” says Smoljo. “But our lawyer and the Swiss constitution says art in the public interest is allowed to be free.”
The project also aims to explore the ways that trust is built between anonymous participants in a commercial transaction for possibly illegal goods. Perhaps most surprisingly, not one of the 12 deals the robot has made has ended in a scam.
“The markets copied procedures from Amazon and eBay – their rating and feedback system is so interesting,” adds Smoljo. “With such simple tools you can gain trust. The service level was impressive – we had 12 items and everything arrived.”
“There has been no scam, no rip-off, nothing,” says Weiskopff. “One guy could not deliver a handbag the bot ordered, but he then returned the bitcoins to us.”
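As described, the bot's weekly routine amounts to a random choice within a fixed budget. A toy sketch follows; the catalogue and prices are mock data I've made up, whereas the real bot queried listings on the Agora marketplace and paid in Bitcoin:

```python
# Illustrative-only sketch of the Random Darknet Shopper's weekly routine.
# The catalogue below is invented; it stands in for live marketplace listings.
import random

random.seed(42)

catalogue = [
    {"item": "baseball cap", "price_usd": 15},
    {"item": "e-book", "price_usd": 8},
    {"item": "sneakers", "price_usd": 70},
    {"item": "cigarettes", "price_usd": 45},
]

WEEKLY_BUDGET_USD = 100

def weekly_purchase(listings):
    """Pick one random listing that fits within the weekly budget."""
    affordable = [l for l in listings if l["price_usd"] <= WEEKLY_BUDGET_USD]
    return random.choice(affordable) if affordable else None

order = weekly_purchase(catalogue)
print(f"This week the bot orders: {order['item']} (${order['price_usd']})")
```

The randomness is the artistically (and legally) interesting part: the artists wrote and executed the selection code, but did not choose any individual item.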
The exhibition, which ran from Oct. 18, 2014 to Jan. 11, 2015, enjoyed an uninterrupted run, but there were concerns in the UK (from the Power article),
A spokesman for the National Crime Agency, which incorporates the National Cyber Crime Unit, was less philosophical, acknowledging that the question of criminal culpability in the case of a randomised software agent making a purchase of an illegal drug was “very unusual”.
“If the purchase is made in Switzerland, then it’s of course potentially subject to Swiss law, on which we couldn’t comment,” said the NCA. “In the UK, it’s obviously illegal to purchase a prohibited drug (such as ecstasy), but any criminal liability would need to be assessed on a case-by-case basis.”
Masnick describes the followup,
Apparently, that [case-by-case] assessment has concluded in this case, because right after the exhibit closed in Switzerland, law enforcement showed up to seize stuff …
«Can a robot, or a piece of software, be jailed if it commits a crime? Where does legal culpability lie if code is criminal by design or default? What if a robot buys drugs, weapons, or hacking equipment and has them sent to you, and police intercept the package?» These are some of the questions Mike Power asked when he reviewed the work «Random Darknet Shopper» in The Guardian. The work was part of the exhibition «The Darknet – From Memes to Onionland. An Exploration» in the Kunst Halle St. Gallen, which closed on Sunday, January 11, 2015. For the duration of the exhibition, !Mediengruppe Bitnik sent a software bot on a shopping spree in the Deepweb. Random Darknet Shopper had a budget of $100 in Bitcoins weekly, which it spent on a randomly chosen item from the deepweb shop Agora. The work and the exhibition received wide attention from the public and the press. The exhibition was well-attended and was discussed in a wide range of local and international press from Saiten to Vice, Arte, Libération, CNN, Forbes. «There’s just one problem», The Washington Post wrote in January about the work, «recently, it bought 10 ecstasy pills».
What does it mean for a society, when there are robots which act autonomously? Who is liable, when a robot breaks the law on its own initiative? These were some of the main questions the work Random Darknet Shopper posed. Global questions, which will now be negotiated locally.
On the morning of January 12, the day after the three-month exhibition closed, the public prosecutor’s office of St. Gallen seized and sealed our work. It seems the purpose of the confiscation is to prevent endangerment of third parties by destroying the drugs exhibited. This is what we know at present. We believe that the confiscation is an unjustified intervention into the freedom of art. We’d also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.
This project brings to mind Isaac Asimov’s three laws of robotics and a question (from the Wikipedia entry; Note: Links have been removed),
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Here’s my question: how do you programme a robot to know what would injure a human being? For example, if a human ingests an ecstasy pill the bot purchased, would that be covered by the first law?
Getting back to the robot ethics talk I recently attended, it was given by Ajung Moon, a Ph.D. student at the University of British Columbia (Vancouver, Canada) studying human-robot interaction and roboethics, with a background in mechatronics engineering and a sprinkle of philosophy. She has a blog, Roboethic info DataBase, where you can read more on robots and ethics.
I strongly recommend reading both Masnick’s post (he positions this action in a larger context) and Power’s article (more details and images from the exhibit).
There’s some exciting news from Sweden’s Chalmers University of Technology about prosthetics. From an Oct. 8, 2014 news item on ScienceDaily,
For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.
In January 2013 a Swedish arm amputee was the first person in the world to receive a prosthesis with a direct connection to bone, nerves and muscles. …
“Going beyond the lab to allow the patient to face real-world challenges is the main contribution of this work,” says Max Ortiz Catalan, research scientist at Chalmers University of Technology and leading author of the publication.
“We have used osseointegration to create a long-term stable fusion between man and machine, where we have integrated them at different levels. The artificial arm is directly attached to the skeleton, thus providing mechanical stability. Then the human’s biological control system, that is nerves and muscles, is also interfaced to the machine’s control system via neuromuscular electrodes. This creates an intimate union between the body and the machine; between biology and mechatronics.”
The direct skeletal attachment is created by what is known as osseointegration, a technology in limb prostheses pioneered by associate professor Rickard Brånemark and his colleagues at Sahlgrenska University Hospital. Rickard Brånemark led the surgical implantation and collaborated closely with Max Ortiz Catalan and Professor Bo Håkansson at Chalmers University of Technology on this project.
The patient’s arm was amputated over ten years ago. Before the surgery, his prosthesis was controlled via electrodes placed over the skin. Robotic prostheses can be very advanced, but such a control system makes them unreliable and limits their functionality, and patients commonly reject them as a result.
Now, the patient has been given a control system that is directly connected to his own. He has a physically challenging job as a truck driver in northern Sweden, and since the surgery he has found that he can cope with all the situations he faces: everything from clamping his trailer load and operating machinery, to unpacking eggs and tying his children’s skates, regardless of the environmental conditions (read more about the benefits of the new technology below).
The patient is also one of the first in the world to take part in an effort to achieve long-term sensation via the prosthesis. Because the implant is a bidirectional interface, it can also be used to send signals in the opposite direction – from the prosthetic arm to the brain. This is the researchers’ next step, to clinically implement their findings on sensory feedback.
“Reliable communication between the prosthesis and the body has been the missing link for the clinical implementation of neural control and sensory feedback, and this is now in place,” says Max Ortiz Catalan. “So far we have shown that the patient has a long-term stable ability to perceive touch in different locations in the missing hand. Intuitive sensory feedback and control are crucial for interacting with the environment, for example to reliably hold an object despite disturbances or uncertainty. Today, no patient walks around with a prosthesis that provides such information, but we are working towards changing that in the very short term.”
The researchers plan to treat more patients with the novel technology later this year.
“We see this technology as an important step towards more natural control of artificial limbs,” says Max Ortiz Catalan. “It is the missing link for allowing sophisticated neural interfaces to control sophisticated prostheses. So far, this has only been possible in short experiments within controlled environments.”
The researchers have provided an image of the patient using his prosthetic arm in the context of his work as a truck driver,
[downloaded from http://www.chalmers.se/en/news/Pages/Mind-controlled-prosthetic-arms-that-work-in-daily-life-are-now-a-reality.aspx]
The news release offers some additional information about the device,
The new technology is based on the OPRA treatment (osseointegrated prosthesis for the rehabilitation of amputees), where a titanium implant is surgically inserted into the bone and becomes fixated to it by a process known as osseointegration (Osseo = bone). A percutaneous component (abutment) is then attached to the titanium implant to serve as a metallic bone extension, where the prosthesis is then fixated. Electrodes are implanted in nerves and muscles as the interfaces to the biological control system. These electrodes record signals which are transmitted via the osseointegrated implant to the prostheses, where the signals are finally decoded and translated into motions.
There are also some videos of the patient demonstrating various aspects of this device available here (keep scrolling) along with more details about what makes this device so special.
Here’s a link to and a citation for the research paper,
This article is behind a paywall and it appears to be part of a special issue or a special section in an issue, so keep scrolling down the linked-to page to find more articles on this topic.
I have written about similar research in the past. Notably, there’s a July 19, 2011 post about work on Intraosseous Transcutaneous Amputation Prosthesis (ITAP) and a May 17, 2012 post featuring a video of a woman reaching with a robotic arm for a cup of coffee using her thoughts alone to control the arm.
Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.
The news release spells out why and how researchers have created Robo Brain,
To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.
This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]
“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.
Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.
Saxena described the project at the 2014 Robotics: Science and Systems Conference, held July 12-16 in Berkeley.
If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.
The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.
A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.
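The graph-with-probabilities idea described above can be sketched in a few lines of code. This is a toy illustration only: the class names, objects, relations, and probability values below are invented for the example and are not Robo Brain’s actual schema or data.

```python
class Node:
    def __init__(self, name, kind, probability):
        self.name = name                 # e.g. "coffee_mug"
        self.kind = kind                 # "object", "action" or "image_part"
        self.probability = probability   # confidence that this label is correct

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = {}   # node name -> set of related node names

    def add_node(self, node):
        self.nodes[node.name] = node
        self.edges.setdefault(node.name, set())

    def relate(self, a, b):
        # undirected edge, e.g. ("coffee_mug", "pourable")
        self.edges[a].add(b)
        self.edges[b].add(a)

    def query(self, name, min_probability=0.5):
        """Return neighbours of `name` whose confidence clears the threshold,
        mimicking a robot matching its own chain within probability limits."""
        return [self.nodes[n] for n in self.edges.get(name, ())
                if self.nodes[n].probability >= min_probability]

kg = KnowledgeGraph()
kg.add_node(Node("coffee_mug", "object", 0.9))
kg.add_node(Node("pourable", "action", 0.8))
kg.add_node(Node("carry_upright_when_full", "action", 0.7))
kg.add_node(Node("graspable_by_handle", "action", 0.4))
kg.relate("coffee_mug", "pourable")
kg.relate("coffee_mug", "carry_upright_when_full")
kg.relate("coffee_mug", "graspable_by_handle")

facts = kg.query("coffee_mug", min_probability=0.5)
print(sorted(f.name for f in facts))  # ['carry_upright_when_full', 'pourable']
```

The point of the threshold in `query` is the “how much you can vary it and still be correct” part: a low-confidence fact (here, the 0.4 handle-grasping node) is simply left out of the answer.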
“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.
Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
The “robot-friendly format” for information in the European project (RoboEarth) meant machine language but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.
This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),
The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.
Apparently the big picture could involve search and rescue applications; meanwhile, the smaller picture shows attempts to create a cyborg moth (mothbot). From an Aug. 20, 2014 news item on ScienceDaily,
North Carolina State University [US] researchers have developed methods for electronically manipulating the flight muscles of moths and for monitoring the electrical signals moths use to control those muscles. The work opens the door to the development of remotely-controlled moths, or “biobots,” for use in emergency response.
“In the big picture, we want to know whether we can control the movement of moths for use in applications such as search and rescue operations,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “The idea would be to attach sensors to moths in order to create a flexible, aerial sensor network that can identify survivors or public health hazards in the wake of a disaster.”
The paper presents a technique Bozkurt developed for attaching electrodes to a moth during its pupal stage, when the caterpillar is in a cocoon undergoing metamorphosis into its winged adult stage. This aspect of the work was done in conjunction with Dr. Amit Lal of Cornell University.
But the new findings in the paper involve methods developed by Bozkurt’s research team for improving our understanding of precisely how a moth coordinates its muscles during flight.
By attaching electrodes to the muscle groups responsible for a moth’s flight, Bozkurt’s team is able to monitor electromyographic signals – the electric signals the moth uses during flight to tell those muscles what to do.
The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets. A short video describing the work is available at http://www.youtube.com/watch?v=jR325RHPK8o.
“By watching how the moth uses its wings to steer while in flight, and matching those movements with their corresponding electromyographic signals, we’re getting a much better understanding of how moths maneuver through the air,” Bozkurt says.
“We’re optimistic that this information will help us develop technologies to remotely control the movements of moths in flight,” Bozkurt says. “That’s essential to the overarching goal of creating biobots that can be part of a cyberphysical sensor network.”
But Bozkurt stresses that there’s a lot of work yet to be done to make moth biobots a viable tool.
“We now have a platform for collecting data about flight coordination,” Bozkurt says. “Next steps include developing an automated system to explore and fine-tune parameters for controlling moth flight, further miniaturizing the technology, and testing the technology in free-flying moths.”
Here’s an image illustrating the researchers’ work,
Caption: The moth is connected to a wireless platform that collects the electromyographic data as the moth moves its wings. To give the moth freedom to turn left and right, the entire platform levitates, suspended in mid-air by electromagnets. Credit: Alper Bozkurt
I was expecting to find this research had been funded by the US military but that doesn’t seem to be the case according to the university news release,
… The research was supported by the National Science Foundation, under grant CNS-1239243. The researchers also used transmitters and receivers developed by Triangle Biosystems International and thank them for their contribution to the work.
For the curious, here’s a link to and a citation for the text and the full video,
Hummingbird-inspired spy cameras have come a long way since the research featured in this Aug. 12, 2011 posting which includes a video of a robot camera designed to look like a hummingbird and mimic some of its extraordinary flying abilities. These days (2014) the emphasis appears to be on mimicking the abilities to a finer degree if Margaret Munro’s July 29, 2014 article for Canada.com is to be believed,
Tiny, high-end military drones are catching up with one of nature’s great engineering masterpieces.
A side-by-side comparison has found a “remarkably similar” aerodynamic performance between hummingbirds and the Black Hornet, the most sophisticated nano spycam yet.
“(The) Average Joe hummingbird” is about on par with the tiny helicopter that is so small it can fit in a pocket, says engineering professor David Lentink, at Stanford University. He led a team from Canada [University of British Columbia], the U.S. and the Netherlands [Wageningen University and Eindhoven University of Technology] that compared the birds and the machine for a study released Tuesday [July 29, 2014].
For a visual comparison with the latest nano spycam (Black Hornet), here’s the ‘hummingbird’ featured in the 2011 posting,
The Nano Hummingbird, a drone from AeroVironment designed for the US Pentagon, would fit into any or all of those categories.
And, here’s this 2013 image of a Black Hornet Nano Helicopter inspired by hummingbirds,
Black Hornet Nano Helicopter UAV. Photo: Richard Watt, http://www.defenceimagery.mod.uk/fotoweb/fwbin/download.dll/45153802.jpg Courtesy: Wikipedia
More than 42 million years of natural selection have turned hummingbirds into some of the world’s most energetically efficient flyers, particularly when it comes to hovering in place.
Humans, however, are gaining ground quickly. A new study led by David Lentink, an assistant professor of mechanical engineering at Stanford, reveals that the spinning blades of micro-helicopters are about as efficient at hovering as the average hummingbird.
The experiment involved spinning hummingbird wings – sourced from a pre-existing museum collection – of 12 different species on an apparatus designed to test the aerodynamics of helicopter blades. The researchers used cameras to visualize airflow around the wings, and sensitive load cells to measure the drag and the lift force they exerted, at different speeds and angles.
Lentink and his colleagues then replicated the experiment using the blades from a ProxDynamics Black Hornet autonomous microhelicopter. The Black Hornet is the most sophisticated microcopter available – the United Kingdom’s army uses it in Afghanistan – and is itself about the size of a hummingbird.
Even spinning like a helicopter, rather than flapping, the hummingbird wings excelled: If hummingbirds were able to spin their wings to hover, it would cost them roughly half as much energy as flapping. The microcopter’s wings kept pace with the middle-of-the-pack hummingbird wings, but the topflight wings – those of Anna’s hummingbird, a species common throughout the West Coast – were still about 27 percent more efficient than engineered blades.
Hummingbirds acing the test didn’t particularly surprise Lentink – previous studies had indicated hummingbirds were incredibly efficient – but he was impressed with the helicopter.
“The technology is at the level of an average Joe hummingbird,” Lentink said. “A helicopter is really the most efficient hovering device that we can build. The best hummingbirds are still better, but I think it’s amazing that we’re getting closer. It’s not easy to match their performance, but if we build better wings with better shapes, we might approximate hummingbirds.”
Based on the measurements of Anna’s hummingbirds, Lentink said there is potential to improve microcopter rotor power by up to 27 percent.
The high-fidelity experiment also provided an opportunity to refine previous rough estimates of muscle power. Lentink’s team learned that hummingbirds’ muscles produce a surprising 130 watts of power per kilogram; the average for other birds, and across most vertebrates, is roughly 100 watts/kg.
Although the current study revealed several details of how a hummingbird hovers in one place, the birds still hold many secrets. For instance, Lentink said, we don’t know how hummingbirds maintain their flight in a strong gust, how they navigate through branches and other clutter, or how they change direction so quickly during aerial “dogfights.”
He also thinks great strides could be made by studying wing aspect ratios, the ratio of wing length to wing width. The aspect ratios of all the hummingbirds’ wings remarkably converged around 3.9. The aspect ratios of most wings used in aviation measure much higher; the Black Hornet’s aspect ratio was 4.7.
“I want to understand if aspect ratio is special, and whether the amount of variation has an effect on performance,” Lentink said. Understanding and replicating these abilities and characteristics could be a boon for robotics and will be the focus of future experiments.
“Those are the things we don’t know right now, and they could be incredibly useful. But I don’t mind it, actually,” Lentink said. “I think it’s nice that there are still a few things about hummingbirds that we don’t know.”
Agreed, it’s nice to know there are still a few mysteries left. You can watch the ‘mysterious’ hummingbird in this video courtesy of the Rivers Ingersoll Lentink Lab at Stanford University,
High speed video of Anna’s hummingbird at Stanford Arizona Cactus Garden.
Here’s a link to and a citation for the paper, H/T to Nancy Owano’s article on phys.org for alerting me to this story.
Despite Munro’s reference to the Black Hornet as a ‘nano’ spycam, the ‘microhelicopter’ description in the news release places the device at the microscale (1/1,000,000 of a metre). Still, I don’t understand what makes it microscale since it’s visible to the naked eye. In any case, it is small.
A July 14, 2014 news item on ScienceDaily features MIT (Massachusetts Institute of Technology) robots that mimic mice and other biological constructs or, if you prefer, movie robots,
In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.
Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.
The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.
Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.
But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”
What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.
So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.
Compressible and self-healing
To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.
The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.
In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”
To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.
In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.
When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.
The wax coating could also be replaced by a stronger material, such as solder, she adds.
Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.
When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”
In an interview almost 10 years ago for an article I was writing for a digital publishing magazine, I had a conversation with a very technically oriented individual that went roughly this way,
Him: (enthused and excited) We’re developing algorithms that will let us automatically create brochures, written reports, that will always have the right data and can be instantly updated.
Him: (no reaction)
Me: (breaking long pause) You realize you’re talking to a writer, eh? You’ve just told me that at some point in the future nobody will need writers.
Him: (pause) No. (then with more certainty) No. You don’t understand. We’re making things better for you. In the future, you won’t need to do the boring stuff.
It seems the future is now and in the hands of a company known as Automated Insights. You can find this description at the base of one of the company’s news releases,
ABOUT AUTOMATED INSIGHTS, INC.
Automated Insights (Ai) transforms Big Data into written reports with the depth of analysis, personality and variability of a human writer. In 2014, Ai and its patented Wordsmith platform will produce over 1 billion personalized reports for clients like Yahoo!, The Associated Press, the NFL, and Edmunds.com. [emphasis mine] The Wordsmith platform uses artificial intelligence to dynamically spot patterns and trends in raw data and then describe those findings in plain English. Wordsmith authors insightful, personalized reports around individual user data at unprecedented scale and in real-time. Automated Insights also offers applications that run on its Wordsmith platform, including the recently launched Wordsmith for Marketing, which enables marketing agencies to automate reporting for clients. Learn more at http://automatedinsights.com.
In the wake of the June 30, 2014 deal with Associated Press, there has been a flurry of media interest especially from writers who seem to have largely concluded that the robots will do the boring stuff and free human writers to do creative, innovative work. A July 2, 2014 news item on FoxNews.com provides more details about the deal,
The Associated Press, the largest American-based news agency in the world, will now use story-writing software to produce U.S. corporate earnings stories.
In a recent blog post, AP Managing Editor Lou Ferrara explained that the software is capable of producing these stories, which are largely technical financial reports that range from 150 to 300 words, in “roughly the same time that it takes our reporters.” [emphasis mine]
AP staff members will initially edit the software-produced reports, but the agency hopes the process will soon be fully automated.
The Wordsmith software constructs narratives in plain English by using algorithms to analyze trends and patterns in a set of data and place them in an appropriate context depending on the nature of the story.
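Data-to-text generation of this kind can be pictured as mapping numeric trends onto pre-written phrasings. The sketch below is a hedged illustration only: the template wording, the company figures, and the thresholds are all invented for the example, and this is not Wordsmith’s actual pipeline.

```python
def describe_change(pct):
    # Map a numeric trend onto plain-English phrasing (invented thresholds).
    if pct >= 10:
        return "surged"
    if pct > 0:
        return "rose"
    if pct == 0:
        return "held steady"
    return "fell"

def earnings_sentence(company, quarter, revenue, prior_revenue):
    # Compute the year-over-year change, then slot the numbers and the
    # chosen verb into a fixed sentence template.
    pct = 100.0 * (revenue - prior_revenue) / prior_revenue
    verb = describe_change(pct)
    return (f"{company} reported {quarter} revenue of ${revenue:,.0f}, "
            f"which {verb} {abs(pct):.1f} percent from a year earlier.")

sentence = earnings_sentence("Acme Corp", "second-quarter", 1_250_000, 1_000_000)
print(sentence)
# Acme Corp reported second-quarter revenue of $1,250,000, which surged 25.0 percent from a year earlier.
```

The “appropriate context” part of a real system would come from many more such rules, but the core move is the same: the language work is done in advance, and the data merely selects among the pre-written options.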
Representatives for the Associated Press have assured anyone who fears robots are making journalists obsolete that Wordsmith will not be taking the jobs of staffers. “We are going to use our brains and time in more enterprising ways during earnings season,” Ferrara wrote in the blog post. “This is about using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.” [emphasis mine]
Russell Brandom’s July 11, 2014 article for The Verge provides more technical detail and context for this emerging field,
Last week, the Associated Press announced it would be automating its articles on quarterly earnings reports. Instead of 300 articles written by humans, the company’s new software will write 4,400 of them, each formatted for AP style, in mere seconds. It’s not the first time a company has tried out automatic writing: last year, a reporter at The LA Times wrote an automated earthquake-reporting program that combined prewritten sentences with automatic seismograph reports to report quakes just seconds after they happen. The natural language-generation company Narrative Science has been churning out automated sports reporting for years.
It appears that AP Managing Editor Lou Ferrara doesn’t know how long it takes to write 150 to 300 words (“roughly the same time that it takes our reporters”) or perhaps he wanted to ‘soften’ the news’s possible impact. Getting back to the technical aspects in Brandom’s article,
… So how do you make a robot that writes sentences?
In the case of AP style, a lot of the work has already been done. Every Associated Press article already comes with a clear, direct opening and a structure that spirals out from there. All the algorithm needs to do is code in the same reasoning a reporter might employ. Algorithms detect the most volatile or newsworthy shift in a given earnings report and slot that in as the lede. Circling outward, the program might sense that a certain topic has already been covered recently and decide it’s better to talk about something else. …
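“Detect the most volatile or newsworthy shift and slot that in as the lede” can be sketched as scoring each metric by how far it moved and leading with the biggest move. Again, a toy illustration under stated assumptions: the metric names, figures, and the relative-change scoring rule below are invented for the example, not taken from the AP’s or Automated Insights’ actual code.

```python
def pick_lede(metrics):
    """metrics: dict of name -> (current, prior). Returns a lede fragment for
    the metric whose relative change is largest in magnitude, i.e. the most
    volatile shift in the report."""
    def relative_change(item):
        _, (current, prior) = item
        return abs(current - prior) / abs(prior)

    name, (current, prior) = max(metrics.items(), key=relative_change)
    direction = "up" if current > prior else "down"
    pct = 100.0 * abs(current - prior) / abs(prior)
    return f"{name} {direction} {pct:.0f} percent"

report = {
    "revenue": (104.0, 100.0),        # +4%
    "net income": (7.5, 15.0),        # -50%  <- biggest move, becomes the lede
    "operating costs": (52.0, 50.0),  # +4%
}
print(pick_lede(report))  # net income down 50 percent
```

The “decide it’s better to talk about something else” step would then work the same way: drop already-covered items from the candidate dictionary and re-run the selection.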
The staffers who keep the copy fresh are scribes and coders in equal measure. (Allen [Automated Insights CEO Robbie Allen] says he looks for “stats majors who worked on the school paper.”) They’re not writers in the traditional sense — most of the language work is done beforehand, long before the data is available — but each job requires close attention. For sports articles, the Automated Insights team does all its work during the off-season and then watches the articles write themselves from the sidelines, as soon as each game’s results are available. “I’m often quite surprised by the result,” says Joe Procopio, the company’s head of product engineering. “There might be four or five variables that determine what that lead sentence looks like.” …
A July 11, 2014 article by Catherine Taibi for Huffington Post offers a summary of the current ‘robot/writer’ situation (Automated Insights is not the only company offering this service) along with many links including one to this July 11, 2014 article by Kevin Roose for New York Magazine where he shares what appears to be a widely held opinion and which echoes my interviewee of 10 years ago (Note: A link has been removed),
By this point, we’re no longer surprised when machines replace human workers in auto factories or electronics-manufacturing plants. That’s the norm. But we hoity-toity journalists had long assumed that our jobs were safe from automation. (We’re knowledge workers, after all.) So when the AP announced its new automated workforce, you could hear the panic spread to old-line news desks across the nation. Unplug the printers, Bob! The robots are coming!
I’m not an alarmist, though. In fact, I welcome our new robot colleagues. Not only am I not scared of losing my job to a piece of software, I think the introduction of automated reporting is the best thing to happen to journalists in a long time.
For one thing, humans still have the talent edge. At the moment, the software created by Automated Insights is only capable of generating certain types of news stories — namely, short stories that use structured data as an input, and whose output follows a regular pattern. …
Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. …
But these robots, as sophisticated as they are, can’t approach the full creativity of a human writer. They can’t contextualize Emmy snubs like Matt Zoller Seitz, assail opponents of Obamacare like Jonathan Chait, or collect summer-camp sex stories like Maureen O’Connor. My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence to handle; they require human skills like picking up the phone, piecing together data points from multiple sources, and drawing original, evidence-based conclusions. [emphasis mine]
The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway. … [emphasis mine]
Despite his blithe assurances, there is a little anxiety expressed in this piece “My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence … .”
I too am feeling a little uncertain. For example, there’s this April 29, 2014 posting by Adam Long on the Automated Insights blog and I can’t help wondering how much was actually written by Long and how much by the company’s robots. After all, the company proudly proclaims the blog is powered by Wordsmith Marketing. For that matter, I’m not that sure about the FoxNews.com piece, which has no byline.
For anyone interested in still more links and information, Automated Insights offers a listing of their press coverage here. Although it’s a bit dated now, there is an exhaustive May 22, 2013 posting by Tony Hirst on the OUseful.info blog which, despite the title: ‘Notes on Narrative Science and Automated Insights’, provides additional context for the work being done to automate the writing process since 2009.
For the record, this blog is not written by a robot. As for getting rid of the boring stuff, I can’t help but remember that part of how one learns any craft is by doing the boring, repetitive work needed to build skills.
One final and unrelated note, Automated Insights has done a nice piece of marketing with its name which abbreviates to Ai. One can’t help but be reminded of AI, a term connoting the field of artificial intelligence.