Category Archives: robots

Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about machine learning and curbing police violence has appeared in New Scientist magazine (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which officers who are stressed behave aggressively, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.

According to Hodson, there are already systems, both human and algorithmic, in place but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”
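For readers who like to see the mechanics, here is a minimal sketch (mine, not the Charlotte-Mecklenburg/University of Chicago system) of how an early-intervention classifier might be trained on historical officer records and then scored on the two numbers Hodson highlights: how many adverse incidents it catches and how often it raises false alarms. The features, model, and threshold are all hypothetical stand-ins.

```python
# Hedged sketch only: a toy risk classifier evaluated on the metrics the
# article emphasizes (adverse incidents caught, and false alarms among
# flagged officers). All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for per-officer features, e.g. recent suicide or domestic-violence
# calls, prior complaints, deployment history (purely synthetic here).
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag officers whose predicted risk exceeds a (hypothetical) review threshold.
risk = model.predict_proba(X_test)[:, 1]
flagged = risk > 0.5

tp = np.sum(flagged & (y_test == 1))       # adverse incidents caught
fp = np.sum(flagged & (y_test == 0))       # false alarms
recall = tp / max(np.sum(y_test == 1), 1)  # analogous to "48 out of 83"
fpr = fp / max(np.sum(y_test == 0), 1)     # the rate the new system cut by 32%
print(f"recall = {recall:.2f}, false positive rate = {fpr:.2f}")
```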

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

h/t Terkko Navigator

I have written about police and violence here before, in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper; see my July 25, 2016 posting titled: Robots, Dallas (US), ethics, and killing.

Robots judge a beauty contest

I have a lot of respect for good PR gimmicks and a beauty contest judged by robots (or more accurately, artificial intelligence) is a provocative idea wrapped up in a good public relations (PR) gimmick. A July 12, 2016 In Silico Medicine press release on EurekAlert reveals more,

Beauty.AI 2.0, a platform where human beauty is evaluated by a jury of robots and algorithm developers compete on novel applications of machine intelligence to perception, is supported by Ernst and Young.

“We were very impressed by E&Y’s recent advertising campaign with a robot hand holding a beautiful butterfly and a slogan “How human is your algorithm?” and immediately invited them to participate. This slogan captures the very essence of our contest, which is constantly exploring new ideas in machine perception of humans”, said Anastasia Georgievskaya, Managing Scientist at Youth Laboratories, the organizer of Beauty.AI.

The Beauty.AI contest is supported by many innovative companies from the US, Europe, and Asia, with some of the top cosmetics companies participating in collaborative research projects. Imagene Labs, a Singapore-based leader in linking facial and biological information that operates across Asia, is a gold sponsor and research partner of the contest.

There are many approaches to evaluating human beauty. Features like symmetry, pigmentation, pimples, and wrinkles may play a role, and similarity to actors, models and celebrities may be used in the calculation of the overall score. However, other innovative approaches have been proposed. A robot developed by Insilico Medicine compares the chronological age with the age predicted by a deep neural network. Another team is training an artificially-intelligent system to identify features that contribute to the popularity of people on dating sites.

“We look forward to collaborating with the Youth Laboratories team to create new AI algorithms. These will eventually allow consumers to objectively evaluate how well their wellness interventions – such as diet, exercise, skincare and supplements – are working. Based on the results they can then fine tune their approach to further improve their well-being and age better”, said Jia-Yi Har, Vice President of Imagene Labs.

The contest is open to anyone with a modern smartphone running either the Android or iOS operating system, and the Beauty.AI 2.0 app can be downloaded for free from either the Google or Apple app stores. Programmers and companies can participate by submitting their algorithm to the organizers through the Beauty.AI website.

“The beauty of Beauty.AI pageants is that algorithms are much more impartial than humans, and we are trying to prevent any racial bias and run the contest in multiple age categories. Most of the popular beauty contests discriminate by age, gender, marital status, body weight and race. Algorithms are much less partial”, said Alex Shevtsov, CEO of Youth Laboratories.

Very interesting take on beauty and bias. I wonder if they’re building change into their algorithms. After all, standards for beauty don’t remain static; they change over time.

Unfortunately, that question isn’t asked in Wency Leung’s July 4, 2016 article on the robot beauty contest for the Globe and Mail, but she does provide more details about the contest and insight into the world of international cosmetics companies and their use of technology,

Teaching computers about aesthetics involves designing sophisticated algorithms to recognize and measure features like wrinkles, face proportions, blemishes and skin colour. And the beauty industry is rapidly embracing these high-tech tools to respond to consumers’ demand for products that suit their individual tastes and attributes.

Companies like Sephora and Avon, for instance, are using face simulation technology to provide apps that allow customers to virtually try on and shop for lipsticks and eye shadows using their mobile devices. Skincare producers are using similar technologies to track and predict the effects of serums and creams on various skin types. And brands like L’Oréal’s Lancôme are using facial analysis to read consumers’ skin tones to create personalized foundations.

“The more we’re able to use these tools like augmented reality [and] artificial intelligence to provide new consumer experiences, the more we can move to customizing and personalizing products for every consumer around the world, no matter what their skin tone is, no matter where they live, no matter who they are,” says Guive Balooch, global vice-president of L’Oréal’s technology incubator.

Balooch was tasked with starting up the company’s tech research hub four years ago, with a mandate to predict and invent solutions to how consumers would choose and use products in the future. Among its innovations, his team has come up with the Makeup Genius app, a virtual mirror that allows customers to try on products on a mobile screen, and a device called My UV Patch, a sticker sensor that users wear on their skin, which informs them through an app how much UV exposure they get.

These tools may seem easy enough to use, but their simplicity belies the work that goes on behind the scenes. To create the Makeup Genius app, for example, Balooch says the developers sought expertise from the animation industry to enable users to see themselves move onscreen in real time. The developers also brought in hundreds of consumers with different skin tones to test real products in the lab, and they tested the app on some 100,000 images in more than 40 lighting conditions, to ensure the colours of makeup products appeared the same in real life as they did onscreen, Balooch says.

The article is well worth reading in its entirety.

For the seriously curious, you can find Beauty AI here, In Silico Medicine here, and Imagene Labs here. I cannot find a website for Youth Laboratories featuring Anastasia Georgievskaya.

I last wrote about In Silico Medicine in a May 31, 2016 post about deep learning, wrinkles, and aging.
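Since the most concrete technical detail in the press release is Insilico Medicine’s age-comparison approach, here is a hedged sketch of that idea: subtract the age a model predicts from a photo from the contestant’s chronological age. The function names and the stubbed predictor are placeholders of my own, not Insilico Medicine’s code.

```python
# Hedged sketch of the 'age gap' scoring idea: positive if the model thinks
# the person looks younger than their chronological age. `predict_age` is a
# hypothetical placeholder for a trained deep-learning age regressor.

def age_gap_score(photo_path: str, chronological_age: float, predict_age) -> float:
    """Chronological age minus model-predicted age from the photo."""
    predicted_age = predict_age(photo_path)  # e.g. output of a CNN trained on face images
    return chronological_age - predicted_age

if __name__ == "__main__":
    # Stubbed predictor standing in for a real neural network.
    fake_predictor = lambda path: 28.0
    print(age_gap_score("contestant.jpg", 35.0, fake_predictor))  # prints 7.0
```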

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped as there was an additional shooting of police officers in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally, to the topic at hand: in the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has been raised.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016 and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driven cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question about the ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading for the Dallas Police Department’s actions, as well as, asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles; if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

Korea Advanced Institute of Science and Technology (KAIST) at summer 2016 World Economic Forum in China

From the Ideas Lab at the 2016 World Economic Forum in Davos to offering expertise at the 2016 World Economic Forum in Tianjin, China, taking place from June 26 – 28, 2016.

Here’s more from a June 24, 2016 KAIST news release on EurekAlert,

Scientific and technological breakthroughs are more important than ever as a key agent to drive social, economic, and political changes and advancements in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms to address issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a., the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.

Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.

Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.

In the first session entitled “What If Drugs Are Printed from the Internet?” Professor Lee will discuss the future of medicine being impacted by advancements in biotechnology and 3D printing technology with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, the Director of Strategy at Wellcome Trust in the United Kingdom. The discussants will note recent developments made in the way patients receive their medicine, for example, downloading drugs directly from the internet and the production of yeast strains to make opioids for pain treatment through systems metabolic engineering, and predicting how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.

In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.

During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying their various cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing control sticks and guiding an airplane from takeoffs to landings.

In addition, the two professors will join Professor Lee, who is also a moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields from machine learning to autonomous robotics including unmanned vehicles and drones.

Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.

KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”

I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possibly other people’s) confusion as to what the Fourth Industrial Revolution might be in my Dec. 3, 2015 posting.

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals; and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

A Victoria & Albert Museum installation integrates biomimicry, robotic fabrication and new materials research in architecture

The Victoria & Albert Museum (V&A) in London, UK, opened its Engineering Season show on May 18, 2016 (it runs until Nov. 6, 2016) featuring a robot installation and an exhibition putting the spotlight on Ove Arup, “the most significant engineer of the 20th century” according to the V&A’s May ??, 2016 press release,

The first major retrospective of the most influential engineer of the 20th century and a site specific installation inspired by nature and fabricated by robots will be the highlights of the V&A’s first ever Engineering Season, complemented by displays, events and digital initiatives dedicated to global engineering design. The V&A Engineering Season will highlight the importance of engineering in our daily lives and consider engineers as the ‘unsung heroes’ of design, who play a vital and creative role in the creation of our built environment.

Before launching into the robot/biomimicry part of this story, here’s a very brief description of why Ove Arup is considered so significant and influential,

Engineering the World: Ove Arup and the Philosophy of Total Design will explore the work and legacy of Ove Arup (1895-1988), … . Ove pioneered a multidisciplinary approach to design that has defined the way engineering is understood and practiced today. Spanning 100 years of engineering and architectural design, the exhibition will be guided by Ove’s writings about design and include his early projects, such as the Penguin Pool at London Zoo, as well as renowned projects by the firm including Sydney Opera House [Australia] and the Centre Pompidou in Paris. Arup’s collaborations with major architects of the 20th century pioneered new approaches to design and construction that remain influential today, with the firm’s legacy visible in many buildings across London and around the world. It will also showcase recent work by Arup, from major infrastructure projects like Crossrail and novel technologies for acoustics and crowd flow analysis, to engineering solutions for open source housing design.

Robots, biomimicry and the Elytra Filament Pavilion

A May 18, 2016 article by Tim Master for BBC (British Broadcasting Corporation) news online describes the pavilion installation,

A robot has taken up residence at the Victoria & Albert Museum to construct a new installation at its London gardens.

The robot – which resembles something from a car assembly line – will build new sections of the Elytra Filament Pavilion over the coming months.

The futuristic structure will grow and change shape using data based on how visitors interact with it.

Elytra’s canopy is made up of 40 hexagonal cells – made from strips of carbon and glass fibre – which have been tightly wound into shape by the computer-controlled Kuka robot.

Each cell takes about three hours to build. On certain days, visitors to the V&A will be able to watch the robot create new cells that will be added to the canopy.

Here are some images made available by V&A,

Elytra Filament Pavilion arriving at the V&A, 2016. © Victoria and Albert Museum, London

Kuka robot weaving Elytra Filament Pavilion cell fibres, 2016. © Victoria and Albert Museum, London

[downloaded from http://www.bbc.com/news/entertainment-arts-36322731]

Elytra Filament Pavilion at the V&A, 2016. © Victoria and Albert Museum, London

Here’s more detail from the V&A’s Elytra Filament Pavilion installation description,

Elytra Filament Pavilion has been created by experimental German architect Achim Menges with Moritz Dörstelmann, structural engineer Jan Knippers and climate engineer Thomas Auer.

Menges and Knippers are leaders of research institutes at the University of Stuttgart that are pioneering the integration of biomimicry, robotic fabrication and new materials research in architecture. This installation emerges from their ongoing research projects and is their first-ever major commission in the UK.

The pavilion explores the impact of emerging robotic technologies on architectural design, engineering and making.

Its design is inspired by lightweight construction principles found in nature, the filament structures of the forewing shells of flying beetles known as elytra. Made of glass and carbon fibre, each component of the undulating canopy is produced using an innovative robotic winding technique developed by the designers. Like beetle elytra, the pavilion’s filament structure is both very strong and very light – spanning over 200 m², it weighs less than 2.5 tonnes.

Elytra is a responsive shelter that will grow over the course of the V&A Engineering Season. Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion and monitor the structure’s behaviour, ultimately informing how and where the canopy grows. During a series of special events as part of the Engineering Season, visitors will have the opportunity to witness the pavilion’s construction live, as new components are fabricated on-site by a Kuka robot.

Unfortunately, I haven’t been able to find more technical detail, particularly about the materials being used in the construction of the pavilion, on the V&A website.

One observation: I’m a little uncomfortable with how they’re gathering data, “Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion … .” It sounds like surveillance to me.

Nonetheless, the Engineering Season offers the promise of a very intriguing approach to fulfilling the V&A’s mandate as a museum dedicated to decorative arts and design.

Ingestible origami robot gets one step closer

For decades, fiction has been exploring, more or less seriously, the idea of tiny, ingestible robots that can enter the human body (Fantastic Voyage and Innerspace are two movie examples). The concept is coming closer to being realized as per a May 12, 2016 news item on phys.org,

In experiments involving a simulation of the human esophagus and stomach, researchers at MIT [Massachusetts Institute of Technology], the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.

A May 12, 2016 MIT news release (also on EurekAlert), which originated the news item, provides some fascinating depth to this story (Note: Links have been removed),

The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.

“It’s really exciting to see our small origami robots doing something with potential important applications to health care,” says Rus, who also directs MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “For applications inside the body, we need a small, controllable, untethered robot system. It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”

Although the new robot is a successor to one reported at the same conference last year, the design of its body is significantly different. Like its predecessor, it can propel itself using what’s called a “stick-slip” motion, in which its appendages stick to a surface through friction when it executes a move, but slip free again when its body flexes to change its weight distribution.

Also like its predecessor — and like several other origami robots from the Rus group — the new robot consists of two layers of structural material sandwiching a material that shrinks when heated. A pattern of slits in the outer layers determines how the robot will fold when the middle layer contracts.

Material difference

The robot’s envisioned use also dictated a host of structural modifications. “Stick-slip only works when, one, the robot is small enough and, two, the robot is stiff enough,” says Guitron [Steven Guitron, a graduate student in mechanical engineering]. “With the original Mylar design, it was much stiffer than the new design, which is based on a biocompatible material.”

To compensate for the biocompatible material’s relative malleability, the researchers had to come up with a design that required fewer slits. At the same time, the robot’s folds increase its stiffness along certain axes.

But because the stomach is filled with fluids, the robot doesn’t rely entirely on stick-slip motion. “In our calculation, 20 percent of forward motion is by propelling water — thrust — and 80 percent is by stick-slip motion,” says Miyashita [Shuhei Miyashita, who was a postdoc at CSAIL when the work was done and is now a lecturer in electronics at the University of York, England]. “In this regard, we actively introduced and applied the concept and characteristics of the fin to the body design, which you can see in the relatively flat design.”

It also had to be possible to compress the robot enough that it could fit inside a capsule for swallowing; similarly, when the capsule dissolved, the forces acting on the robot had to be strong enough to cause it to fully unfold. Through a design process that Guitron describes as “mostly trial and error,” the researchers arrived at a rectangular robot with accordion folds perpendicular to its long axis and pinched corners that act as points of traction.

In the center of one of the forward accordion folds is a permanent magnet that responds to changing magnetic fields outside the body, which control the robot’s motion. The forces applied to the robot are principally rotational. A quick rotation will make it spin in place, but a slower rotation will cause it to pivot around one of its fixed feet. In the researchers’ experiments, the robot uses the same magnet to pick up the button battery.

Porcine precedents

The researchers tested about a dozen different possibilities for the structural material before settling on the type of dried pig intestine used in sausage casings. “We spent a lot of time at Asian markets and the Chinatown market looking for materials,” Li [Shuguang Li, a CSAIL postdoc] says. The shrinking layer is a biodegradable shrink wrap called Biolefin.

To design their synthetic stomach, the researchers bought a pig stomach and tested its mechanical properties. Their model is an open cross-section of the stomach and esophagus, molded from a silicone rubber with the same mechanical profile. A mixture of water and lemon juice simulates the acidic fluids in the stomach.

Every year, 3,500 swallowed button batteries are reported in the U.S. alone. Frequently, the batteries are digested normally, but if they come into prolonged contact with the tissue of the esophagus or stomach, they can cause an electric current that produces hydroxide, which burns the tissue. Miyashita employed a clever strategy to convince Rus that the removal of swallowed button batteries and the treatment of consequent wounds was a compelling application of their origami robot.

“Shuhei bought a piece of ham, and he put the battery on the ham,” Rus says. [emphasis mine] “Within half an hour, the battery was fully submerged in the ham. So that made me realize that, yes, this is important. If you have a battery in your body, you really want it out as soon as possible.”

“This concept is both highly creative and highly practical, and it addresses a clinical need in an elegant way,” says Bradley Nelson, a professor of robotics at the Swiss Federal Institute of Technology Zurich. “It is one of the most convincing applications of origami robots that I have seen.”

I wonder if they ate the ham afterwards.

Happily, MIT has produced a video featuring this ingestible origami robot,

Finally, this team has a couple more members than the previously mentioned Rus, Miyashita, and Li,

…  Kazuhiro Yoshida of Tokyo Institute of Technology, who was visiting MIT on sabbatical when the work was done; and Dana Damian of the University of Sheffield, in England.

As Rus notes in the video, the next step will be in vivo (animal) studies.

Are they just computer games or are we in a race with technology?

This story poses some interesting questions that touch on the uneasiness being felt as computers get ‘smarter’. From an April 13, 2016 news item on ScienceDaily,

Philosopher René Descartes’ saying about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s algorithm AlphaGo beat a professional player in the game Go–an achievement demonstrating the explosive speed of development in machine capabilities.

An April 13, 2016 Aarhus University press release (also on EurekAlert) by Rasmus Rørbæk, which originated the news item, further develops the point,

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the journal Nature.

“It may sound dramatic, but we are currently in a race with technology — and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine,” explains Jacob Sherson.

At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer’s enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers work to transfer some human traits to the way computer algorithms work.

Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER–combining the processing power of computers with human ingenuity — becomes clear.

Our common intuition

Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game “Quantum Moves.” Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics.

“The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions,” explains Jacob Sherson.

The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers — the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm.

“The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players’ solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense we are downloading our common intuition to the computer” says Jacob Sherson.

And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.
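The press release does not spell out the mechanics, but the general recipe appears to be to use a player’s rough, intuitive solution as the starting point for a numerical optimizer rather than starting from a random guess. Here is a minimal sketch of that idea on a toy cost function; the real quantum-control problem is far more complex, and everything below is an illustrative assumption on my part.

```python
# Hedged sketch: seed a local optimizer with a human player's rough path
# versus a random starting point. The toy cost function below is a stand-in
# for the real quantum-control fidelity landscape, which is far harder.
import numpy as np
from scipy.optimize import minimize

def control_cost(path: np.ndarray) -> float:
    # Penalize distance from a smooth 'ideal' control curve plus roughness.
    ideal = np.sin(np.linspace(0.0, np.pi, path.size))
    roughness = np.sum(np.diff(path) ** 2)
    return float(np.sum((path - ideal) ** 2) + 0.1 * roughness)

n = 20
# A crude "lift it up in the middle" strategy, the kind a player might draw.
human_seed = 4.0 * np.linspace(0, 1, n) * np.linspace(1, 0, n)
random_seed = np.random.default_rng(0).normal(size=n)

for name, seed in [("human seed", human_seed), ("random seed", random_seed)]:
    result = minimize(control_cost, seed, method="L-BFGS-B")
    print(f"{name}: cost {result.fun:.4f} after {result.nfev} function evaluations")
```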

After the buildup, the press release focuses on citizen science and computer games,

Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materialising.

In recent years, a new phenomenon has appeared–citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science.

“Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research,” explains Jacob Sherson.

The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We ‘guess’ based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition.

“We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition,” says Jacob Sherson and continues:

“Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years.”

Here’s a link to and a citation for the paper,

Exploring the quantum speed limit with computer games by Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, & Jacob F. Sherson. Nature 532, 210–213  (14 April 2016) doi:10.1038/nature17620 Published online 13 April 2016

This paper is behind a paywall.

What robots and humans?

I have two robot news bits for this posting. The first probes the unease currently being expressed (pop culture movies, Stephen Hawking, the Cambridge Centre for Existential Risk, etc.) about robots and their increasing intelligence and increased use in all types of labour formerly and currently performed by humans. The second item is about a research project where ‘artificial agents’ (robots) are being taught human values with stories.

Human labour obsolete?

‘When machines can do any job, what will humans do?’ is the question being asked in a presentation by Rice University computer scientist Moshe Vardi at the American Association for the Advancement of Science (AAAS) annual meeting held in Washington, D.C. from Feb. 11 – 15, 2016.

Here’s more about Dr. Vardi’s provocative question from a Feb. 14, 2016 Rice University news release (also on EurekAlert),

Rice University computer scientist Moshe Vardi expects that within 30 years, machines will be capable of doing almost any job that a human can. In anticipation, he is asking his colleagues to consider the societal implications. Can the global economy adapt to greater than 50 percent unemployment? Will those out of work be content to live a life of leisure?

“We are approaching a time when machines will be able to outperform humans at almost any task,” Vardi said. “I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?”

Vardi addressed this issue Sunday [Feb. 14, 2016] in a presentation titled “Smart Robots and Their Impact on Society” at one of the world’s largest and most prestigious scientific meetings — the annual meeting of the American Association for the Advancement of Science in Washington, D.C.

“The question I want to put forward is, Does the technology we are developing ultimately benefit mankind?” Vardi said. He asked the question after presenting a body of evidence suggesting that the pace of advancement in the field of artificial intelligence (AI) is increasing, even as existing robotic and AI technologies are eliminating a growing number of middle-class jobs and thereby driving up income inequality.

Vardi, a member of both the National Academy of Engineering and the National Academy of Science, is a Distinguished Service Professor and the Karen Ostrum George Professor of Computational Engineering at Rice, where he also directs Rice’s Ken Kennedy Institute for Information Technology. Since 2008 he has served as the editor-in-chief of Communications of the ACM, the flagship publication of the Association for Computing Machinery (ACM), one of the world’s largest computational professional societies.

Vardi said some people believe that future advances in automation will ultimately benefit humans, just as automation has benefited society since the dawn of the industrial age.

“A typical answer is that if machines will do all our work, we will be free to pursue leisure activities,” Vardi said. But even if the world economic system could be restructured to enable billions of people to live lives of leisure, Vardi questioned whether it would benefit humanity.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being,” he said.

“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘In the sweat of thy face shalt thou eat bread,’” Vardi said. “We need to rise to the occasion and meet this challenge” before human labor becomes obsolete, he said.

In addition to dual membership in the National Academies, Vardi is a Guggenheim fellow and a member of the American Academy of Arts and Sciences, the European Academy of Sciences and the Academia Europa. He is a fellow of the ACM, the American Association for Artificial Intelligence and the Institute for Electrical and Electronics Engineers (IEEE). His numerous honors include the Southeastern Universities Research Association’s 2013 Distinguished Scientist Award, the 2011 IEEE Computer Society Harry H. Goode Award, the 2008 ACM Presidential Award, the 2008 Blaise Pascal Medal for Computer Science by the European Academy of Sciences and the 2000 Goedel Prize for outstanding papers in the area of theoretical computer science.

Vardi joined Rice’s faculty in 1993. His research centers upon the application of logic to computer science, database systems, complexity theory, multi-agent systems and specification and verification of hardware and software. He is the author or co-author of more than 500 technical articles and of two books, “Reasoning About Knowledge” and “Finite Model Theory and Its Applications.”

In a Feb. 5, 2015 post, I rounded up a number of articles about our robot future. It provides a still useful overview of the thinking on the topic.

Teaching human values with stories

A Feb. 12, 2016 Georgia (US) Institute of Technology (Georgia Tech) news release (also on EurekAlert) describes the research,

The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI [Association for the Advancement of Artificial Intelligence]-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research — the Scheherazade system — which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.

For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists; or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.

Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced to uncover all possible steps in a given scenario, map them into a plot trajectory tree, which is then used by the robotic agent to make “plot choices” (akin to what humans might remember as a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on its choice.

The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”
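To make the mechanism described above a little more concrete (a ‘correct’ plot sequence converted into a reward signal that shapes trial-and-error learning), here is a minimal sketch. It is not the actual Quixote or Scheherazade code; the pharmacy scenario, the action names, and the simple one-step learner are illustrative assumptions.

```python
# Hedged sketch of the idea: convert a protagonist's plot sequence into a
# reward signal, then let a simple trial-and-error learner discover that
# following the plot pays off. Not the actual Quixote/Scheherazade system.
import random
from collections import defaultdict

actions = ["rob_pharmacy", "wait_in_line", "pay", "leave_with_medicine"]
# A "correct" plot sequence distilled from crowdsourced stories (illustrative).
protagonist_plot = ["wait_in_line", "pay", "leave_with_medicine"]

def reward(step: int, action: str) -> float:
    """Reward protagonist-like choices; punish antagonist-like shortcuts."""
    if action == protagonist_plot[step]:
        return 1.0     # socially appropriate behaviour at this point in the plot
    if action == "rob_pharmacy":
        return -10.0   # fast and cheap, but heavily punished
    return -1.0

Q = defaultdict(float)   # value estimates for (plot step, action) pairs
alpha, epsilon = 0.5, 0.2

for _ in range(2000):    # trial-and-error episodes
    for step in range(len(protagonist_plot)):
        if random.random() < epsilon:
            a = random.choice(actions)                        # explore
        else:
            a = max(actions, key=lambda act: Q[(step, act)])  # exploit
        Q[(step, a)] += alpha * (reward(step, a) - Q[(step, a)])

learned = [max(actions, key=lambda act: Q[(s, act)]) for s in range(len(protagonist_plot))]
print(learned)  # expected: ['wait_in_line', 'pay', 'leave_with_medicine']
```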

So there you have it, some food for thought.

Science events (Einstein, getting research to patients, sleep, and art/science) in Vancouver (Canada), Jan. 23 – 28, 2016

There are five upcoming science events in seven days (Jan. 23 – 28, 2016) in the Vancouver area.

Einstein Centenary Series

The first is a Saturday morning, Jan. 23, 2016 lecture, the first for 2016 in a joint TRIUMF (Canada’s national laboratory for particle and nuclear physics), UBC (University of British Columbia), and SFU (Simon Fraser University) series featuring Einstein’s work and its implications. From the event brochure (pdf), which lists the entire series,

TRIUMF, UBC and SFU are proud to present the 2015-2016 Saturday morning lecture series on the frontiers of modern physics. These free lectures are a level appropriate for high school students and members of the general public.

Parallel lecture series will be held at TRIUMF on the UBC South Campus, and at SFU Surrey Campus.

Lectures start at 10:00 am and 11:10 am. Parking is available.

For information, registration and directions, see :
http://www.triumf.ca/saturday-lectures

January 23, 2016 TRIUMF Auditorium (UBC, Vancouver)
1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)

January 30, 2016 SFU Surrey Room 2740 (SFU, Surrey Campus)

1. General Relativity – the theory (Jonathan Kozaczuk, TRIUMF)
2. Einstein and Light: stimulated emission, photoelectric effect and quantum theory (Mark Van Raamsdonk, UBC)

I believe these lectures are free. One more note, they will be capping off this series with a special lecture by Kip Thorne (astrophysicist and consultant for the movie Interstellar) at Science World, on Thursday, April 14, 2016. More about that * at a closer date.

Café Scientifique

On Tuesday, January 26, 2016 at 7:30 pm in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), Café Scientifique will be hosting a talk about science and serving patients (from the Jan. 5, 2016 announcement),

Our speakers for the evening will be Dr. Millan Patel and Dr. Shirin Kalyan.  The title of their talk is:

Helping Science to Serve Patients

Science in general and biotechnology in particular are auto-catalytic. That is, they catalyze their own evolution and so generate breakthroughs at an exponentially increasing rate.  The experience of patients is not exponentially getting better, however.  This talk, with a medical geneticist and an immunologist who believe science can deliver far more for patients, will focus on structural and cultural impediments in our system and ways they and others have developed to either lower or leapfrog the barriers. We hope to engage the audience in a highly interactive discussion to share thoughts and perspectives on this important issue.

There is additional information about Dr. Millan Patel here and Dr. Shirin Kalyan here. It would appear both speakers are researchers and academics, and while I appreciate the emphasis on the patient and the acknowledgement that medical research benefits are not being delivered to patients in quantity or quality, it seems odd that they don’t have a clinician (a doctor who deals almost exclusively with patients, as opposed to two researchers) to add to their perspective.

You may want to take a look at my Jan. 22, 2016 ‘open science’ and Montreal Neurological Institute posting for a look at how researchers there are responding to the issue.

Curiosity Collider

This is an art/science event from an organization that sprang into existence sometime during summer 2015 (my July 7, 2015 posting featuring Curiosity Collider).

When: 8:00pm on Wednesday, January 27, 2016. Door opens at 7:30pm.
Where: Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).
Cost: $5.00 cover (sliding scale) at the door. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events.

Part I. Speakers

Part II. Open Mic

  • 90 seconds to share your art-science ideas. Think they are “ridiculous”? Well, we think it could be ridiculously awesome – we are looking for creative ideas!
  • Don’t have an idea (yet)? Contribute by sharing your expertise.
  • Chat with other art-science enthusiasts, strike up a conversation to collaborate, all disciplines/backgrounds welcome.
  • Want to showcase your project in the future? Participate in our fall art-science competition (more to come)!

Follow updates on twitter via @ccollider or #CollideConquer

Good luck on the open mic (should you have a project)!

Brain Talks

This particular Brain Talk event is taking place at Vancouver General Hospital (VGH; there is also another Brain Talks series which takes place at the University of British Columbia). Yes, members of the public can attend the VGH version; they didn’t throw me out the last time I was there. Here’s more about the next VGH Brain Talks,

Sleep: biological & pathological perspectives

Thursday, Jan 28, 6:00pm @ Paetzold Auditorium, Vancouver General Hospital

Speakers:

Peter Hamilton, Sleep technician ~ Sleep Architecture

Dr. Robert Comey, MD ~ Sleep Disorders

Dr. Maia Love, MD ~ Circadian Rhythms

Panel discussion and wine and cheese reception to follow!

Please RSVP here

You may want to keep in mind that the event is organized by people who don’t organize events often. Nice people but you may need to search for crackers for your cheese and your wine comes out of a box (and I think it might have been self-serve the time I attended).

What a fabulous week we have ahead of us—Happy Weekend!

*’when’ removed from the sentence on March 28, 2016.