
Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about using machine learning to curb police violence has appeared in New Scientist magazine (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which officers who are stressed behave aggressively, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.
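That diversion idea is straightforward to express in code. Here is a minimal sketch in Python; the data model, call types, time window, and threshold are all my assumptions for illustration, not anything from the Charlotte-Mecklenburg pilot.

```python
# A minimal sketch of the diversion rule (hypothetical data model and
# thresholds, not the Charlotte-Mecklenburg department's software).
from dataclasses import dataclass, field
from datetime import datetime, timedelta

HIGH_STRESS_CALLS = {"suicide", "domestic violence"}

@dataclass
class Officer:
    name: str
    # (timestamp, call_type) pairs from the current deployment history
    recent_calls: list = field(default_factory=list)

def should_divert(officer: Officer, now: datetime,
                  window: timedelta = timedelta(hours=8),
                  threshold: int = 2) -> bool:
    """True if the officer handled at least `threshold` high-stress calls
    inside `window`, suggesting they be diverted from sensitive calls."""
    hits = [t for t, call in officer.recent_calls
            if call in HIGH_STRESS_CALLS and now - t <= window]
    return len(hits) >= threshold

# Example: an officer who answered two suicide calls earlier this shift
now = datetime(2016, 8, 1, 15, 0)
officer = Officer("Unit 12", [(now - timedelta(hours=3), "suicide"),
                              (now - timedelta(hours=1), "suicide")])
print(should_divert(officer, now))  # True -> divert the next sensitive call
```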

According to Hodson, there are already systems, both human and algorithmic, in place, but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”
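Hodson’s article doesn’t describe the Chicago team’s model, but the two numbers it reports (adverse incidents caught retrospectively, and the fraction of flagged officers who never go on to act aggressively) map onto a standard supervised-learning evaluation. Here is a minimal sketch in Python with scikit-learn on entirely synthetic records; the features, the model choice, and any numbers it produces are illustrative assumptions, not the team’s actual pipeline.

```python
# Minimal sketch of an early-intervention classifier on synthetic data.
# Features, model choice, and labels are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # synthetic officer records

# Hypothetical features echoing the kinds of records the article lists:
# prior misconduct, gun-use reports, suicide and domestic violence calls.
X = np.column_stack([
    rng.poisson(1.0, n),  # prior misconduct complaints
    rng.poisson(0.2, n),  # gun-use reports
    rng.poisson(3.0, n),  # suicide calls responded to
    rng.poisson(5.0, n),  # domestic violence calls responded to
])
# Synthetic label: risk of an "adverse event" rises with prior
# complaints and workload (a made-up generative story, for demo only).
logits = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3] - 4.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
flagged = GradientBoostingClassifier().fit(X_tr, y_tr).predict(X_te)

# The two quantities the article reports: the share of adverse events
# the system catches, and the fraction of flagged officers who do not
# go on to have an adverse event (the "false positive rate" quoted).
caught = (flagged & y_te).sum() / max(y_te.sum(), 1)
false_positives = (flagged & ~y_te).sum() / max(flagged.sum(), 1)
print(f"adverse events caught: {caught:.0%}")
print(f"flagged officers with no adverse event: {false_positives:.0%}")
```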

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

h/t Terkko Navigator

I have written about police and violence here before, in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper, in a July 25, 2016 posting titled: Robots, Dallas (US), ethics, and killing.

SpiderSense and wearable computers

Nancy Owano, in her Feb. 23, 2013 article for phys.org, Wearable display meets blindfold test for sensing danger, features a project (SpiderSense) from the University of Illinois at Chicago that will be presented at the Augmented Human ’13 conference to be held March 7 – 8, 2013 in Stuttgart, Germany,

The researchers behind SpiderSense define it as a wearable device that projects the wearer’s near environment on the skin. The suit gives the user directional awareness of surrounding objects. They have explored a scenario where multiple sites over the body, rather than just the hands, are fitted with transducers. These transducers translate information about the wearer’s environment into tactile sensations.

Modules are distributed across the suit to give the wearer as near to 360-degree ultrasound coverage as possible. The modules scan the environment and are controlled through a Controller Box, which carries the power source, the electronics and the system logic. The modules and the Controller Box are connected by ten-pin ribbon cables; the researchers said that, in the future, these could be replaced by a wireless Bluetooth connection.
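Neither article publishes the system’s code, but the loop it describes (ultrasound modules scanning for nearby objects and driving skin-contact transducers harder as things get closer) is easy to sketch. Below is a rough Python approximation; the ranges, module positions, and the linear distance-to-vibration mapping are my assumptions, not details from the EVL team.

```python
# Sketch of the SpiderSense sensing loop: each body-mounted module reads
# an ultrasound distance and drives its transducer harder as objects get
# closer. Module names, ranges, and the linear mapping are assumptions.
MAX_RANGE_CM = 200.0   # beyond this, no tactile feedback
MIN_RANGE_CM = 20.0    # at or below this, full-strength feedback

def tactile_intensity(distance_cm: float) -> float:
    """Map an ultrasound distance reading to a 0.0-1.0 vibration level."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    if distance_cm <= MIN_RANGE_CM:
        return 1.0
    # Linear ramp: the closer the object, the stronger the sensation.
    return (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)

# One pass of the scan loop over hypothetical modules placed for
# near-360-degree coverage (front, back, left, right).
readings = {"front": 150.0, "back": 45.0, "left": 210.0, "right": 90.0}
for module, distance in readings.items():
    level = tactile_intensity(distance)
    print(f"{module}: distance {distance:.0f} cm -> vibration {level:.2f}")
```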

You can find out more about SpiderSense from its presentation webpage on the University of Illinois at Chicago Electronic Visualization Laboratory (EVL) website,

Sensing the environment through SpiderSense     

authors: Mateevitsi, V., Haggadone, B., Leigh, J., Kunzer, B., Kenyon, R.V.

Augmented Human ’13, 4th International Conference in Cooperation with ACM SIGCHI, Stuttgart, Germany

Recent scientific advances allow the use of technology to expand the number of forms of energy that can be perceived by humans. Smart sensors can detect hazards that human senses are unable to perceive, for example radiation. This fusing of technology to humans’ forms of perception enables exciting new ways of perceiving the world around us. In this paper we describe the design of SpiderSense, a wearable device that projects the wearer’s near environment on the skin and allows for directional awareness of objects around him. The millions of sensory receptors that cover the skin present opportunities for conveying alerts and messages. We discuss the challenges and considerations of designing similar wearable devices.

Victor Mateevitsi wearing SpiderSense. Image provided by L. Long, EVL

Owano’s article was inspired by a Feb. 22, 2013 piece Hal Hodson wrote for New Scientist, as she acknowledges in her end notes,

Mateevitsi [Victor Mateevitsi] tested the suit out on students, getting them to stand outside on campus, blindfolded, and “feel” for approaching attackers. Each wearer had ninja cardboard throwing stars to use whenever they sensed someone approaching them. “Ninety five per cent of the time they were able to sense someone approaching and throw the star at them,” says Mateevitsi.

The SpiderSense presentation is scheduled for March 7, 2013 at the Augmented Human ’13 conference or, as it’s also known, the 4th International Conference in Cooperation with ACM SIGCHI (Association for Computing Machinery, Special Interest Group on Computer-Human Interaction). The team, as per Hal Hodson’s article, hopes to start human trials of SpiderSense with visually impaired individuals.