Tag Archives: Vancouver Police Department

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Here’s more from Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making it the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction

Predictive policing has come to Canada, specifically to Vancouver. A July 22, 2017 article by Matt Meuse for the Canadian Broadcasting Corporation (CBC) news online describes the new policing tool,

The Vancouver Police Department is implementing a city-wide “predictive policing” system that uses machine learning to prevent break-ins by predicting where they will occur before they happen — the first of its kind in Canada.

Police chief Adam Palmer said that, after a six-month pilot project in 2016, the system is now accessible to all officers via their cruisers’ onboard computers, covering the entire city.

“Instead of officers just patrolling randomly throughout the neighbourhood, this will give them targeted areas it makes more sense to patrol in because there’s a higher likelihood of crime to occur,” Palmer said.


Things got off to a slow start as the system familiarized itself [during a 2016 pilot project] with the data, and floundered in the fall due to unexpected data corruption.

But Special Const. Ryan Prox said the system reduced property crime by as much as 27 per cent in areas where it was tested, compared to the previous four years.

The accuracy of the system was also tested by having it generate predictions for a given day, and then watching to see what happened that day without acting on the predictions.

Palmer said the system was getting accuracy rates between 70 and 80 per cent.

When a location is identified by the system, Palmer said officers can be deployed to patrol that location. …

“Quite often … that visible presence will deter people from committing crimes [altogether],” Palmer said.

Though similar systems are used in the United States, Palmer said the system is the first of its kind in Canada, and was developed specifically for the VPD.

While the current focus is on residential break-ins, Palmer said the system could also be tweaked for use with car theft — though likely not with violent crime, which is far less predictable.

Palmer dismissed the inevitable comparison to the 2002 Tom Cruise film Minority Report, in which people are arrested to prevent them from committing crimes in the future.

“We’re not targeting people, we’re targeting locations,” Palmer said. “There’s nothing dark here.”

If you want to get a sense of just how dismissive Chief Palmer was, there’s a July 21, 2017 press conference (run time: approx. 21 mins.) embedded with a media release of the same date. The media release offered these details,

The new model is being implemented after the VPD ran a six-month pilot study in 2016 that contributed to a substantial decrease in residential break-and-enters.

The pilot ran from April 1 to September 30, 2016. The number of residential break-and-enters during the test period was compared to the monthly average over the same period for the previous four years (2012 to 2015). The highest drop in property crime – 27 per cent – was measured in June.

The new model provides data in two-hour intervals for locations where residential and commercial break-and-enters are anticipated. The information is for 100-metre and 500-metre zones. Police resources can be dispatched to that area on foot or in patrol cars, to provide a visible presence to deter thieves.

The VPD’s new predictive policing model is built on GEODASH – an advanced machine-learning technology that was implemented by the VPD in 2015. A public version of GEODASH was introduced in December 2015 and is publicly available on vpd.ca. It retroactively plots the location of crimes on a map to provide a general idea of crime trends to the public.
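Out of curiosity, here’s roughly what consuming that kind of output might look like in code. This is purely my own sketch: the record layout, names, and risk threshold are assumptions for illustration, not details of the VPD’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    """One hypothetical prediction row: a zone, a two-hour window, a risk score."""
    zone_id: str            # a 100 m or 500 m zone, per the media release
    zone_size_m: int        # 100 or 500
    window_start_hour: int  # two-hour windows: 0, 2, 4, ... 22
    risk_score: float       # assumed break-in likelihood estimate, 0..1

def patrol_targets(forecasts, threshold=0.7):
    """Return the zones/windows worth a visible patrol presence, highest risk first."""
    hot = [f for f in forecasts if f.risk_score >= threshold]
    return sorted(hot, key=lambda f: f.risk_score, reverse=True)

# Invented example data
forecasts = [
    Forecast("cell-014", 100, 18, 0.82),
    Forecast("cell-207", 500, 20, 0.41),
    Forecast("cell-033", 100, 22, 0.75),
]
for f in patrol_targets(forecasts):
    print(f.zone_id, f.window_start_hour, f.risk_score)
```

The point of the sketch is simply that the system’s product is a ranked list of places and times, which officers then act on by being visibly present.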

I wish Chief Palmer had been a bit more open to discussion about the implications of ‘predictive policing’. In the US, where these systems have been employed in various jurisdictions, concern is arising after an almost euphoric initial response, as a Nov. 21, 2016 article by Logan Koepke for Slate notes (Note: Links have been removed),

When predictive policing systems began rolling out nationwide about five years ago, coverage was often uncritical and overly reliant on references to Minority Report’s precog system. The coverage made predictive policing—the computer systems that attempt to use data to forecast where crime will happen or who will be involved—seem almost magical.

Typically, though, articles glossed over Minority Report’s moral about how such systems can go awry. Even Slate wasn’t immune, running a piece in 2011 called “Time Cops” that said, when it came to these systems, “Civil libertarians can rest easy.”

This soothsaying language extended beyond just media outlets. According to former New York City Police Commissioner William Bratton, predictive policing is the “wave of the future.” Microsoft agrees. One vendor even markets its system as “better than a crystal ball.” More recent coverage has rightfully been more balanced, skeptical, and critical. But many still seem to miss an important point: When it comes to predictive policing, what matters most isn’t the future—it’s the past.

Some predictive policing systems incorporate information like the weather, a location’s proximity to a liquor store, or even commercial data brokerage information. But at their core, they rely either mostly or entirely on historical crime data held by the police. Typically, these are records of reported crimes—911 calls or “calls for service”—and other crimes the police detect. Software automatically looks for historical patterns in the data, and uses those patterns to make its forecasts—a process known as machine learning.

Intuitively, it makes sense that predictive policing systems would base their forecasts on historical crime data. But historical crime data has limits. Criminologists have long emphasized that crime reports—and other statistics gathered by the police—do not necessarily offer an accurate picture of crime in a community. The Department of Justice’s National Crime Victimization Survey estimates that from 2006 to 2010, 52 percent of violent crime went unreported to police, as did 60 percent of household property crime. Essentially: Historical crime data is a direct record of how law enforcement responds to particular crimes, rather than the true rate of crime. Rather than predicting actual criminal activity, then, the current systems are probably better at predicting future police enforcement.
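Koepke’s point, that these systems learn only from reported incidents, can be illustrated with a toy sketch. The data and names below are invented for illustration, not any vendor’s actual system:

```python
from collections import Counter

# Hypothetical history: (grid_cell, crime_was_reported_to_police).
# Only reported incidents ever reach the training data -- the core limitation
# Koepke describes: the model sees police records, not true crime rates.
history = [
    ("cell-A", True), ("cell-A", True), ("cell-A", False),
    ("cell-B", True), ("cell-C", False), ("cell-C", False),
]

def forecast_hotspots(records, top_n=2):
    """Rank cells by *reported* incident counts; unreported crime is invisible."""
    counts = Counter(cell for cell, reported in records if reported)
    return [cell for cell, _ in counts.most_common(top_n)]

print(forecast_hotspots(history))  # → ['cell-A', 'cell-B']
```

Note that cell-C, where both incidents went unreported, never appears in the forecast at all, which is the sense in which such systems are “probably better at predicting future police enforcement” than actual crime.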

Koepke goes on to cover other potential issues with ‘predictive policing’ in this thoughtful piece. He also co-authored an August 2016 report, Stuck in a Pattern: Early evidence on “predictive” policing and civil rights.

There seems to be increasing attention on machine learning and bias. My May 24, 2017 posting provides links to other FrogHeart postings on the topic, and my Feb. 28, 2017 posting about a new regional big data sharing project, the Cascadia Urban Analytics Cooperative, mentions Cathy O’Neil (author of the book, Weapons of Math Destruction) and her critique in a subsection titled: Algorithms and big data.

I would like to see some oversight and some discussion in Canada about this brave new world of big data.

One final comment: it is possible to get access to the Vancouver Police Department’s data through the City of Vancouver’s Open Data Catalogue (home page).

The Code; a preview of the BBC documentary being released in Canada and the US

The three episodes (Numbers, Shapes, and Prediction) of The Code, a BBC (British Broadcasting Corporation) documentary featuring Professor Marcus du Sautoy, focus on a ‘code’ that, according to du Sautoy, unlocks the secrets of the laws governing the universe.

During the weekend (June 16 & 17, 2012) I had the pleasure of viewing the two-disc DVD set, which is to be released tomorrow, June 19, 2012, in the US and Canada. It’s a beautiful and, in its way, exuberant exploration of patterns that recur throughout nature and throughout human endeavours. In the first episode, Numbers, du Sautoy relates the architecture of the Chartres Cathedral (France), St. Augustine’s (a Roman Catholic theologian born in an area we now call Algeria) sacred numbers, the life cycle of the periodical cicada in Alabama, US, and more to number patterns. Here’s an excerpt of du Sautoy in Alabama with Dr. John Cooley discussing the cicadas’ qualities as pets and their remarkable 13-year life cycle,

In the second episode, Shapes, du Sautoy covers beehive construction (engineering marvels), bird migrations and their distinct shapes (anyone who’s ever seen a big flock of birds move as one has likely marveled at the shapes the flock takes as it moves from one area to another), computer animation, soap bubbles, and more, explaining how these shapes can be derived from the principle of simplicity or, as du Sautoy notes, ‘nature is lazy’. The question is: how do you make the most efficient structure to achieve your ends, i.e., structure a bird flock so it moves efficiently when thousands and thousands are migrating huge distances, build the best beehive while conserving your worker bees’ energies and extracting the most honey possible, create stunning animated movies with tiny algorithms, etc.?

Here’s du Sautoy with ‘soap bubbleologist’ Tom Noddy who’s demonstrating geometry in action,

For the final episode, Prediction, du Sautoy brings the numbers and geometry together by demonstrating repeating patterns, such as fractals, that dominate our landscape, our biology, and our universe. du Sautoy visits a Rock Paper Scissors tournament in New York City, trying to discern why some folks can ‘win’ while others cannot (individuals who can read other people’s patterns while breaking their own are more successful), discusses geographic profiling with criminal geographic profiler Prof. Kim Rossmo, and examines Jackson Pollock’s paintings and their fractals, amongst other intriguing patterns.

I paid special attention to the Rossmo segment as he created and developed his geographic profiling techniques when he worked for the Vancouver (Canada) Police Department (VPD) and studied at a nearby university. As this groundbreaking work was done in my neck of the woods and Rossmo was treated badly by the VPD, I felt a special interest. There’s more about Rossmo’s work and the VPD issues in the Wikipedia entry (Note: I have removed links from the excerpt.),

D. Kim Rossmo is a Canadian criminologist specializing in geographic profiling. He joined the Vancouver Police Department as a civilian employee in 1978 and became a sworn officer in 1980. In 1987 he received a Master’s degree in criminology from Simon Fraser University and in 1995 became the first police officer in Canada to obtain a doctorate in criminology. His dissertation research resulted in a new criminal investigative methodology called geographic profiling, based on Rossmo’s formula.

In 1995, he was promoted to detective inspector and founded a geographic profiling section within the Vancouver Police Department. In 1998, his analysis of cases of missing sex trade workers determined that a serial killer was at work, a conclusion ultimately vindicated by the arrest and conviction of Robert Pickton in 2002. A retired Vancouver police staff sergeant has claimed that animosity toward Rossmo delayed the arrest of Pickton, leaving him free to carry out additional murders. His analytic results were not accepted at the time and after a falling out with senior members of the department he left in 2001. His unsuccessful lawsuit against the Vancouver Police Board for wrongful dismissal exposed considerable apparent dysfunction within that department.

… he moved to Texas State University where he currently holds the Endowed Chair in Criminology and is director of the Center for Geospatial Intelligence and Investigation. …

Within what appeared to be chaos, Rossmo found order. Somehow Jackson Pollock did the same thing to achieve entirely different ends, a new form of art. Here’s a video clip of du Sautoy with artist and physicist Richard Taylor,

Intuitively, Pollock dripped paint onto his canvases, creating fractals decades before mathematician Benoit Mandelbrot coined the term and established the theory. (I wrote previously about Jackson Pollock [and fluid dynamics] in my June 30, 2011 posting.)
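Taylor’s published analyses of Pollock’s drip paintings estimated a fractal (box-counting) dimension. Here’s a minimal, purely illustrative sketch of the box-counting idea, not Taylor’s actual method or code, tested on a plain line segment, which should come out near dimension 1:

```python
import math

def box_count_dimension(points, sizes):
    """Estimate fractal dimension by counting occupied boxes at several scales,
    then fitting log(count) against log(1/size) with a least-squares slope."""
    xs, ys = [], []
    for s in sizes:
        # Which grid boxes of side s contain at least one point?
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# A densely sampled straight line has dimension 1; a true fractal would
# land somewhere between 1 and 2.
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
print(round(box_count_dimension(line, [0.1, 0.05, 0.025, 0.0125]), 2))
```

For a painting, the points would come from thresholding a digitized image into paint/no-paint pixels; the slope then says how the pattern fills space across scales.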

I gather that du Sautoy’s ‘code’ will offer a unified theory drawing together numbers, patterns, and shapes as they are found throughout the universe, in nature and in our technologies and sciences.

The DVDs offer three extras (4 mins. each): Phi’s the Limit (beauty and the golden ratio or Phi), Go Forth and Multiply (a base 2 system developed by Ethiopian traders predating binary computer codes by millennia), and Imagining the Impossible: The Mathematical Art of M. C. Escher (the Dutch artist’s experiments with tessellation/tiling).
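That doubling-and-halving method, often called Ethiopian or Russian peasant multiplication, is easy to sketch in code. Assuming the extra describes the standard version of the technique, it looks something like this:

```python
def peasant_multiply(a, b):
    """Multiply by repeatedly halving one operand and doubling the other,
    summing the doubled values wherever the halved column is odd --
    effectively binary multiplication, long before binary computers."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # odd rows are kept
            total += b
        a //= 2          # halve, discarding remainders
        b *= 2           # double
    return total

print(peasant_multiply(13, 24))  # → 312
```

The odd/even test is exactly reading off the bits of the first operand, which is why the method anticipates binary arithmetic.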

I quite enjoyed the episodes, although I was glad to have read James Gleick‘s book, Chaos (years ago), before viewing the third episode, Prediction. I was a little puzzled by du Sautoy’s comment in the first episode, Numbers, that atoms are not divisible. As I recall, you create an atomic bomb when you split an atom, but it may have been one of those comments that didn’t come out as intended, or I misunderstood.

You can find out more about The Code DVDs at Athena Learning. The suggested retail cost is $39.99 US or $52.99 CAD (which seems a little steep for Canadian purchasers since the Canadian dollar is close to par these days and, I believe, has been for some time).

In sum, this is a very engaging look at numbers and mathematics.