UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead) for the first half.

Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chang and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. [emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to its opening (Note: Links have been removed),

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others moving ahead of it, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit: a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity,” and another November 1, 2023 article, this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework’.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.

Banksy and the mathematicians

Assuming you’ve heard of Banksy (if not, he’s an internationally known graffiti artist), you understand that no one knows his real name for certain, although there are strong suspicions, as of 2008, that he is Robin Gunningham. It seems the puzzle has aroused scientific curiosity, according to a March 4, 2016 article by Jill Lawless on CBC (Canadian Broadcasting Corporation) News online,

Elusive street artist Banksy may have been unmasked — by mathematics.

Scientists have applied a type of modelling used to track down criminals and map disease outbreaks to identify the graffiti artist, whose real name has never been confirmed.

The technique, known as geographic profiling, is used by police forces to narrow down lists of suspects by calculating from multiple crime sites where the offender most likely lives.

The March 3, 2016 article in The Economist about the Banksy project describes the model used to derive his identity in more detail,

Their system, Dirichlet process mixture modelling, is more sophisticated than the criminal geographic targeting (CGT) currently favoured by crime-fighters. CGT is based on a simple assumption: that crimes happen near to where those responsible reside. Plot out an incident map and the points should surround the criminal like a doughnut (malefactors tend not to offend on their own doorsteps, but nor do they stray too far). The Dirichlet model allows for more than one “source”—a place relevant to a suspect such as home, work or a frequent pit stop on a commute—but makes no assumptions about their number; it automatically parses the mess of crime sites into clusters of activity.

Then, for each site, it calculates the probability that the given array of activity, and the way it is clustered, would result from any given source. Through a monumental summing of probabilities across each and every possible combination of sources, the model spits out the most likely ones, with considerable precision—down to 50 metres or so in some cases.
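For the technically inclined, here’s a rough sense of how that kind of model can be set up. The Python sketch below is not the researchers’ code; it leans on scikit-learn’s BayesianGaussianMixture with a Dirichlet process prior to cluster some made-up site coordinates without fixing the number of clusters in advance, and then scores a grid of candidate “source” locations against the fitted mixture. The published model’s probability calculation is considerably more sophisticated, but the “multiple sources, unknown number of clusters” idea is the same.

```python
# Illustrative only: this is not the code from the Journal of Spatial Science
# paper. It uses scikit-learn's variational Dirichlet process Gaussian mixture
# to cluster made-up site coordinates without fixing the cluster count, then
# scores a grid of candidate "source" locations by the fitted model's density.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical (x, y) coordinates of tagged sites, in kilometres.
sites = np.array([
    [0.2, 0.1], [0.5, 0.4], [0.3, 0.7], [0.6, 0.2], [0.4, 0.5],   # one cluster of activity
    [5.1, 4.8], [5.4, 5.2], [4.9, 5.0], [5.2, 4.6], [5.0, 5.3],   # a second cluster
])

# The Dirichlet process prior on the mixture weights lets the data decide how
# many clusters are actually supported (up to n_components).
model = BayesianGaussianMixture(
    n_components=4,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(sites)

# Score a grid of candidate source locations: a higher log-density means the
# observed pattern of sites is more consistent with a source near that point.
xs, ys = np.meshgrid(np.linspace(-1, 7, 81), np.linspace(-1, 7, 81))
grid = np.column_stack([xs.ravel(), ys.ravel()])
scores = model.score_samples(grid)

best = grid[np.argmax(scores)]
print(f"Highest-scoring candidate location (toy data): x={best[0]:.2f}, y={best[1]:.2f}")
```

The paper’s actual analysis goes much further (it reckons with the doughnut-shaped geometry and sums over possible combinations of sources), but the sketch gives a flavour of why no one has to decide in advance how many anchor points a tagger might have.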

While this seems like harmless mathematical modeling, Banksy’s lawyers were sufficiently concerned over how this work would be promoted that they contacted the publisher, according to Jonathan Webb’s March 3, 2016 article for BBC (British Broadcasting Corporation) News online,

A study that tests the method of geographical profiling on Banksy has appeared, after a delay caused by an intervention from the artist’s lawyers.

Scientists at Queen Mary University of London found that the distribution of Banksy’s famous graffiti supported a previously suggested real identity.

The study was due to appear in the Journal of Spatial Science a week ago.

The BBC understands that Banksy’s legal team contacted QMUL staff with concerns about how the study was to be promoted.

Those concerns apparently centred on the wording of a press release, which has now been withdrawn.

Taylor and Francis, which publishes the journal, said that the research paper itself had not been questioned. It appeared online on Thursday [March 3, 2016] unchanged, after being placed “on hold” while conversations between lawyers took place.

The scientists conducted the study to demonstrate the wide applicability of geoprofiling – but also out of interest, said biologist Steve Le Comber, “to see whether it would work”.

The criminologist and former detective who pioneered geoprofiling, Canadian Dr Kim Rossmo [emphasis mine] – now at Texas State University in the US – is a co-author on the paper.

The researchers say their findings support the use of such profiling in counter-terrorism, based on the idea that minor “terrorism-related acts” – like graffiti – could help locate bases before more serious incidents unfold.

I believe the biologist Steve Le Comber is interested in applying the technique to epidemiology (the study of patterns in health and disease in various populations). As for Dr. Rossmo, he featured in one of the more bizarre incidents in Vancouver Police Department (VPD) history, as described in the Kim Rossmo entry on Wikipedia (Note: Links have been removed),

D. Kim Rossmo is a Canadian criminologist specializing in geographic profiling. He joined the Vancouver Police Department as a civilian employee in 1978 and became a sworn officer in 1980. In 1987 he received a master’s degree in criminology from Simon Fraser University and in 1995 became the first police officer in Canada to obtain a doctorate in criminology.[1] His dissertation research resulted in a new criminal investigative methodology called geographic profiling, based on Rossmo’s formula. This technology was integrated into a specialized crime analysis software product called Rigel. The Rigel product is developed by the software company Environmental Criminology Research Inc. (ECRI), which Rossmo co-founded.[2]

In 1995, he was promoted to detective inspector and founded a geographic profiling section within the Vancouver Police Department. In 1998, his analysis of cases of missing sex trade workers determined that a serial killer was at work, a conclusion ultimately vindicated by the arrest and conviction of Robert Pickton in 2002. A retired Vancouver police staff sergeant has claimed that animosity toward Rossmo delayed the arrest of Pickton, leaving him free to carry out additional murders.[3] His analytic results were not accepted at the time and after a dispute with senior members of the department he left in 2001. His unsuccessful lawsuit against the Vancouver Police Board for wrongful dismissal exposed considerable apparent dysfunction within that department.[1]

It still boggles my mind, and the minds of the reporters covering the story, that the VPD would dismiss someone who was being lauded internationally for his work and had helped the department solve a very nasty case. In any event, Dr. Rossmo is now at Texas State University.
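Since both The Economist excerpt and the Wikipedia entry mention the older criminal geographic targeting approach built on Rossmo’s formula, here’s a rough Python sketch of that idea as well. It is an illustration of the distance-decay-plus-buffer-zone logic described above, not Rossmo’s Rigel software, and the buffer radius and exponents are made-up values rather than calibrated parameters.

```python
# A rough sketch of the criminal geographic targeting (CGT) idea described in
# The Economist excerpt: scores fall off with distance from the crime sites,
# but a "buffer zone" around each site discounts the offender's own doorstep.
# The buffer radius and exponents below are illustrative, not calibrated.

def cgt_score(x, y, sites, buffer_radius=1.0, f=1.2, g=1.2, k=1.0):
    """Score a candidate anchor point (x, y) against a list of (x, y) crime sites."""
    total = 0.0
    for (sx, sy) in sites:
        d = abs(x - sx) + abs(y - sy)  # Manhattan distance between candidate and site
        if d > buffer_radius:
            # Outside the buffer zone: the score decays with distance.
            total += 1.0 / (d ** f)
        else:
            # Inside the buffer zone: the score is suppressed near the site itself,
            # producing the "doughnut" pattern described in the article.
            total += (buffer_radius ** (g - f)) / ((2 * buffer_radius - d) ** g)
    return k * total

# Toy usage: evaluate a few candidate points against made-up site coordinates.
sites = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]
for candidate in [(1.0, 1.5), (0.1, 0.1), (6.0, 6.0)]:
    print(candidate, round(cgt_score(*candidate, sites), 3))
```

Run against the toy coordinates, a candidate point sitting between the sites outscores one right on top of a site, which is the “doughnut” pattern The Economist describes.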

Getting back to Banksy and geographic profiling, here’s a link to and a citation for the paper,

Tagging Banksy: using geographic profiling to investigate a modern art mystery by Michelle V. Hauge, Mark D. Stevenson, D. Kim Rossmo & Steven C. Le Comber. Journal of Spatial Science. DOI: 10.1080/14498596.2016.1138246 Published online: 03 Mar 2016

This paper is behind a paywall.

For anyone curious about Banksy’s work, here’s an image from this Wikipedia entry,

Stencil on the waterline of The Thekla, an entertainment boat in central Bristol – (wider view). The section of the hull with this picture has now been removed and is on display at the M Shed museum. The image of Death is based on a nineteenth-century etching illustrating the pestilence of The Great Stink.[19] Artist: Banksy – Photographed by Adrian Pingstone