Cyberthreats, own goals, and cyber security

In light of recent extraordinary events in the US (see “The Trump Administration Accidentally Texted Me Its War Plans,” a March 24, 2025 article by Jeffrey Goldberg, editor-in-chief of The Atlantic magazine: “U.S. national-security leaders included me in a group chat about upcoming military strikes in Yemen. I didn’t think it could be real. Then the bombs started falling.”), an own goal, and given my (FrogHeart) recent spate of cyberthreat postings, I thought it would be a good idea to look a little more closely at cyberthreats and, by extension, cyber security.

How did a reporter end up in a White House chat group preparing to launch a military strike?

Tom Gerken’s March 26, 2025 article for the British Broadcasting Corporation (BBC) news online website provides a brief overview of the situation before launching into a description and discussion of the Signal security app. Note: Links have been removed,

The messaging app Signal has made headlines after the White House confirmed it was used for a secret group chat between senior US officials.

The editor-in-chief of the [sic] Atlantic, Jeffrey Goldberg, was inadvertently added to the group where plans for a strike against the Houthi group in Yemen were discussed.

Signal’s creator Matthew Rosenfeld – who is better known by the pseudonym Moxie Marlinspike – joked the “great reasons” to join the platform now included “the opportunity for the vice president of the United States of America to randomly add you to a group chat for coordination of sensitive military operations”.

But others are not seeing the funny side, with Democrat Senate leader Chuck Schumer calling it “one of the most stunning” military intelligence leaks in history and calling for an investigation.

But what actually is Signal – and how secure or otherwise were the senior politicians’ communications on it?

Signal has an estimated 40-70 million monthly users – making it pretty tiny compared to the biggest messaging services, WhatsApp and Messenger, which count their customers in the billions.

Where it does lead the way though is in security.

At the core of that is end-to-end encryption (E2EE).

Simply put, it means only the sender and the receiver can read messages – even Signal itself cannot access them.

A number of other platforms also have E2EE – including WhatsApp – but Signal’s security features go beyond this.

For example, the code that makes the app work is open source – meaning anybody can check it to make sure there are no vulnerabilities that hackers could exploit.

Its owners say it collects far less information from its users, and in particular does not store records of usernames, profile pictures, or the groups people are part of.

There is also no need to dilute these features to make more money: Signal is owned by the Signal Foundation, a US-based non-profit, which relies on donations rather than ad revenue.

“Signal is the gold standard in private comms,” said its boss Meredith Whittaker in a post on X after the US national security story became public.
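For readers curious about what “only the sender and the receiver can read messages” means mechanically, here is a deliberately simplified sketch in Python. It uses a made-up XOR-keystream construction for illustration only, not real cryptography (Signal actually uses the Signal Protocol, built on X25519 key agreement and AES-256); the point is that a relay server only ever handles ciphertext it cannot decrypt:

```python
# Toy illustration of end-to-end encryption: the relay server only ever
# handles ciphertext it cannot read. This is a made-up XOR-keystream
# construction for illustration only, NOT real cryptography -- Signal
# actually uses the Signal Protocol (X25519 key agreement, AES-256, HMAC).
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Stretch a shared key into a pseudo-random byte stream (toy KDF)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

# Sender and receiver share a key; the relay server never sees it.
shared_key = secrets.token_bytes(32)
ciphertext = encrypt(shared_key, b"sensitive message")
assert ciphertext != b"sensitive message"              # opaque to the server
assert decrypt(shared_key, ciphertext) == b"sensitive message"
```

In the real protocol the shared key is never transmitted at all; it is derived independently by each party, which is what keeps even Signal’s own servers out of the loop.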

Gerken goes on to explain why the “gold standard in private comms” was problematic, from the March 26, 2025 article,

That “gold standard claim” is what makes Signal appealing to cybersecurity experts and journalists, who often use the app.

But even that level of security is considered insufficient for very high level conversations about extremely sensitive national security matters.

That is because there is a largely unavoidable risk to communicating via a mobile phone: it is only as secure as the person that uses it.

If someone gains access to your phone with Signal open – or if they learn your password – they’ll be able to see your messages.

And no app can prevent someone peeking over your shoulder if you are using your phone in a public space.

Data expert Caro Robson, who has worked with the US administration, said it was “very, very unusual” for high ranking security officials to communicate on a messaging platform like Signal.

“Usually you would use a very secure government system that is operated and owned by the government using very high levels of encryption,” she said.

She said this would typically mean devices kept in “very secure government controlled locations”.

The US government has historically used a sensitive compartmented information facility (Scif – pronounced “skiff”) to discuss matters of national security.

Gerken notes another problem, given these were government communications, from the March 26, 2025 article, Note: A link has been removed,

There’s another issue tied to Signal that has raised concerns – disappearing messages.

Signal, like many other messaging apps, allows its users to set messages to disappear after a set period of time.

This may violate laws around record-keeping – unless those using the app forwarded on their messages to an official government account.

… as this controversy shows, no level of security or legal protection matters if you simply share your confidential data with the wrong person.

Or as one critic more bluntly put it: “Encryption can’t protect you from stupid.”

There’s a March 25, 2025 article on Salon by Lucian K. Truscott IV (… a graduate of West Point, has had a 50-year career as a journalist, novelist and screenwriter. He has covered stories such as Watergate, the Stonewall riots and wars in Lebanon, Iraq and Afghanistan. …), which provides more insight about this breach from someone who might be termed a military insider.

What about the rest of us and our cyberthreats/security?

In the wake of this scandal I’ve received two unsolicited pieces (an editorial and a commentary) on cybersecurity (both received via email on March 25, 2025). Both offer what I consider to be good tips for your own cybersecurity. That said, these are not endorsements from me.

First up, I have this “Why It’s a Bad Idea to Share Secrets, Even Via the Safest Apps” editorial by Jurgita Lapienytė for cybernews.com,

The Trump Administration discussed a secret military operation on Signal, inadvertently adding Jeffrey Goldberg, the editor-in-chief of The Atlantic, to the thread. Until the bombs started dropping in Yemen, Goldberg couldn’t believe what he was reading.

Even if Goldberg hadn’t been included in the chat, it remains a terrible idea to discuss matters of national security via any app, no matter how secure it is considered. This point, while likely to ruffle some feathers in the political arena, should also serve as a stark reminder that nothing you do online is truly anonymous.

Here’s what you should consider before confiding your secrets to technology:

  1. You are more interesting than you think.

It’s a common misconception that regular citizens like you and me are of no interest to hackers. However, a threat actor could exploit your device to gain access to your employer. By exploiting the data on your phone, a hacker could steal your identity and potentially cripple the entire organization.

  2. Don’t blindly trust what technology companies tell you.

Encrypted chat apps Signal and WhatsApp are publicly debating which one is more secure. Meredith Whittaker, the president of Signal, appears to be particularly annoyed by WhatsApp’s Will Cathcart, who suggests there are hardly any differences between WhatsApp and Signal.

While Signal is generally considered a more trustworthy choice by the security community — and it’s worth noting that WhatsApp is owned by Meta — I still recommend exercising caution when using either app.

Recall how in 2021, Proton, another security-focused company, provided the IP address of a French activist to law enforcement due to legal obligations. Many remain upset about this incident, but it also serves as a reminder, as Proton’s Andy Yen noted, that “the Internet is generally not anonymous.”

  3. Governments are increasingly asking for a backdoor.

The “good guys,” meaning law enforcement, want to have a key to your communication just in case it can be instrumental in some criminal case. Governments have long argued that end-to-end encrypted communication is an obstacle when trying to solve high-profile human trafficking, drug trafficking, and child exploitation cases, among others.

In some countries, the “good guys” might actually succeed in having those backdoors installed. While such amendments are theoretically intended to target only criminals, they set a very dangerous precedent. This is because governments often view protesters, dissidents, and political opponents as threats to national security or even sovereignty, effectively treating them as criminals.

  4. Your phone might get stolen.

Are you the only one who knows your phone’s passcode? Is it a random sequence of numbers or something more meaningful, like someone’s birthday? Imagine what would happen if Goldberg’s phone were stolen. While it’s not child’s play to unlock it, it can be cracked through brute force.

Even though Signal offers encryption, the recent leak of military plans emphasizes the need for caution, even on trusted platforms. It’s crucial for every user, including government officials, to double-check contact identities, use additional layers like two-factor authentication, and be mindful of what’s shared. No tool is foolproof, and the failure to implement proper security measures shows that awareness and caution are just as important as the technology in use.
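The brute-force risk the editorial raises is easy to quantify. The sketch below uses a hypothetical SHA-256 check standing in for a phone’s actual lock mechanism (real phones involve secure hardware, key stretching, and rate limiting); the tiny search space of a short numeric passcode is the point:

```python
# Back-of-envelope sketch of the brute-force risk: a 4-digit passcode has
# only 10,000 possibilities. The SHA-256 check below is hypothetical --
# real phones use secure hardware, key stretching, and rate limiting --
# but the tiny search space is the point.
import hashlib

def crack_pin(target_hash, digits=4):
    for n in range(10 ** digits):
        pin = str(n).zfill(digits)
        if hashlib.sha256(pin.encode()).hexdigest() == target_hash:
            return pin
    return None

stored = hashlib.sha256(b"1984").hexdigest()  # a "meaningful" birthday-year PIN
assert crack_pin(stored) == "1984"            # found within 10,000 guesses
```

A longer alphanumeric passcode multiplies the search space by orders of magnitude per character, which is why the editorial asks whether your passcode is “a random sequence of numbers or something more meaningful.”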

ABOUT THE EXPERT 

Jurgita Lapienytė is the Editor-in-Chief at Cybernews, where she leads a team of journalists and security experts dedicated to uncovering cyber threats through research, testing, and data-driven reporting. With a career spanning over 15 years, she has reported on major global events, including the 2008 financial crisis and the 2015 Paris terror attacks, and has driven transparency through investigative journalism. A passionate advocate for cybersecurity awareness and women in tech, Jurgita has interviewed leading cybersecurity figures and amplifies underrepresented voices in the industry. Recognized as the Cybersecurity Journalist of the Year and featured in Top Cyber News Magazine’s 40 Under 40 in Cybersecurity, she is a thought leader shaping the conversation around cybersecurity.

I haven’t been able to find out much about cybernews but the articles look interesting and give you some idea as to what’s happening in other parts of the world (i.e., outside Canada and the US). Here’s how the outlet describes itself on its About Us webpage, Note: A link has been removed,

We glimpse into the deep, not just trends.

Cybernews is an independent media outlet, where journalists and security experts debunk cyber by research, testing and data.

Come for breaking news, original investigations and other curious tech stories.

Our Cybernews Investigation team uses white-hat hacking techniques to find and safely disclose cybersecurity threats and vulnerabilities across the online world. Leaks of users’ personal information? Security flaws in enterprises? Exchanges of sensitive data on the dark web? We’re on it.

The Cybernews Editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders. We are working independently and transparently following our Editorial guidelines.

Next up is the “7 Ways Your Encrypted Messaging App Isn’t Protecting Your Privacy” commentary by Kee Jefferys, co-founder of Session, Note: There is a bit of self-interest in this commentary mixed in with some helpful observations, Note: Links have been removed,

With the recent revelation of sensitive war plans mistakenly being shared in a Signal group chat, the vulnerability of widely used “secure” messaging platforms like Signal, Telegram and WhatsApp has been exposed once again. While Signal is often regarded as one of the most private messaging apps available, this incident highlights the hidden risks of centralized infrastructure, metadata exposure, and identity-linked registration requirements. Kee Jefferys, Co-founder of Session—a decentralized ‘truly’ secure encrypted messaging app resolving privacy breaches other apps expose users to—can provide expert commentary on the limitations of mainstream encrypted messaging services, why government and high-security entities need stronger privacy protections, and how truly private alternatives exist. Jefferys can also discuss best practices for secure digital communication, ensuring sensitive data remains confidential in high-stakes environments. This includes the “7 Hidden Risks” of using seemingly secure messaging apps, including compromised anonymity as detailed in the narrative below, and why governments and other high-risk entities should demand messaging solutions that prioritize decentralization, no-logs policies, and open-source transparency.

7 Ways Your Encrypted Messaging App Isn’t Protecting Your Privacy

How to Choose a ‘Truly’ Secure Messenger App

In today’s digital age, instant messaging has become an integral part of our lives. We rely on these platforms for everything from casual chats to mission-critical communications. While many popular messaging apps boast “end-to-end encryption,” the reality is that they often fail to provide true privacy. The issue lies not just in the content of your messages, but in the vast amount of metadata these platforms collect.

In an era of mass surveillance, data breaches, and digital tracking, privacy-conscious users have turned to encrypted messaging apps to secure their conversations. However, while many platforms market themselves as private and secure, the reality is that they often fall short of providing true anonymity. Even the most well-known apps—like WhatsApp and Telegram—still leave users exposed in ways they may not realize.

Here’s why your encrypted messaging app might not be as private as you think.

1. Metadata Collection: The Silent Tracker

Even with end-to-end encryption, apps like WhatsApp and Telegram collect metadata, including your IP address, phone number, timestamps, and who you’re communicating with. This data can be just as revealing as the message content itself, allowing governments, corporations, and hackers to track your activities. 

End-to-end encryption protects message content, but it does nothing to stop metadata collection, which can include information like:

  • Who you are messaging
  • When you send and receive messages
  • Your IP address, location and phone number
  • The device you use

Even if a service cannot read your messages, it can still compile detailed behavioral profiles based on metadata alone. Governments, corporations, and malicious actors can analyze this data to track movements, map social networks, and infer behaviors.
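To make that concrete, here is a small sketch in Python with invented records, showing how much a handful of metadata tuples reveals without any message content at all:

```python
# Sketch: metadata alone (sender, receiver, hour of day -- no content)
# is enough to map a social graph. All records here are invented.
from collections import Counter

metadata = [
    ("alice", "bob", 23), ("alice", "bob", 23), ("alice", "carol", 9),
    ("bob", "alice", 23), ("alice", "bob", 22), ("dave", "alice", 14),
]

# Who talks to whom most often?
pairs = Counter(frozenset((s, r)) for s, r, _ in metadata)
top_pair, count = pairs.most_common(1)[0]
assert top_pair == frozenset({"alice", "bob"}) and count == 4

# When? A pattern of late-night traffic between two people is itself revealing.
late_night = [h for s, r, h in metadata if {s, r} == {"alice", "bob"} and h >= 22]
assert len(late_night) == 4
```

Six records are enough to identify the closest relationship in the set and when it is active, which is why metadata minimization matters as much as message encryption.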

2. Personal Identifier Requirements Compromise Anonymity

Apps like WhatsApp, Telegram and Signal require a phone number for registration. This links your online identity to your real-world identity, compromising your anonymity. For journalists, activists, or individuals in sensitive situations, this can be a serious risk.

3. Centralized Servers Are Vulnerable to Surveillance and Attacks

Many popular messaging apps rely on centralized servers, creating a single point of failure. These servers are vulnerable to government requests, data breaches, and corporate misuse, putting your data at risk. Centralized servers pose risks of significant exposure, including:

  • Hacks and Data Breaches: If a centralized server is compromised, vast amounts of user data can be exposed.
  • Single Point of Failure: A centralized infrastructure makes it easier for despotic governments or hackers to shut down or intercept communications.
  • Government Requests: Authorities can compel these companies to provide user data or enforce censorship.

4. Compromised Anonymity: Not All Encryption Is Equal

While some apps advertise end-to-end encryption, they may not be using it by default in all scenarios. For example:

  • Telegram does not use end-to-end encryption by default; users must specifically start “Secret Chats” to enable it. This allows Telegram’s server operators to read the content of the vast majority of messages stored on its servers.
  • Some apps use proprietary encryption methods that have not been independently audited.
  • Some platforms allow unencrypted backups, meaning your messages can be accessed if a backup is compromised.

5. Tracking Pixels and Link Previews Leak Data

Some apps generate link previews by fetching URLs in the background. This can expose your IP address to third parties or even result in unwanted metadata leaks. Tracking pixels embedded in messages can also report when, where, and by whom a message was viewed.

6. Logging and Data Retention Policies

Even if messages are encrypted, some services keep logs of:

  • Login activity
  • Connection times
  • IP addresses
  • Contacts lists

If this data is stored, it can be subpoenaed, hacked, or otherwise exploited.

7. Lack of Transparency

While some apps use robust encryption protocols, their closed-source nature limits transparency. Without public scrutiny and independent audits, it’s difficult to verify their security claims.

How to Choose a Truly Private Messenger

If you’re serious about privacy, you need a messaging app that prioritizes security beyond just encryption. Here’s what to look for:

  • No Phone Number or Email Required. Your messaging app should not require personally identifiable information like a phone number or email address to register. Instead, look for apps that generate anonymous cryptographically secure identifiers, fully protecting your anonymity.
  • Decentralized Infrastructure. Choose a platform that operates on a decentralized network rather than centralized servers. This reduces the risk of surveillance, censorship, and single points of failure. Optimal solutions use community-operated nodes to route and store messages. This eliminates single points of failure and enhances censorship resistance. 
  • Metadata Minimization. A truly private messenger should collect and create as little metadata as possible—or none at all. Look for a “no logs” policy and open-source transparency. Ensure that even the developers of the app don’t know who you’re communicating with. 
  • Open-Source and Audited Encryption. Only trust messaging apps with publicly available, open-source encryption protocols that have been independently audited. Open-source code allows for public scrutiny and independent audits, which ensures transparency and builds trust.
  • Onion Routing or Multi-Hop Encryption. For enhanced privacy, apps should use onion routing or multi-hop routing to obscure sender and receiver identities. This technology masks your IP address and location, adding an extra layer of privacy and making it extremely difficult to track you.
  • Non-Profit Governance: Give precedence to apps run by non-profits and foundations, which can ensure that the app’s development is driven by privacy and security, rather than extracting value from users’ data.

If you value real privacy, don’t just settle for encryption—demand anonymity, decentralization, and complete metadata resistance. By eliminating the creation and collection of metadata, users can send messages—not metadata. In a digital landscape where privacy is constantly under attack, choosing a truly secure messaging app is more critical today than ever before.

~~~

Kee Jefferys is Co-founder of Session—an end-to-end open-source, privacy-focused encrypted messaging app that prioritizes anonymity, security, and decentralization while maintaining the familiar features of mainstream messaging applications but prohibiting sensitive metadata collection that others allow. It’s designed for people who want privacy and freedom from any forms of surveillance. He can be reached at https://getsession.org.

There is a Wikipedia entry for Session, Note: Links have been removed,

Session is a cross-platform end-to-end encrypted instant messaging application emphasizing user confidentiality and anonymity. Developed by The Oxen Project under the non-profit Oxen Privacy Tech Foundation [emphasis mine], it employs a blockchain-based decentralized network for transmission. Users can send one-to-one and group messages, including various media types such as files, voice notes, images, and videos.[3]

Session provides applications for various platforms, such as macOS, Windows, and Linux, along with mobile clients available on both iOS and Android.

I looked up Oxen and found two different sites and, possibly, two different organizations. Here’s oxen.io,

What is Oxen?

Oxen is many things. A private cryptocurrency. A secure messaging platform. A network anonymity layer. A vision for a future where privacy is effortless.

We provide a range of tools and services powered by the Oxen network, enabling people all over the world to leverage the power of decentralised blockchain networks to achieve unparalleled privacy and security as they work, play, and live their day-to-day lives on the internet. But this isn’t a plan we have for the future — our suite of privacy tools already exists, and it is already used by over half a million people.

Then, there’s this Oxen Privacy Tech Foundation (optf.gov), from the About page,

Meet the Oxen Privacy Tech Foundation

We’re a passionate team of advocates, creatives, and engineers building a world where the internet is open, software is free and accessible, and your privacy is protected.

I’m not sure what to make of the two Oxen. Bottom line: exercise caution and both pieces (editorial and commentary) offer good advice.

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead) for the first half.

Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to the “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chan and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.[emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to its near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in for user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week, moving ahead with initiatives of their own.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map, both as a place to build AI businesses and as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit, a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity” and another November 1, 2023 article, this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework’.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy, perhaps even gossipy, news about who may not be attending the event), but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and is the birthplace of leading AI companies such as Google DeepMind. It has also invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This has recently been reported by the BBC, in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC news online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?