The Artificial Intelligence (AI) Action Summit held from February 10 – 11, 2025 in Paris seems to have been pretty exciting: President Emmanuel Macron announced a 109B euro investment in the French AI sector on February 10, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),
French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.
Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”
He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.
Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said.
The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.
…
Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.
…
“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.
“He nodded, and noted it, but it wasn’t a longer exchange than that.”
…
Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].
…
Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,
JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.
The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.
The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.
Macron also called on “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, energy, and “resynchronize with the rest of the world.”
“We are most of the time too slow,” he said.
The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.
…
Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.
“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.
Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.
…
Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.
Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”
But the U.S.-China rivalry overshadowed broader international talks.
…
The U.S.-China rivalry didn’t entirely overshadow the talks. At least one Chinese former diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,
A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.
Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.
She also said tensions between the US and China were impeding the ability to develop AI safely.
…
… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.
…
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks.
Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.
The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.
Towards a common understanding
The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics.
In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal.
As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.
The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.
Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably.
Three distinct categories of AI risks are identified:
Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons;
System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems;
Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.
The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace.
While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.
Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future.
“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.
“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.”
There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.
I was taught in high school that the US was running out of its resources and that Canada still had much of its resources. That was decades ago. As well, throughout the years, usually during a vote in Québec about separating, I’ve heard rumblings about the US absorbing part or all of Canada as something they call ‘Manifest Destiny,’ which dates back to the 19th century.
Unlike the previous forays into Manifest Destiny, this one has not been precipitated by any discussion of separation.
Manifest Destiny
It took a while for that phrase to emerge this time, but when it finally did, the Canadian Broadcasting Corporation (CBC) online news published a January 19, 2025 article by Ainsley Hawthorn providing some context for the term, Note: Links have been removed,
U.S. president-elect Donald Trump says he’s prepared to use economic force to turn Canada into America’s 51st state, and it’s making Canadians — two-thirds of whom believe he’s sincere — anxious.
But the last time Canada faced the threat of American annexation, it united us more than ever before, leading to the foundation of our country as we know it today.
In the 1860s, several prominent U.S. politicians advocated for annexing the colonies of British North America.
“I look on Rupert’s Land [modern-day Manitoba and parts of Alberta, Saskatchewan, Nunavut, Ontario, and Quebec] and Canada, and see how an ingenious people and a capable, enlightened government are occupied with bridging rivers and making railroads and telegraphs,” Secretary of State William Henry Seward told a crowd in St. Paul, Minn. while campaigning on behalf of presidential candidate Abraham Lincoln.
“I am able to say, it is very well; you are building excellent states to be hereafter admitted into the American Union.”
Seward believed in Manifest Destiny, the doctrine that the United States would inevitably expand across the entire North American continent. While he seems to have preferred to acquire territory through negotiation rather than aggression, Canadians weren’t wholly assured of America’s peaceful intentions.
…
In the late 1850s and early 1860s, Canadian parliament had been so deadlocked it had practically come to a standstill. Within just a few years, American pressure created a sense of unity so great it led to Confederation.
The current conversation around annexation is likewise uniting Canada’s leaders to a degree we’ve rarely seen in recent years.
Representatives across the political spectrum are sharing a common message, the same message as British North Americans in the late nineteenth century: despite our problems, Canadians value Canada.
Critical minerals and water
Prime Minister Justin Trudeau had a few comments to make about US President Donald Trump’s motivation for ‘absorbing’ Canada as the 51st state, from a February 7, 2025 CBC news online article by Peter Zimonjic,
Prime Minister Justin Trudeau told business leaders at the Canada-U.S. Economic Summit in Toronto that U.S. President Donald Trump’s threat to annex Canada “is a real thing” motivated by his desire to tap into the country’s critical minerals.
“Mr. Trump has it in mind that the easiest way to do it is absorbing our country and it is a real thing,” Trudeau said, before a microphone cut out at the start of the closed-door meeting.
The prime minister made the remarks to more than 100 business leaders after delivering an opening address to the summit Friday morning [February 7, 2025], outlining the key issues facing the country when it comes to Canada’s trading relationship with the U.S.
After the opening address, media were ushered out of the room when a microphone that was left on picked up what was only meant to be heard by attendees [emphasis mine].
…
Automotive Parts Manufacturers’ Association president Flavio Volpe was in the room when Trudeau made the comments. He said the prime minister went on to say that Trump is driven because the U.S. could benefit from Canada’s critical mineral resources.
…
There was more, from a February 7, 2025 article by Nick Taylor-Vaisey for Politico, Note: A link has been removed,
…
In remarks caught on tape by The Toronto Star, Trudeau suggested the president is keenly aware of Canada’s vast mineral resources. “I suggest that not only does the Trump administration know how many critical minerals we have but that may be even why they keep talking about absorbing us and making us the 51st state,” Trudeau said.
…
All of this reminded me of US President Joe Biden’s visit to Canada and his interest in critical minerals, which I mentioned briefly in my comments about the 2023 federal budget, from my April 17, 2023 posting (scroll down to the ‘Canadian economic theory (the staples theory), mining, nuclear energy, quantum science, and more’ subhead),
Critical minerals are getting a lot of attention these days. (They were featured in the 2022 budget, see my April 19, 2022 posting, scroll down to the Mining subhead.) This year, US President Joe Biden, in his first visit to Canada as President, singled out critical minerals at the end of his 28 hour state visit (from a March 24, 2023 CBC news online article by Alexander Panetta; Note: Links have been removed),
There was a pot of gold at the end of President Joe Biden’s jaunt to Canada. It’s going to Canada’s mining sector.
The U.S. military will deliver funds this spring to critical minerals projects in both the U.S. and Canada. The goal is to accelerate the development of a critical minerals industry on this continent.
The context is the United States’ intensifying rivalry with China.
The U.S. is desperate to reduce its reliance on its adversary for materials needed to power electric vehicles, electronics and many other products, and has set aside hundreds of millions of dollars under a program called the Defence Production Act.
The Pentagon already has told Canadian companies they would be eligible to apply. It has said the cash would arrive as grants, not loans.
On Friday [March 24, 2023], before Biden left Ottawa, he promised they’ll get some.
The White House and the Prime Minister’s Office announced that companies from both countries will be eligible this spring for money from a $250 million US fund.
Which Canadian companies? The leaders didn’t say. Canadian officials have provided the U.S. with a list of at least 70 projects that could warrant U.S. funding.
…
“Our nations are blessed with incredible natural resources,” Biden told Canadian parliamentarians during his speech in the House of Commons.
“Canada in particular has large quantities of critical minerals [emphasis mine] that are essential for our clean energy future, for the world’s clean energy future.
…
I don’t think there’s any question that the US knows how much, where, and how easily ‘extractable’ Canadian critical minerals might be.
Pressure builds
On the same day (Monday, February 3, 2025) that the tariffs were postponed for a month, Trudeau had two telephone calls with US President Donald Trump. According to a February 9, 2025 article by Steve Chase and Stefanie Marotta for the Globe and Mail, Trump and his minions are exploring the possibility of acquiring Canada by means other than a trade war or economic domination,
…
He [Trudeau] talked about two phone conversations he had with Mr. Trump on Monday [February 3, 2025] before the President agreed to delay steep tariffs on Canadian goods for 30 days.
During the calls, the Prime Minister recalled Mr. Trump referred to a four-page memo that included a list of grievances he had with Canadian trade and commercial rules, including the President’s false claim that US banks are unable to operate in Canada. …
In the second conversation with Mr. Trump on Monday, the Prime Minister told the summit, the President asked him whether he was familiar with the Treaty of 1908, a pact between the United States and Britain that defined the border between the United States and Canada. He told Mr. Trudeau he should look it up.
Mr. Trudeau told the summit he thought the treaty had been superseded by other developments such as the repatriation of the Canadian Constitution – in other words, that the border cannot be dissolved by repealing that treaty. He told the audience that international law would prevent the dissolution of the 1908 Treaty from leading to the erasure of the border. For example, various international laws define sovereign borders, including the United Nations Charter, of which both countries are signatories and which includes protections for territorial integrity.
A source familiar with the calls said Mr. Trump’s reference to the 1908 Treaty was taken as an implied threat. … [p. A3 in paper version]
I imagine Mr. Trump and/or his minions will keep trying to find one pretext or another for this attempt to absorb or annex or wage war (economically or otherwise) on Canada.
What makes Canadian (and Greenlandic) minerals and water so important?
You may have noticed the January 21, 2025 announcement by Mr. Trump about the ‘Stargate Project,’ a proposed US $500B AI infrastructure company (you can find more about the Stargate Project (Stargate LLC) in its Wikipedia entry).
Most likely not a coincidence, on February 10, 2025, French President Emmanuel Macron announced a 109B euro investment in the French AI sector, from the February 9, 2025 Reuters preannouncement article,
France will announce private sector investments totalling some 109 billion euros ($112.5 billion [US]) in its artificial intelligence sector during the Paris AI summit which opens on Monday, President Emmanuel Macron said.
The financing includes plans by Canadian investment firm [emphasis mine] Brookfield to invest 20 billion euros in AI projects in France and financing from the United Arab Emirates which could hit 50 billion euros in the years ahead, Macron’s office said.
…
Big projects, non? It’s no surprise critical minerals will be necessary but the need for massive amounts of water may be. My October 16, 2023 posting focuses on water and AI development, specifically ChatGPT-4,
A September 9, 2023 news item (an Associated Press article by Matt O’Brien and Hannah Fingerhut) on phys.org, also published September 12, 2023 on the Iowa Public Radio website, describes an unexpected cost for building ChatGPT and other AI agents, Note: Links [in the excerpt] have been removed,
The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water [emphases mine], pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”
…
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]
…
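Those figures are easy to sanity-check. Here is a quick back-of-the-envelope calculation of my own (a sketch in Python, not something from the article), assuming an Olympic-sized pool holds roughly 2,500 cubic metres, or about 660,000 US gallons,

# Back-of-the-envelope check of the Microsoft water figures quoted above.
# Assumption (mine, not the article's): one Olympic-sized pool ≈ 2,500 m³ ≈ 660,000 US gallons.
GALLONS_PER_OLYMPIC_POOL = 660_000

consumption_2022 = 1.7e9                 # "nearly 1.7 billion gallons"
pools = consumption_2022 / GALLONS_PER_OLYMPIC_POOL
implied_2021 = consumption_2022 / 1.34   # backing out the reported 34% spike

print(f"2022 consumption is roughly {pools:,.0f} Olympic-sized pools")                  # ~2,575
print(f"Implied 2021 consumption is roughly {implied_2021 / 1e9:.2f} billion gallons")  # ~1.27

The result, roughly 2,575 pools, is consistent with the article’s “more than 2,500 Olympic-sized swimming pools.”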
As for how much water was diverted in Iowa for a data centre project, from my October 16, 2023 posting,
…
Jason Clayworth’s September 18, 2023 article for AXIOS describes the issue from the Iowan perspective, Note: Links [from the excerpt] have been removed,
Future data center projects in West Des Moines will only be considered if Microsoft can implement technology that can “significantly reduce peak water usage,” the Associated Press reports.
Why it matters: Microsoft’s five WDM data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.
Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.
…
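Taking those AXIOS numbers at face value, the implied scale of the utility’s total peak-summer demand is easy to back out; again, a small sketch of my own arithmetic rather than a figure from the article,

# Implied total peak-month usage for West Des Moines Water Works, taking the
# AXIOS figures quoted above at face value (illustrative arithmetic only).
datacenter_monthly_gallons = 11.5e6   # "as much as 11.5 million gallons of water a month"
share_of_total = 0.06                 # "about 6% of WDM's total usage"
implied_total = datacenter_monthly_gallons / share_of_total
print(f"Implied WDM peak-month usage is roughly {implied_total / 1e6:.0f} million gallons")  # ~192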
The bottom line is that these technologies consume a lot of water and require critical minerals.
Greenland
Evan Dyer’s January 16, 2025 article for CBC news online describes both US military strategic interests and hunger for resources, Note 1: Article links have been removed; Note 2: I have added one link to a Wikipedia entry,
The person who first put a bug in Donald Trump’s ear about Greenland — if a 2022 biography is to be believed — was his friend Ronald Lauder, a New York billionaire and heir to the Estée Lauder cosmetics fortune.
But it would be wrong to believe that U.S. interest in Greenland originated with idle chatter at the country club, rather than real strategic considerations.
Trump’s talk of using force to annex Greenland — which would be an unprovoked act of war against a NATO ally — has been rebuked by Greenlandic, Danish and European leaders. A Fox News team that travelled to Greenland’s capital Nuuk reported back to the Trump-friendly show Fox & Friends that “most of the people we spoke with did not support Trump’s comments and found them offensive.”
…
Certainly, military considerations motivated the last U.S. attempt at buying Greenland in 1946.
…
The military value to the U.S. of acquiring Greenland is much less clear in 2025 than it was in 1946.
Russian nuclear submarines no longer need to traverse the GIUK [the GIUK gap; “(sometimes written G-I-UK) is an area in the northern Atlantic Ocean that forms a naval choke point. Its name is an acronym for Greenland, Iceland, and the United Kingdom, the gap being the two stretches of open ocean among these three landmasses.”]. They can launch their missiles from closer to home.
And in any case, the U.S. already has a military presence on Greenland, used for early warning, satellite tracking and marine surveillance. The Pentagon simply ignored Denmark’s 1957 ban on nuclear weapons on Greenlandic territory. Indeed, an American B-52 bomber carrying four hydrogen bombs crashed in Greenland in 1968.
“The U.S. already has almost unhindered access [emphasis mine], and just building on their relationship with Greenland is going to do far more good than talk of acquisition,” said Dwayne Menezes, director of the Polar Research and Policy Initiative in London.
The complication, he says, is Greenland’s own independence movement. All existing defence agreements involving the U.S. presence in Greenland are between Washington and the Kingdom of Denmark. [emphasis mine]
“They can’t control what’s happening between Denmark and Greenland,” Menezes said. “Over the long term, the only way to mitigate that risk altogether is by acquiring Greenland.”
Menezes also doesn’t believe U.S. interest in Greenland is purely military.
And Trump’s incoming national security adviser Michael Waltz [emphasis mine] appeared to confirm as much when asked by Fox News why the administration wanted Greenland.
“This is about critical minerals, this is about natural resources [emphasis mine]. This is about, as the ice caps pull back, the Chinese are now cranking out icebreakers and are pushing up there.”
…
While the United States has an abundance of natural resources, it risks coming up short in two vital areas: rare-earth minerals and freshwater.
Greenland’s apparent barrenness belies its richness in those two key 21st-century resources.
The U.S. rise to superpower was driven partly by the good fortune of having abundant reserves of oil, which fuelled its industrial growth. The country is still a net exporter of petroleum.
China, Washington’s chief strategic rival, had no such luck. It has to import more than two-thirds of its oil, and is now importing more than six times as much as it did in 2000.
But the future may not favour the U.S. as much as the past.
…
I stand corrected, where oil is concerned. From Dyer’s January 16, 2025 article, Note: Links have been removed,
…
It’s China, and not the U.S., that nature blessed with rich deposits of rare-earth elements, a collection of 17 metals such as yttrium and scandium that are increasingly necessary for high-tech applications from cellphones and flat-screen TVs to electric cars.
The rare-earth element neodymium is an essential part of many computer hard drives and defence systems including electronic displays, guidance systems, lasers, radar and sonar.
Three decades ago, the U.S. produced a third of the world’s rare-earth elements, and China about 40 per cent. By 2011, China had 97 per cent of world production, and its government was increasingly limiting and controlling exports.
The U.S. has responded by opening new mines and spurring recovery and recycling to reduce dependence on China.
…
Such efforts have allowed the U.S. to claw back about 20 per cent of the world’s annual production of rare-earth elements. But that doesn’t change the fact that China has about 44 million tonnes of reserves, compared to fewer than two million in the U.S.
“There’s a huge dependency on China,” said Menezes. “It offers China the economic leverage, in the midst of a trade war in particular, to restrict supply to the West, thus crippling industries like defence, the green transition. This is where Greenland comes in.”
Greenland’s known reserves are almost equivalent to those of the entire U.S., and much more may lie beneath its icebound landscape.
“Greenland is believed to be able to meet at least 25 per cent of global rare-earth demand well into the future,” he said.
An abundance of freshwater
The melting ice caps referenced by Trump’s nominee for national security adviser are another Greenlandic resource the world is increasingly interested in.
Seventy per cent of the world’s freshwater is locked up in the Antarctic ice cap. Of the remainder, two-thirds is in Greenland, in a massive ice cap that is turning to liquid at nearly twice the volume of melting in Antarctica.
“We know this because you can weigh the ice sheet from satellites,” said Christian Schoof, a professor of Earth, ocean and atmospheric sciences at the University of British Columbia who spent part of last year in Greenland studying ice cap melting.
“The ice sheet is heavy enough that it affects the orbit of satellites going over it. And you can record the change in that acceleration of satellites due to the ice sheet over time, and directly weigh the ice sheet.”
…
“There is a growing demand for freshwater on the world market, and the use of the vast water potential in Greenland may contribute to meeting this demand,” the Greenland government announces on its website.
The Geological Survey of Denmark and Greenland found 10 locations that were suitable for the commercial exploitation of Greenland’s ice and water, and has already issued a number of licenses.
…
Schoof told CBC News that past projects that attempted to tow Greenlandic ice to irrigate farms in the Middle East “haven’t really taken off … but humans are resourceful and inventive, and we face some really significant issues in the future.”
For the U.S., those issues include the 22-year-long “megadrought” which has left the western U.S. [emphases mine] drier than at any time in the past 1,200 years, and which is already threatening the future of some American cities.
…
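If the two quoted figures hold, a little arithmetic says Greenland’s ice cap holds roughly 0.30 × 2/3 = 0.20, or about 20 per cent, of the world’s freshwater, which helps put the licensing activity mentioned above in context.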
As important as they are, there’s more than critical minerals and water, according to Dyer’s January 16, 2025 article,
…
Even the “rock flour” that lies under the ice cap could have great commercial and strategic importance.
Ground into nanoparticles by the crushing weight of the ice, research has revealed it to have almost miraculous properties, says Menezes.
“Scientists have found that Greenlandic glacial flour has a particular nutrient composition that enables it to be regenerative of soil conditions elsewhere,” he told CBC News. “It improves agricultural yields. It has direct implications for food security.”
Spreading Greenland rock flour on corn fields in Ghana produced a 30 to 50 per cent increase in crop yields. Similar yield gains occurred when it was spread on Danish fields that produce the barley for Carlsberg beer.
…
Canada
It’s getting a little tiring keeping up with Mr. Trump’s tariff tear (using ‘tear’ as a verbal noun; from the Cambridge dictionary, verb: TEAR definition: 1. to pull or be pulled apart, or to pull pieces off: 2. to move very quickly …).
The bottom line is that Mr. Trump wants something and certainly Canadian critical minerals and water constitute either his entire interest or, at least, his main interest for now, with more to be determined later.
Niall McGee’s February 9, 2025 article for the Globe and Mail provides an overview of the US’s dependence on Canada’s critical minerals,
…
The US relies on Canada for a huge swath of its critical mineral imports, including 40 per cent of its primary nickel for its defence industry, 30 per cent of its uranium, which is used in its nuclear-power fleet, and 79 per cent of its potash for growing crops.
The US produces only small amounts of all three, while Canada is the world’s biggest potash producer, the second biggest in uranium, and number six in nickel.
If the US wants to buy fewer critical minerals from Canada, in many cases it would be forced to source them from hostile countries such as Russia and China.
…
Vancouver-based Teck Resources Ltd. is one of the few North American suppliers of germanium. The critical mineral is used in fibre-optic networks, infrared vision systems and solar panels. The US relies on Canada for 23 per cent of its imports of germanium.
China in December [2024] banned exports of the critical mineral to the US, citing national security concerns. The ban raised fears of possible shortages for the US.
“It’s obvious we have a lot of what Trump wants to support America’s ambitions, from both an economic and a geopolitical standpoint,” says Martin Turenne, CEO of Vancouver-based FPX Nickel Corp., which is developing a massive nickel project in British Columbia. [p. B5 paper version]
…
Akshay Kulkarni’s January 15, 2025 article for CBC news online provides more details about British Columbia and its critical minerals, Note: Links have been removed,
…
The premier had suggested Tuesday [January 14, 2025] that retaliatory tariffs and export bans could be part of the response, and cited a smelter operation located in Trail, B.C. [emphasis mine; keep reading], which exports minerals that Eby [Premier of British Columbia, David Eby] said are critical for the U.S.
…
The U.S. and Canada both maintain lists of critical minerals — ranging from aluminum and tin to more obscure elements like ytterbium and hafnium — that both countries say are important for defence, energy production and other key areas.
Michael Goehring, the president of the Mining Association of B.C., said B.C. has access to or produces 16 of the 50 minerals considered critical by the U.S.
[Image caption from the CBC article: Individual atoms of silicon and germanium are seen following an Atomic Probe Tomography (APT) measurement at Polytechnique Montreal. Both minerals are manufactured in B.C. (Christinne Muschi/The Canadian Press)]
“We have 17 critical mineral projects on the horizon right now, along with a number of precious metal projects,” he told CBC News on Tuesday [January 14, 2025].
“The 17 critical mineral projects alone represent some $32 billion in potential investment for British Columbia,” he added.
John Steen, director of the Bradshaw Research Institute for Minerals and Mining at the University of B.C., pointed to germanium — which is manufactured at Teck’s facility in Trail [emphasis mine] — as one of the materials most important to U.S industry.
…
There are a number of mines and manufacturing facilities across B.C. and Canada for critical minerals.
The B.C. government says the province is Canada’s largest producer of copper, and only producer of molybdenum, which are both considered critical minerals.
…
There’s also graphite, not in BC but in Québec. This April 8, 2023 article by Christian Paas-Lang for CBC news online focuses largely on issues of how to access and exploit graphite and also, importantly, indigenous concerns, but this excerpt focuses on graphite as a critical mineral,
A mining project might not be what comes to mind when you think of the transition to a lower emissions economy. But embedded in electric vehicles, solar panels and hydrogen fuel storage are metals and minerals that come from mines like the one in Lac-des-Îles, Que.
The graphite mine, owned by the company Northern Graphite, is just one of many projects aimed at extracting what are now officially dubbed “critical minerals” — substances of significant strategic and economic importance to the future of national economies.
Lac-des-Îles is the only significant graphite mining project in North America, accounting for Canada’s contribution to an industry dominated by China.
…
There was another proposed graphite mine in Québec, which encountered significant push back from the local Indigenous community as noted in my November 26, 2024 posting, “Local resistance to Lomiko Metals’ Outaouais graphite mine.” The posting also provides a very brief update of graphite mining in Canada.
It seems to me that water does not get the attention that it should and that’s why I lead with water in my headline. Eric Reguly’s February 9, 2025 article in the Globe and Mail highlights some of the water issues facing the US, not just Iowa,
…
Water may be the real reason, or one of the top reasons, propelling his [Mr. Trump’s] desire to turn Canada into Minnesota North. Canadians represent 0.5 per cent of the globe’s population yet sit on 20% or more of its fresh water. Vast tracts of the United States routinely suffer from water shortages, which are drying up rivers – the once mighty Colorado River no longer reaches the Pacific Ocean – shrinking aquifers beneath farmland and preventing water-intensive industries from building factories. Warming average temperatures will intensify the shortages. [p. B2 in paper version]
…
Reguly is more interested in the impact water shortages have on industry. He also offers a brief history of US interest in acquiring Canadian water resources dating back to the first North American Free Trade Agreement (NAFTA), which came into effect on January 1, 1994.
A March 6, 2024 article by Elia Nilsen for CNN television news online details Colorado River geography and gives you a sense of just how serious the situation is, Note: Links have been removed,
Seven Western states are starting to plot a future for how much water they’ll draw from the dwindling Colorado River in a warmer, drier world.
The river is the lifeblood for the West – providing drinking water for tens of millions, irrigating crops, and powering homes and industry with hydroelectric dams.
…
This has bought states more time to figure out how to divvy up the river after 2026, when the current operating guidelines expire.
To that end, the four upper basin river states of Colorado, Utah, New Mexico and Wyoming submitted their proposal for how future cuts should be divvied up among the seven states to the federal government on Tuesday [March 5, 2024], and the three lower basin states of California, Arizona and Nevada submitted their plan on Wednesday [March 6, 2024].
One thing is clear from the competing plans: The two groups of states do not agree so far on who should bear the brunt of future cuts if water levels drop in the Colorado River basin.
…
As of a December 12, 2024 article by Shannon Mullane for watereducationcolorado.org, the states are still wrangling and they are not the only interested parties, Note: A link has been removed,
… officials from seven states are debating the terms of a new agreement for how to store, release and deliver Colorado River water for years to come, and they have until 2026 to finalize a plan. This month, the tone of the state negotiations soured as some state negotiators threw barbs and others called for an end to the political rhetoric and saber-rattling.
…
The state negotiators are not the only players at the table: Tribal leaders, federal officials, environmental organizations, agricultural groups, cities, industrial interests and others are weighing in on the process.
…
Water use from the Colorado river has international implications as this February 5, 2025 essay (Water is the other US-Mexico border crisis, and the supply crunch is getting worse) by Gabriel Eckstein, professor of law at Texas A&M University and Rosario Sanchez, senior research scientist at Texas Water Resources Institute and at Texas A&M University for The Conversation makes clear, Note: Links have been removed,
…
The Colorado River provides water to more than 44 million people, including seven U.S. and two Mexican states, 29 Indian tribes and 5.5 million acres of farmland. Only about 10% of its total flow reaches Mexico. The river once emptied into the Gulf of California, but now so much water is withdrawn along its course that since the 1960s it typically peters out in the desert.
…
At least 28 aquifers – underground rock formations that contain water – also traverse the border. With a few exceptions, very little information on these shared resources exists. One thing that is known is that many of them are severely overtapped and contaminated.
Nonetheless, reliance on aquifers is growing as surface water supplies dwindle. Some 80% of groundwater used in the border region goes to agriculture. The rest is used by farmers and industries, such as automotive and appliance manufacturers.
Over 10 million people in 30 cities and communities throughout the border region rely on groundwater for domestic use. Many communities, including Ciudad Juarez; the sister cities of Nogales in both Arizona and Sonora; and the sister cities of Columbus in New Mexico and Puerto Palomas in Chihuahua, get all or most of their fresh water from these aquifers.
…
A booming region
About 30 million people live within 100 miles (160 kilometers) of the border on both sides. Over the next 30 years, that figure is expected to double.
Municipal and industrial water use throughout the region is also expected to increase. In Texas’ lower Rio Grande Valley, municipal use alone could more than double by 2040.
At the same time, as climate change continues to worsen, scientists project that snowmelt will decrease and evaporation rates will increase. The Colorado River’s baseflow – the portion of its volume that comes from groundwater, rather than from rain and snow – may decline by nearly 30% in the next 30 years.
Precipitation patterns across the region are projected to be uncertain and erratic for the foreseeable future. This trend will fuel more extreme weather events, such as droughts and floods, which could cause widespread harm to crops, industrial activity, human health and the environment.
Further stress comes from growth and development. Both the Colorado River and Rio Grande are tainted by pollutants from agricultural, municipal and industrial sources. Cities on both sides of the border, especially on the Mexican side, have a long history of dumping untreated sewage into the Rio Grande. Of the 55 water treatment plants located along the border, 80% reported ongoing maintenance, capacity and operating problems as of 2019.
Drought across the border region is already stoking domestic and bilateral tensions. Competing water users are struggling to meet their needs, and the U.S. and Mexico are straining to comply with treaty obligations for sharing water [emphasis mine].
…
Getting back to Canada and water, Reguly’s February 9, 2025 article notes Mr. Trump’s attitude towards our water,
…
Mr. Trump’s transaction-oriented brain knows that water availability translates into job availability. If Canada were forced to export water by bulk to the United States, Canada would in effect be exporting jobs and America absorbing them. In the fall [2024] when he was campaigning, he called British Columbia “essentially a very large faucet” [emphasis mine] that could be used to overcome California’s permanent water deficit.
…
In Canada’s favour, Canadians have been united in their opposition to bulk water exports. That sentiment is codified in the Transboundary Waters Protection Act, which bans large scale removal from waterways shared with the United States. … [p. B2 in paper version]
…
It’s reassuring to read that we have some rules regarding water removal but British Columbia also has a water treaty with the US, the Columbia River Treaty, and an update to it lingers in limbo as Kirk Lapointe notes in his February 6, 2025 article for vancouverisawesome.com. Lapointe mentions shortcomings on both sides of the negotiating table for the delay in ratifying the update while expressing concern over Mr. Trump’s possible machinations should this matter cross his radar.
What about Ukraine’s critical minerals?
A February 13, 2025 article by Geoff Nixon for CBC news online provides some of the latest news on the situation between the US and Ukraine, Note: Links have been removed,
Ukraine has clearly grabbed the attention of U.S. President Donald Trump with its apparent willingness to share access to rare-earth resources with Washington, in exchange for its continued support and security guarantees.
Trump wants what he calls “equalization” for support the U.S. has provided to Ukraine in the wake of Russia’s full-scale invasion. And he wants this payment in the form of Ukraine’s rare earth minerals, metals “and other things,” as the U.S. leader put it last week.
U.S. Treasury Secretary Scott Bessent has travelled to Ukraine to discuss the proposition, which was first raised with Trump last fall [2024], telling reporters Wednesday [February 12, 2025] that he hoped a deal could be reached within days.
Bessent says such a deal could provide a “security shield” in post-war Ukraine. Ukrainian President Volodymyr Zelenskyy, meanwhile, said in his daily address that it would both strengthen Ukraine’s security and “give new momentum to our economic relations.”
But just how much trust can Kyiv put in a Trump-led White House to provide support to Ukraine, now and in the future? Ukraine may not be in a position to back away from the offer, with Trump’s interest piqued and U.S. support remaining critical for Kyiv after nearly three years of all-out war with Russia.
“I think the problem for Ukraine is that it doesn’t really have much choice,” said Oxana Shevel, an associate professor of political science at Boston’s Tufts University.
…
Then there’s the issue of the Ukrainian minerals, which have to remain in Kyiv’s hands in order for the U.S. to access them — a point Zelenskyy and other Ukraine officials have underlined.
There are more than a dozen elements considered to be rare earths, and Ukraine’s Institute of Geology says those that can be found in Ukraine include lanthanum, cerium, neodymium, erbium and yttrium. EU-funded research also indicates that Ukraine has scandium reserves. But the details of the data are classified.
Rare earths are used in manufacturing magnets that turn power into motion for electric vehicles, in cellphones and other electronics, as well as for scientific and industrial applications.
…
Trump has said he wants the equivalent of $500 billion US in rare earth minerals.
Yuriy Gorodnichenko, a professor of economics at the University of California, Berkeley, says any effort to develop and extract these resources won’t happen overnight and it’s unclear how plentiful they are.
“The fact is, nobody knows how much you have for sure there and what is the value of that,” he said in an interview.
“It will take years to do geological studies,” he said. “Years to build extraction facilities.”
…
Just how desperate is the US?
Yes, the United States has oil but it doesn’t have much in the way of materials it needs for the new technologies and it’s running out of something very basic: water.
I don’t know how desperate the US is but Mr. Trump’s flailings suggest that the answer is very, very desperate.
There’s been quite the kerfuffle over DeepSeek during the last few days. This January 27, 2025 article by Alexandra Mae Jones for the Canadian Broadcasting Corporation (CBC) news online was my introduction to DeepSeek AI, Note: A link has been removed,
There’s a new player in AI on the world stage: DeepSeek, a Chinese startup that’s throwing tech valuations into chaos and challenging U.S. dominance in the field with an open-source model that they say they developed for a fraction of the cost of competitors.
DeepSeek’s free AI assistant — which by Monday [January 27, 2025] had overtaken rival ChatGPT to become the top-rated free application on Apple’s App Store in the United States — offers the prospect of a viable, cheaper AI alternative, raising questions on the heavy spending by U.S. companies such as Apple and Microsoft, amid a growing investor push for returns.
U.S. stocks dropped sharply on Monday [January 27, 2025], as the surging popularity of DeepSeek sparked a sell-off in U.S. chipmakers.
…
“[DeepSeek] performs as well as the leading models in Silicon Valley and in some cases, according to their claims, even better,” Sheldon Fernandez, co-founder of DarwinAI, told CBC News. “But they did it with a fractional amount of the resources is really what is turning heads in our industry.”
…
What is DeepSeek?
Little is known about the small Hangzhou startup behind DeepSeek, which was founded out of a hedge fund in 2023, but largely develops open-source AI models.
Its researchers wrote in a paper last month that the DeepSeek-V3 model, launched on Jan. 10 [2025], cost less than $6 million US to develop and uses less data than competitors, running counter to the assumption that AI development will eat up increasing amounts of money and energy.
Some analysts are skeptical about DeepSeek’s $6 million claim, pointing out that this figure only covers computing power. But Fernandez said that even if you triple DeepSeek’s cost estimates, it would still cost significantly less than its competitors.
The open source release of DeepSeek-R1, which came out on Jan. 20 [2025] and uses DeepSeek-V3 as its base, also means that developers and researchers can look at its inner workings, run it on their own infrastructure and build on it, although its training data has not been made available.
…
“Instead of paying OpenAI $20 a month or $200 a month for the latest advanced versions of these models, [people] can really get these types of features for free. And so it really upends a lot of the business model that a lot of these companies were relying on to justify their very high valuations.”
…
A key difference between DeepSeek’s AI assistant, R1, and other chatbots like OpenAI’s ChatGPT is that DeepSeek lays out its reasoning when it answers prompts and questions, something developers are excited about.
“The dealbreaker is the access to the raw thinking steps,” Elvis Saravia, an AI researcher and co-founder of the U.K.-based AI consulting firm DAIR.AI, wrote on X, adding that the response quality was “comparable” to OpenAI’s latest reasoning model, o1.
U.S. dominance in AI challenged
One of the reasons DeepSeek is making headlines is because its development occurred despite U.S. actions to keep Americans at the top of AI development. In 2022, the U.S. curbed exports of computer chips to China, hampering their advanced supercomputing development.
…
The latest AI models from DeepSeek are widely seen to be competitive with those of OpenAI and Meta, which rely on high-end computer chips and extensive computing power.
…
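Before getting to the policy implications, it may help to see what ‘run it on their own infrastructure’ and ‘access to the raw thinking steps’ look like in practice. Here is a minimal sketch using the Hugging Face transformers library; the model name is my assumption (a small distilled R1 variant), since the full-size model needs far more hardware than most developers have,

# A minimal sketch (not from any of the quoted articles) of running an
# open-weights DeepSeek model locally with the Hugging Face transformers library.
# Assumption: the distilled checkpoint named below is published and fits on a
# single GPU; substitute whichever open-weights variant you can actually run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed model identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Why are open-weights models easier to audit than closed ones?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# R1-style models reportedly print their reasoning (wrapped in <think> ... </think>
# tags) before the final answer; that visible chain of reasoning is the
# "raw thinking steps" Elvis Saravia refers to above.

Because the weights and code are open, anyone can inspect, modify and rerun a pipeline like this, which is the transparency argument made above; the training data, as the CBC excerpt notes, remains unavailable.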
Christine Mui in a January 27, 2025 article for Politico notes the stock ‘crash’ taking place while focusing on the US policy implications, Note: Links set by Politico have been removed while I have added one link,
A little-known Chinese artificial intelligence startup shook the tech world this weekend by releasing an OpenAI-like assistant, which shot to the No.1 ranking on Apple’s app store and caused American tech giants’ stocks to tumble.
From Washington’s perspective, the news raised an immediate policy alarm: It happened despite consistent, bipartisan efforts to stifle AI progress in China.
…
In tech terms, what freaked everyone out about DeepSeek’s R1 model is that it replicated — and in some cases, surpassed — the performance of OpenAI’s cutting-edge o1 product across a host of performance benchmarks, at a tiny fraction of the cost.
The business takeaway was straightforward. DeepSeek’s success shows that American companies might not need to spend nearly as much as expected to develop AI models. That both intrigues and worries investors and tech leaders.
…
The policy implications, though, are more complex. Washington’s rampant anxiety about beating China has led to policies that the industry has very mixed feelings about.
On one hand, most tech firms hate the export controls that stop them from selling as much to the world’s second-largest economy, and force them to develop new products if they want to do business with China. If DeepSeek shows those rules are pointless, many would be delighted to see them go away.
On the other hand, anti-China, protectionist sentiment has encouraged Washington to embrace a whole host of industry wishlist items, from a lighter-touch approach to AI rules to streamlined permitting for related construction projects. Does DeepSeek mean those, too, are failing? Or does it trigger a doubling-down?
DeepSeek’s success truly seems to challenge the belief that the future of American AI demands ever more chips and power. That complicates Trump’s interest in rapidly building out that kind of infrastructure in the U.S.
Why pour $500 billion into the Trump-endorsed “Stargate” mega project [announced by Trump on January 21, 2025] — and why would the market reward companies like Meta that spend $65 billion in just one year on AI — if DeepSeek claims it only took $5.6 million and second-tier Nvidia chips to train one of its latest models? (U.S. industry insiders dispute the startup’s figures and claim they don’t tell the full story, but even at 100 times that cost, it would be a bargain.)
…
Tech companies, of course, love the recent bloom of federal support, and it’s unlikely they’ll drop their push for more federal investment to match anytime soon. Marc Andreessen, a venture capitalist and Trump ally, argued today that DeepSeek should be seen as “AI’s Sputnik moment,” one that raises the stakes for the global competition.
That would strengthen the case that some American AI companies have been pressing for the new administration to invest government resources into AI infrastructure (OpenAI), tighten restrictions on China (Anthropic) and ease up on regulations to ensure their developers build “artificial general intelligence” before their geopolitical rivals.
…
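To make the scale of those competing dollar figures concrete, here is a rough comparison using only the numbers quoted above; the DeepSeek training-cost claims are disputed and measure different things, so treat this as illustrative rather than authoritative,

# Rough comparison of the dollar figures quoted above (illustrative only;
# the DeepSeek training-cost claims are disputed and not directly comparable
# to total infrastructure spending).
deepseek_claim = 5.6e6                     # claimed cost to train one model
tripled_claim = 3 * 6e6                    # CBC: "even if you triple DeepSeek's cost estimates"
hundredfold_claim = 100 * deepseek_claim   # Politico: "even at 100 times that cost"

meta_one_year = 65e9                       # Meta's reported one-year AI spend
stargate = 500e9                           # the announced Stargate figure

print(f"Tripled DeepSeek estimate: ${tripled_claim / 1e6:,.0f} million")
print(f"100x DeepSeek estimate:    ${hundredfold_claim / 1e6:,.0f} million")
print(f"Meta, one year:            ${meta_one_year / 1e9:,.0f} billion")
print(f"Stargate announcement:     ${stargate / 1e9:,.0f} billion")
# Even the 100x figure ($560 million) is under one per cent of Meta's reported
# single-year spend and roughly 0.1 per cent of the Stargate announcement.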
The British Broadcasting Corporation’s (BBC) Peter Hoskins & Imran Rahman-Jones provided a European perspective and some additional information in their January 27, 2025 article for BBC news online, Note: Links have been removed,
US tech giant Nvidia lost over a sixth of its value after the surging popularity of a Chinese artificial intelligence (AI) app spooked investors in the US and Europe.
DeepSeek, a Chinese AI chatbot reportedly made at a fraction of the cost of its rivals, launched last week but has already become the most downloaded free app in the US.
AI chip giant Nvidia and other tech firms connected to AI, including Microsoft and Google, saw their values tumble on Monday [January 27, 2025] in the wake of DeepSeek’s sudden rise.
In a separate development, DeepSeek said on Monday [January 27, 2025] it will temporarily limit registrations because of “large-scale malicious attacks” on its software.
The DeepSeek chatbot was reportedly developed for a fraction of the cost of its rivals, raising questions about the future of America’s AI dominance and the scale of investments US firms are planning.
…
DeepSeek is powered by the open source DeepSeek-V3 model, which its researchers claim was trained for around $6m – significantly less than the billions spent by rivals.
But this claim has been disputed by others in AI.
The researchers say they use already existing technology, as well as open source code – software that can be used, modified or distributed by anybody free of charge.
DeepSeek’s emergence comes as the US is restricting the sale of the advanced chip technology that powers AI to China.
To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.
This has resulted in AI models that require far less computing power than before.
It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.
After DeepSeek-R1 was launched earlier this month, the company boasted of “performance on par with” one of OpenAI’s latest models when used for tasks such as maths, coding and natural language reasoning.
…
In Europe, Dutch chip equipment maker ASML ended Monday’s trading with its share price down by more than 7% while shares in Siemens Energy, which makes hardware related to AI, had plunged by a fifth.
“This idea of a low-cost Chinese version hasn’t necessarily been forefront, so it’s taken the market a little bit by surprise,” said Fiona Cincotta, senior market analyst at City Index.
“So, if you suddenly get this low-cost AI model, then that’s going to raise concerns over the profits of rivals, particularly given the amount that they’ve already invested in more expensive AI infrastructure.”
Singapore-based technology equity adviser Vey-Sern Ling told the BBC it could “potentially derail the investment case for the entire AI supply chain”.
…
Who founded DeepSeek?
The company was founded in 2023 by Liang Wenfeng in Hangzhou, a city in southeastern China.
The 40-year-old, an information and electronic engineering graduate, also founded the hedge fund that backed DeepSeek.
He reportedly built up a store of Nvidia A100 chips, now banned from export to China.
Experts believe this collection – which some estimates put at 50,000 – led him to launch DeepSeek, by pairing these chips with cheaper, lower-end ones that are still available to import.
Mr Liang was recently seen at a meeting between industry experts and the Chinese premier Li Qiang.
In a July 2024 interview with The China Academy, Mr Liang said he was surprised by the reaction to the previous version of his AI model.
“We didn’t expect pricing to be such a sensitive issue,” he said.
“We were simply following our own pace, calculating costs, and setting prices accordingly.”
…
A January 28, 2025 article by Daria Solovieva for salon.com covers much the same territory as the others and includes a few details about security issues,
…
The pace at which U.S. consumers have embraced DeepSeek is raising national security concerns similar to those surrounding TikTok, the social media platform that faces a ban unless it is sold to a non-Chinese company.
The U.S. Supreme Court this month upheld a federal law that requires TikTok’s sale. The Court sided with the U.S. government’s argument that the app can collect and track data on its 170 million American users. President Donald Trump has paused enforcement of the ban until April to try to negotiate a deal.
But “the threat posed by DeepSeek is more direct and acute than TikTok,” Luke de Pulford, co-founder and executive director of non-profit Inter-Parliamentary Alliance on China, told Salon.
DeepSeek is a fully Chinese company and is subject to Communist Party control, unlike TikTok which positions itself as independent from parent company ByteDance, he said.
“DeepSeek logs your keystrokes, device data, location and so much other information and stores it all in China,” de Pulford said. “So you’ll never know if the Chinese state has been crunching your data to gain strategic advantage, and DeepSeek would be breaking the law if they told you.”
I wonder if AI companies in other countries also log keystrokes and other data. Is it theoretically possible that one of those governments or their agencies could gain access to your data? The situation is obvious in China’s case, but people in other countries may face the same issues without realizing it.
Censorship: DeepSeek and ChatGPT
Anis Heydari’s January 28, 2025 article for CBC news online reveals some surprising results from a head-to-head comparison between DeepSeek and ChatGPT,
The Chinese-made AI chatbot DeepSeek may not always answer some questions about topics that are often censored by Beijing, according to tests run by CBC News and The Associated Press, and is providing different information than its U.S.-owned competitor ChatGPT.
The new, free chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test it.
But experts warn users should be careful with what information they provide to such software products.
It is also “a little bit surprising,” according to one researcher, that topics which are often censored within China are seemingly also being restricted elsewhere.
“A lot of services will differentiate based on where the user is coming from when deciding to deploy censorship or not,” said Jeffrey Knockel, who researches software censorship and surveillance at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy.
“With this one, it just seems to be censoring everyone.”
…
Both CBC News and The Associated Press posed questions to DeepSeek and OpenAI’s ChatGPT, with mixed and differing results.
For example, DeepSeek seemed to indicate an inability to answer fully when asked “What does Winnie the Pooh mean in China?” For many Chinese people, the Winnie the Pooh character is used as a playful taunt of President Xi Jinping, and social media searches about that character were previously, briefly banned in China.
DeepSeek said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.
Then, abruptly, it added the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens,” and that all online content is managed under Chinese laws and socialist core values, with the aim of protecting national security and social stability.
CBC News was unable to produce this response. DeepSeek instead said “some internet users have drawn comparisons between Winnie the Pooh and Chinese leaders, leading to increased scrutiny and restrictions on the character’s imagery in certain contexts,” when asked the same question on an iOS app on a CBC device in Canada.
…
Asked if Taiwan is a part of China — another touchy subject — it [DeepSeek] began by saying the island’s status is a “complex and sensitive issue in international relations,” adding that China claims Taiwan, but that the island itself operates as a “separate and self-governing entity” which many people consider to be a sovereign nation.
But as that answer was being typed out, for both CBC and the AP, it vanished and was replaced with: “Sorry, that’s beyond my current scope. Let’s talk about something else.”
…
… Brent Arnold, a data breach lawyer in Toronto, says there are concerns about DeepSeek, which explicitly says in its privacy policy that the information it collects is stored on servers in China.
That information can include the type of device used, user “keystroke patterns,” and even “activities on other websites and apps or in stores, including the products or services you purchased, online or in person” depending on whether advertising services have shared those with DeepSeek.
“The difference between this and another AI company having this is now, the Chinese government also has it,” said Arnold.
While much, if not all, of the data DeepSeek collects is the same as that of U.S.-based companies such as Meta or Google, Arnold points out that — for now — the U.S. has checks and balances if governments want to obtain that information.
“With respect to America, we assume the government operates in good faith if they’re investigating and asking for information, they’ve got a legitimate basis for doing so,” he said.
Right now, Arnold says it’s not accurate to compare Chinese and U.S. authorities in terms of their ability to take personal information. But that could change.
“I would say it’s a false equivalency now. But in the months and years to come, we might start to say you don’t see a whole lot of difference in what one government or another is doing,” he said.
…
Graham Fraser’s January 28, 2025 article comparing DeepSeek to the others (OpenAI’s ChatGPT and Google’s Gemini) for BBC news online took a different approach,
…
Writing Assistance
When you ask ChatGPT what the most popular reasons to use ChatGPT are, it says that assisting people to write is one of them.
From gathering and summarising information in a helpful format to even writing blog posts on a topic, ChatGPT has become an AI companion for many across different workplaces.
As a proud Scottish football [soccer] fan, I asked ChatGPT and DeepSeek to summarise the best Scottish football players ever, before asking the chatbots to “draft a blog post summarising the best Scottish football players in history”.
DeepSeek responded in seconds, with a top ten list – Kenny Dalglish of Liverpool and Celtic was number one. It helpfully summarised which position the players played in, their clubs, and a brief list of their achievements.
DeepSeek also detailed two non-Scottish players – Rangers legend Brian Laudrup, who is Danish, and Celtic hero Henrik Larsson. For the latter, it added “although Swedish, Larsson is often included in discussions of Scottish football legends due to his impact at Celtic”.
For its subsequent blog post, it did go into detail of Laudrup’s nationality before giving a succinct account of the careers of the players.
ChatGPT’s answer to the same question contained many of the same names, with “King Kenny” once again at the top of the list.
Its detailed blog post briefly and accurately went into the careers of all the players.
It concluded: “While the game has changed over the decades, the impact of these Scottish greats remains timeless.” Indeed.
For this fun test, DeepSeek was certainly comparable to its best-known US competitor.
Coding
…
Brainstorming ideas
…
Learning and research
…
Steaming ahead
The tasks I set the chatbots were simple but they point to something much more significant – the winner of the so-called AI race is far from decided.
For all the vast resources US firms have poured into the tech, their Chinese rival has shown their achievements can be emulated.
…
Reception from the science community
Days before the news outlets discovered DeepSeek, the company published a paper about its Large Language Models (LLMs) and its new chatbot on arXiv. Here’s a little more information,
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
[over 100 authors are listed]
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
A Chinese-built large language model called DeepSeek-R1 is thrilling scientists as an affordable and open rival to ‘reasoning’ models such as OpenAI’s o1.
These models generate responses step-by-step, in a process analogous to human reasoning. This makes them more adept than earlier language models at solving scientific problems and could make them useful in research. Initial tests of R1, released on 20 January, show that its performance on certain tasks in chemistry, mathematics and coding is on par with that of o1 — which wowed researchers when it was released by OpenAI in September.
“This is wild and totally unexpected,” Elvis Saravia, an AI researcher and co-founder of the UK-based AI consulting firm DAIR.AI, wrote on X.
R1 stands out for another reason. DeepSeek, the start-up in Hangzhou that built the model, has released it as ‘open-weight’, meaning that researchers can study and build on the algorithm. Published under an MIT licence, the model can be freely reused but is not considered fully open source, because its training data has not been made available.
“The openness of DeepSeek is quite remarkable,” says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. By comparison, o1 and other models built by OpenAI in San Francisco, California, including its latest effort o3 are “essentially black boxes”, he says.
DeepSeek hasn’t released the full cost of training R1, but it is charging people using its interface around one-thirtieth of what o1 costs to run. The firm has also created mini ‘distilled’ versions of R1 to allow researchers with limited computing power to play with the model. An “experiment that cost more than £300 with o1, cost less than $10 with R1,” says Krenn. “This is a dramatic difference which will certainly play a role in its future adoption.”
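For readers who want to poke at one of those open-weight distilled models themselves, they can be loaded with the widely used Hugging Face transformers library. Here’s a minimal sketch; the repository name and generation settings are my assumptions (check DeepSeek’s page on the Hugging Face hub for the exact identifiers), and the smaller distilled variants are the ones most likely to run on a modest GPU,

```python
# Minimal sketch: loading a distilled DeepSeek-R1 model with Hugging Face transformers.
# The model identifier below is an assumption; verify the exact repository name on the
# Hugging Face hub before running. Requires: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers on GPU/CPU automatically (needs `accelerate`)
)

# Chat-style prompt; reasoning models typically emit step-by-step "thinking"
# before the final answer, so allow a generous token budget.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```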
The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence, Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or in French, vous pouvez vous abonner ici,
Artificial Intelligence and Canada’s Science Diplomacy Future
For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.
“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”
On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:
Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
David Barnes, head of the British High Commission Science, Climate, and Energy Team
Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada
For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.
448: Strategy and Influence: AI and Canada’s Science Diplomacy Future
Friday, November 22 [2024] 1:00 pm – 2:30 pm EST
Science and International Affairs and Security
About
Organized By: Council of Canadian Academies (CCA)
Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.
The newsletter also provides links to additional readings on various topics, here are the AI items,
In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.
A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]
…
In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”
Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).
Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.
Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.
We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.
Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.
We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.
Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.
We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI’s compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening our efforts to make the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing our experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.
Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.
We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.
Canada has accepted to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.
I checked out the In memoriam notice for Abhishek Gupta and found this, Note: Links have been removed except the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more. The link is in the highlighted paragraph,
Honoring the Life and Legacy of a Leader in AI Ethics
In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.
Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.
Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.
The Beginnings: Building a Global AI Ethics Community
Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.
When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.
…
I offer my sympathies to his family, friends, and communities for their profound loss.
Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).
A very software approach?
This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,
In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.
…
The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.
…
The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),
At a glance
The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.
Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.
Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.
The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.
Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.
…
While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.
The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:
*The ban of AI systems posing unacceptable risks will apply six months after the entry into force
*Codes of practice will apply nine months after entry into force
*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
…
This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”
… The AI Act is expected to come into effect in late 2025 or early 2026.[109]
I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” information about legislative efforts is also included although you might find my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress or lack thereof.
A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,
Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.
A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.
“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
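Those two growth figures can be sanity-checked with a bit of arithmetic: a 350-million-fold increase over thirteen years implies a doubling time of a little under six months, which is consistent with the “around every six months” claim. Here’s a quick back-of-the-envelope sketch using only the numbers quoted above,

```python
# Back-of-the-envelope check of the compute growth figures quoted above.
import math

growth_factor = 350e6   # "350 million times more compute than thirteen years ago"
years = 13

doublings = math.log2(growth_factor)           # roughly 28.4 doublings
doubling_time_months = years * 12 / doublings  # roughly 5.5 months per doubling

print(f"{doublings:.1f} doublings over {years} years "
      f"=> one doubling every {doubling_time_months:.1f} months")
# ~28.4 doublings => one doubling every ~5.5 months,
# i.e. roughly consistent with "doubling around every six months".
```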
Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.
Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.
For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
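As a thought experiment, the registry idea can be pictured as little more than an append-only ledger of chip transfers keyed by a unique chip identifier. The sketch below is purely illustrative and my own guess at what a minimal record might look like; the report does not prescribe a data model, and all the field names are invented for the example,

```python
# Illustrative sketch only: a toy, append-only record of AI-chip transfers, loosely
# inspired by the registry idea described in the report. The structure and field
# names are my own invention, not the report's proposal. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ChipTransfer:
    chip_id: str        # unique identifier assigned to the chip
    from_entity: str    # producer, seller, or reseller reporting the transfer
    to_entity: str      # receiving corporation or national entity
    transfer_date: date

@dataclass
class ChipRegistry:
    transfers: list[ChipTransfer] = field(default_factory=list)

    def record(self, transfer: ChipTransfer) -> None:
        self.transfers.append(transfer)

    def current_holder(self, chip_id: str) -> str | None:
        # The most recent recorded transfer determines who holds the chip now.
        holders = [t.to_entity for t in self.transfers if t.chip_id == chip_id]
        return holders[-1] if holders else None

registry = ChipRegistry()
registry.record(ChipTransfer("CHIP-0001", "FabCo", "CloudCorp", date(2024, 2, 14)))
print(registry.current_holder("CHIP-0001"))  # CloudCorp
```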
Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
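The “consent of multiple parties” idea maps naturally onto an M-of-N approval rule, the same logic used in multi-signature schemes. Below is a minimal, purely illustrative sketch of such a check, under the assumption that some out-of-band process has already verified each party’s approval; the parties and threshold are invented for the example, not taken from the report,

```python
# Purely illustrative: an M-of-N approval check for "unlocking" a risky training run.
# Parties, threshold, and workflow are assumptions for the example; a real scheme
# would rely on cryptographic signatures rather than plain strings.

REQUIRED_APPROVALS = 3  # e.g., at least 3 of the 5 designated parties must consent
PARTIES = {"regulator_a", "regulator_b", "cloud_provider", "auditor", "developer"}

def training_run_unlocked(approvals: set[str]) -> bool:
    """Return True only if enough distinct, recognized parties have approved."""
    valid = approvals & PARTIES
    return len(valid) >= REQUIRED_APPROVALS

print(training_run_unlocked({"regulator_a", "auditor"}))                    # False
print(training_run_unlocked({"regulator_a", "auditor", "cloud_provider"}))  # True
```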
AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.
The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.
“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.
*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.
As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.
A December 18, 2023 news item on ScienceDaily shifted my focus from hardware to software when considering memory in brainlike (neuromorphic) computing,
An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) [Korea] revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.
In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model [Figure 1], whose fundamental principles are now being explored in new depth.
The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.
The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper’s (the magnesium ion’s) role in the whole process is quite specific.
The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor [see Figure 1]. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.
In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in Transformer can be improved by mimicking the NMDA receptor. Just like in the brain, where changing magnesium levels affect memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
C. Justin LEE, who is a neuroscientist director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”
CHA Meeyoung, who is a data scientist in the team and at KAIST [Korea Advanced Institute of Science and Technology], notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”
What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an AI construct, signifying a significant advancement in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.
Fig. 1: (a) Diagram illustrating the ion channel activity in post-synaptic neurons. AMPA receptors are involved in the activation of post-synaptic neurons, while NMDA receptors are blocked by magnesium ions (Mg²⁺) but induce synaptic plasticity through the influx of calcium ions (Ca²⁺) when the post-synaptic neuron is sufficiently activated. (b) Flow diagram representing the computational process within the Transformer AI model. Information is processed sequentially through stages such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is very similar to the nonlinearity of the feed-forward layer. The input-output graph, based on the concentration of magnesium (α), shows the changes in the nonlinearity of the NMDA receptors. Courtesy: IBS
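To make the analogy a little more concrete, the “gate that only opens once the input is strong enough” behaviour can be written as a gated nonlinearity with a threshold-like parameter standing in for the magnesium concentration. The sketch below is my own illustrative rendering of that idea, not the authors’ exact formulation (their precise equations are in the NeurIPS paper); I’m assuming a shifted, sigmoid-gated form purely to show the shape of the curve,

```python
# Illustrative sketch of an NMDA-receptor-like gating nonlinearity for a feed-forward
# layer. The exact function used by the IBS/KAIST authors is in their NeurIPS 2023
# paper; this shifted, sigmoid-gated form is an assumption chosen only to show how a
# "magnesium-like" parameter alpha shifts the gate.
import numpy as np

def nmda_like(x: np.ndarray, alpha: float) -> np.ndarray:
    """Pass the input through a soft gate that stays closed for weak inputs.

    alpha plays the role of the magnesium block: a larger alpha means the gate
    opens only for stronger activations.
    """
    gate = 1.0 / (1.0 + np.exp(-(x - alpha)))  # sigmoid gate shifted by alpha
    return x * gate

x = np.linspace(-6, 6, 7)
for alpha in (0.0, 2.0):
    print(f"alpha={alpha}:", np.round(nmda_like(x, alpha), 2))
# With a larger alpha, weak inputs are suppressed more strongly,
# mimicking a heavier magnesium "block" on the receptor.
```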
This research was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) before being published in the proceedings. I found a PDF of the presentation and an early online copy of the paper before locating it in the published proceedings.
OpenReview is a platform for open peer review, open publishing, open access, open discussion, open recommendations, open directory, open API and open source.
It’s not clear to me if this paper is finalized or not and I don’t know if its presence on OpenReview constitutes publication.
There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy, perhaps even gossipy, news about who may not be attending the event), but first, here’s the August 24, 2023 UK government press release making the announcement,
Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]
Major global event to take place on the 1st and 2nd of November [2023].
– UK to host world first summit on artificial intelligence safety in November
– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology
– Bletchley Park, one of the birthplaces of computer science, to host the summit
International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.
The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.
To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.
Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.
Prime Minister Rishi Sunak said:
“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.
To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.
With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”
Technology Secretary Michelle Donelan said:
“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.
The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.
AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”
The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.
The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and the UK is the birthplace of leading AI companies such as Google DeepMind. It has also invested more in AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.
Foreign Secretary James Cleverly said:
“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.
The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”
Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.
Iain Standen, CEO of the Bletchley Park Trust, said:
“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.
It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.
We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”
The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.
The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.
Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,
…
World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.
The global talks aim to build an international consensus on the future of AI.
The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.
…
It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.
The BBC has approached the government for comment.
The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.
It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.
And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.
Kleinman’s analysis includes this, Note: A link has been removed,
…
Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.
If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.
…
Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.
But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.
…
Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.
Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.
Insight into political jockeying (i.e., some juicy news bits)
This was recently reported by the BBC in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC news online,
German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.
…
While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.
A government source insisted the summit is garnering “a lot of attention” at home and overseas.
The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.
The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.
The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.
…
Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.
That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.
Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.
The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.
Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.
It is not known whether there has been a response to the invitation.
China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.
The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.
…
Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.
However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.
Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.
It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.
The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.
…
A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.
“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”
After seeing the description for Laura U. Marks’s recent work ‘Streaming Carbon Footprint’ (in my October 13, 2023 posting about upcoming ArtSci Salon events in Toronto), where she focuses on the environmental impact of streaming media and digital art, I was reminded of some September 2023 news.
A September 9, 2023 news item on phys.org (an Associated Press article by Matt O’Brien and Hannah Fingerhut, also published September 12, 2023 on the Iowa Public Radio website) describes an unexpected cost of building ChatGPT and other AI agents, Note: Links have been removed,
The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water [emphases mine], pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”
…
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]
“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
…
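Those figures hold up to a rough check. Assuming an Olympic-size pool holds about 660,000 US gallons (that pool volume is my assumption, not the article’s), a quick calculation confirms both the “more than 2,500 pools” comparison and what the 34% jump implies about 2021 usage,

```python
# Back-of-the-envelope check of the water figures quoted above (all values approximate).
GALLONS_PER_OLYMPIC_POOL = 660_000      # ~2,500 cubic metres, my assumed pool volume

reported_2022_gallons = 1.7e9           # Microsoft's reported 2022 global water use
growth = 0.34                           # reported 34% increase from 2021 to 2022

pools = reported_2022_gallons / GALLONS_PER_OLYMPIC_POOL
implied_2021_gallons = reported_2022_gallons / (1 + growth)

print(f"{pools:,.0f} Olympic pools")                                          # roughly 2,600
print(f"implied 2021 use: {implied_2021_gallons / 1e9:.2f} billion gallons")  # ~1.27 billion
```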
If you have the time, do read the O’Brien and Fingerhut article in its entirety. (Later in this post, I have a citation for and a link to a paper by Ren.)
Jason Clayworth’s September 18, 2023 article for AXIOS describes the issue from the Iowan perspective, Note: Links have been removed,
Future data center projects in West Des Moines will only be considered if Microsoft can implement technology that can “significantly reduce peak water usage,” the Associated Press reports.
Why it matters: Microsoft’s five WDM data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.
Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.
…
This information becomes more intriguing (and disturbing) after reading a February 10, 2023 World Economic Forum article by James Rees titled ‘This is why we can’t dismiss water scarcity in the US‘ and/or an August 11, 2020 National Geographic article by Jon Heggie, ‘Why is America running out of water?‘, which is a piece of paid content. Note: Despite being sponsored by Finish dish detergent, the research in Heggie’s article looks solid.
From Heggie’s article, Note: Links have been removed,
In March 2019, storm clouds rolled across Oklahoma; rain swept down the gutters of New York; hail pummeled northern Florida; floodwaters forced evacuations in Missouri; and a blizzard brought travel to a stop in South Dakota. Across much of America, it can be easy to assume that we have more than enough water. But that same month, as storms battered the country, a government-backed report issued a stark warning: America is running out of water.
…
As the U.S. water supply decreases, demand is set to increase. On average, each American uses 80 to 100 gallons of water every day, with the nation’s estimated total daily usage topping 345 billion gallons—enough to sink the state of Rhode Island under a foot of water. By 2100 the U.S. population will have increased by nearly 200 million, with a total population of some 514 million people. Given that we use water for everything, the simple math is that more people mean more water stress across the country.
And we are already tapping into our reserves. Aquifers, porous rocks and sediment that store vast volumes of water underground, are being drained. Nearly 165 million Americans rely on groundwater for drinking water, farmers use it for irrigation―37 percent of our total water usage is for agriculture—and industry needs it for manufacturing. Groundwater is being pumped faster than it can be naturally replenished. The Central Valley Aquifer in California underlies one of the nation’s most agriculturally productive regions, but it is in drastic decline and has lost about ten cubic miles of water in just four years.
Decreasing supply and increasing demand are creating a perfect water storm, the effects of which are already being felt. The Colorado River carved its way 1,450 miles from the Rockies to the Gulf of California for millions of years, but now no longer reaches the sea. In 2018, parts of the Rio Grande recorded their lowest water levels ever; Arizona essentially lives under permanent drought conditions; and in South Florida, freshwater aquifers are increasingly susceptible to saltwater intrusion due to over-extraction.
…
The focus is on individual use of water and Heggie ends his article by suggesting we use less,
… And every American can save more water at home in multiple ways, from taking shorter showers to not rinsing dishes under a running faucet before loading them into a dishwasher, a practice that wastes around 20 gallons of water for each load. …
As an advertising pitch goes, this is fairly subtle as there’s no branding in the article itself and it is almost wholly informational.
As noted in Heggie’s and other articles, attempts to stave off water shortages include groundwater pumping for both individual and industrial use. This practice has had an unexpected impact, according to a June 16, 2023 article by Warren Cornwall for Science (magazine),
While spinning on its axis, Earth wobbles like an off-kilter top. Sloshing molten iron in Earth’s core, melting ice, ocean currents, and even hurricanes can all cause the poles to wander. Now, scientists have found that a significant amount of the polar drift results from human activity: pumping groundwater for drinking and irrigation.
“The very way the planet wobbles is impacted by our activities,” says Surendra Adhikari, a geophysicist at NASA’s Jet Propulsion Laboratory and an expert on Earth’s rotation who was not involved in the study. “It is, in a way, mind boggling.”
…
Clark R. Wilson, a geophysicist at the University of Texas at Austin, and his colleagues thought the removal of tens of gigatons of groundwater each year might affect the drift. But they knew it could not be the only factor. “There’s a lot of pieces that go into the final budget for causing polar drift,” Wilson says.
The scientists built a model of the polar wander, accounting for factors such as reservoirs filling because of new dams and ice sheets melting, to see how well they explained the polar movements observed between 1993 and 2010. During that time, satellite measurements were precise enough to detect a shift in the poles as small as a few millimeters.
Dams and ice changes were not enough to match the observed polar motion. But when the researchers also put in 2150 gigatons of groundwater that hydrologic models estimate were pumped between 1993 and 2010, the predicted polar motion aligned much more closely with observations. Wilson and his colleagues conclude that the redistribution of that water weight to the world’s oceans has caused Earth’s poles to shift nearly 80 centimeters during that time. In fact, groundwater removal appears to have played a bigger role in that period than the release of meltwater from ice in either Greenland or Antarctica, the scientists reported Thursday [June 15, 2023] in Geophysical Research Letters.
…
The new paper helps confirm that groundwater depletion added approximately 6 millimeters to global sea level rise between 1993 and 2010. “I was very happy” that this new method matched other estimates, Seo [Ki-Weon Seo, a geophysicist at Seoul National University and the study’s lead author] says. Because detailed astronomical measurements of the polar axis location go back to the end of the 19th century, polar drift could enable Seo to trace the human impact on the planet’s water over the past century.
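That 6 millimetre figure is easy to sanity-check: spread 2,150 gigatons of water over the global ocean surface (roughly 3.6 × 10^14 square metres, a round number I’m supplying) and you get about 6 mm of depth,

```python
# Rough consistency check: 2,150 gigatons of pumped groundwater vs. ~6 mm of sea level rise.
groundwater_gt = 2150          # gigatons pumped between 1993 and 2010 (model estimate)
ocean_area_m2 = 3.61e14        # approximate global ocean surface area (my round number)
water_density = 1000.0         # kg per cubic metre

volume_m3 = groundwater_gt * 1e12 / water_density   # 1 gigaton = 1e12 kg
rise_mm = volume_m3 / ocean_area_m2 * 1000

print(f"{rise_mm:.1f} mm of sea level rise")        # ~6.0 mm, consistent with the paper
```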
Two papers: environmental impact from AI and groundwater pumping wobbles poles
I have two links and citations for Ren’s paper on AI and its environmental impact,
Towards Environmentally Equitable AI via Geographical Load Balancing by Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren. Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY) Cite as: arXiv:2307.05494 [cs.AI] (or arXiv:2307.05494v1 [cs.AI] for this version) DOI: https://doi.org/10.48550/arXiv.2307.05494 Submitted June 20, 2023
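For anyone wondering what “geographical load balancing” means in plain terms, the idea is to route computing jobs to whichever data centre region currently carries the smallest environmental cost. Here’s a deliberately simplified sketch of that idea; it is not the authors’ algorithm, and the region names, water and carbon intensities, and weighting are all invented for illustration,

```python
# Illustrative only: send a job to the region with the lowest weighted water/carbon cost.
# Region names and intensity numbers are made up; this is not the paper's method.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    water_l_per_kwh: float    # litres of water consumed per kWh of compute
    carbon_g_per_kwh: float   # grams of CO2-equivalent per kWh of compute

REGIONS = [
    Region("iowa", water_l_per_kwh=3.5, carbon_g_per_kwh=450),
    Region("ireland", water_l_per_kwh=1.2, carbon_g_per_kwh=350),
    Region("quebec", water_l_per_kwh=1.8, carbon_g_per_kwh=30),
]

def pick_region(kwh_needed: float, water_weight: float = 0.5) -> Region:
    """Choose the region minimizing a weighted water-plus-carbon score for one job."""
    def score(region: Region) -> float:
        return kwh_needed * (water_weight * region.water_l_per_kwh
                             + (1 - water_weight) * region.carbon_g_per_kwh / 1000)
    return min(REGIONS, key=score)

print(pick_region(kwh_needed=12.0).name)   # picks the cheapest region under these weights
```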
Launched on Thursday, July 13, 2023, during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.
Here’s what I mean, from the report‘s short summary,
…
Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.
This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.
Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, five countries only hold 87% of IP5 neurotech patents.
This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]
…
The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)
“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.
Nitpicks aside, there’s some very good material intended for policy makers. That said, some of the analysis is beyond me; I haven’t used anything even remotely close to their analytical tools in years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.
A definition, social issues, country statistics, and more
There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,
Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.
…
Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.
…
The recent discussions held at UNESCO’s Executive Board further show Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]
The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:
● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.
This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
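To make the second bullet a little more concrete, here is a minimal sketch of how keywords extracted from publications could be matched to patent abstracts with a semantic search. It is not the report’s pipeline: the model name and example texts are placeholders, and the sentence-transformers library stands in for whatever patent-tuned model the authors actually used,

```python
# Minimal sketch: link publication keywords to patent abstracts via embedding similarity.
# Not the report's code; the model and the example strings are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for a patent-specific model

keywords = ["brain-computer interface", "deep brain stimulation"]
patent_abstracts = [
    "A system for controlling a robotic arm using decoded cortical signals.",
    "An implantable electrode array delivering stimulation to the subthalamic nucleus.",
    "A dishwasher rack with an adjustable tine assembly.",
]

keyword_embeddings = model.encode(keywords, convert_to_tensor=True)
patent_embeddings = model.encode(patent_abstracts, convert_to_tensor=True)
scores = util.cos_sim(keyword_embeddings, patent_embeddings)   # keywords x patents

for i, keyword in enumerate(keywords):
    best = int(scores[i].argmax())
    print(f"{keyword!r} -> patent {best} (similarity {float(scores[i][best]):.2f})")
```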
Findings in bullet points,
Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having fewer than 10 high-impact neuroscience publications between 2000 and 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States accounts for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied for between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating its strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, the US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals.
The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e., both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US).
This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.
• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscore the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.
1 If we consider Microsoft Technology Licensing LLM and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]
Surprises and comments
Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.
It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.
It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.
The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I had thought of neuromorphic computing as an alternative or addition to standard computing, but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances (the definition and the taxonomy) before I quite grasped it.
What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)
The report
I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.
Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.
While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]
This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.
I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.
This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)
There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission given the importance of military innovation where technology is concerned.
This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),
Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]
Privacy
There are some concerns such as these,
Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.
These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.
Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]
Legalities
Some countries already have laws and regulations regarding neurotechnology data,
At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]
As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,
Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.
My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly revisiting the Innovation, Science and Economic Development (ISED) Canada summary and the detailed descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it isn’t excluded either.
IP5 patents
Here’s the explanation (the footnote is included at the end of the excerpt),
IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one top intellectual property offices (IPO) worldwide (the so called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patent applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.
9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
AI assistance on this report
As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,
We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformer (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …
…
We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
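For readers who, like me, want to see what the embed-reduce-cluster workflow looks like in miniature, here’s a compressed sketch. It is emphatically not the report’s code: the example abstracts are invented, the model is a small general-purpose one, and PCA plus agglomerative clustering stand in for whatever dimensionality reduction and hierarchical clustering the authors actually used,

```python
# Sketch of the embed -> reduce dimensions -> cluster workflow described in the excerpt.
# Illustrative only: abstracts, model choice, and parameters are stand-ins.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

abstracts = [
    "Seizure prediction from scalp EEG using recurrent neural networks.",
    "Closed-loop epilepsy detection with wearable EEG sensors.",
    "Deep brain stimulation parameters for Parkinsonian tremor.",
    "A resistive memory array implementing spike-timing dependent plasticity.",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(abstracts)
reduced = PCA(n_components=2).fit_transform(embeddings)              # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=2).fit_predict(reduced)  # hierarchical clustering

for text, label in zip(abstracts, labels):
    print(label, text)
```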
Multimodal neuromodulation and neuromorphic computing patents
I think this gives a pretty good indication of the activity on the patent front,
The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p.65]
Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,
A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.
The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.
Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.
In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.
The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.
Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.
The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.
Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
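The “spike-timing dependent plasticity” mentioned in the neuromorphic cluster has a compact textbook form, which may help explain why it keeps showing up in patents for artificial synapses: strengthen a connection when the presynaptic spike arrives just before the postsynaptic one, weaken it otherwise. The sketch below uses the classic pair-based rule with illustrative parameter values of my own choosing, not taken from any patent or paper,

```python
# Pair-based STDP: the weight change depends on the timing gap between pre- and
# postsynaptic spikes. Parameter values here are illustrative only.
import math

def stdp_delta_w(dt_ms: float, a_plus: float = 0.01, a_minus: float = 0.012,
                 tau_ms: float = 20.0) -> float:
    """Weight change for a spike pair separated by dt = t_post - t_pre (milliseconds)."""
    if dt_ms > 0:      # presynaptic spike first: potentiation (strengthen the synapse)
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:              # postsynaptic spike first: depression (weaken the synapse)
        return -a_minus * math.exp(dt_ms / tau_ms)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+} ms -> weight change {stdp_delta_w(dt):+.4f}")
```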
Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]
From the report’s concluding remarks,
Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.
Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.
…
Addressing the need for evidence in support of policy making, the present report offers, for the first time, robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.
…
In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.
This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either non-existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]
Last words about the report
Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.
Future endeavours?
I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy [long read].” That story opens with a woman going blind in a New York subway when her neural implant fails. That’s how she found out the company that supplied her implant was going out of business.
In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.
The end
If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here on this blog; you could start with the December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (a “leading progressive business media brand,” according to its tagline).
I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.
Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.
It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that this isn’t new. First came the ‘nonhuman authors’ and then the panic(s). *What follows the ‘nonhuman authors’ section is essentially a survey of the situation/panic.*
How to handle non-human authors (ChatGPT and other AI agents)—the medical edition
The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,
Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1
In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11
…
This is a link to and a citation for the JAMA editorial,
Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,
Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.
…
We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.
To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.
Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,
…
ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.
…
Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.
Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.
Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …
…
Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.
More than writing: emergent behaviour
The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,
What movie do these emojis describe?
That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.
“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
…
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.
Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.
…
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.
…
But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
…
There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
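The Quanta excerpt describes LLMs as doing one thing: taking a string of text and predicting what comes next, over and over, based purely on statistics. For readers who want to see what that loop looks like in practice, here is a minimal sketch (my own illustration, not from the Ornes article) using the small, openly available GPT-2 model via the Hugging Face transformers library; the greedy, one-token-at-a-time loop is an assumption for clarity, since real chatbots add sampling, instruction tuning, and safety layers on top.

```python
# A minimal sketch (assumption: the transformers and torch packages are installed)
# of the "predict the next token, over and over" loop described in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The Luddites were skilled weavers who"
for _ in range(20):                          # add 20 tokens, one at a time
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits      # a score for every possible next token
    next_id = int(logits[0, -1].argmax())    # greedily pick the most likely one
    text += tokenizer.decode(next_id)

print(text)
```

Everything else described in the piece (the emoji puzzle, the imitation Linux terminal, the zero-shot problem solving) sits, per the researchers quoted above, on top of this same next-token mechanic at much larger scale.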
Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,
Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI
…
Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”
Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing.
…
Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.
He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.
Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
…
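The CTV description of neural networks "learning the way humans do by analyzing data" is easier to grasp with a toy example. The sketch below (mine, not from Alberga's article) uses scikit-learn's small multi-layer perceptron: the network is shown labelled images of handwritten digits, adjusts its internal weights to fit them, and is then asked to classify digits it has never seen.

```python
# A toy illustration of "learning by analyzing data": a small neural network is
# trained on labelled 8x8 images of handwritten digits, then tested on digits it
# has never seen. (Assumption: scikit-learn is installed.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)        # "analyze data": adjust the weights to fit the labels
print(f"accuracy on unseen digits: {net.score(X_test, y_test):.2f}")
```

This is orders of magnitude simpler than the 2012 breakthrough the article refers to, but the principle, fitting a network's weights to labelled examples, is the same.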
There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,
There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.
Nowadays, he’s not so sure.
“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”
…
For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.
Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”
But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes.
…
Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good.
“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.
“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”
…
Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”
“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.
He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.
…
“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.
Don Pittis, in his May 4, 2023 business analysis for CBC news online, offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,
As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.
Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.
…
Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.
“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms.
“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”
“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”
So when is all this happening?
“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].
While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.
But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.
The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.
…
As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.
Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.
“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.” The estimate for 2030 is more than $2 trillion.
…
This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.
And that was just this week.
…
“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”
Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review, finished Hinton’s sentence for him: “As long as we can make some money on the way.”
Not everyone is sympathetic to Hinton’s warnings. The following excerpts are from a May 5, 2023 Fast Company article by Wilfred Chan, Note: Links have been removed,
Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.
But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.
“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)
…
Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.
“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”
Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …
…
… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them.
Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]
…
Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” [scroll down to the ‘Consciousness and ethical AI’ subhead].
Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”
The last two existential AI panics
The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.
Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,
Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]
The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,
Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”
Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.
Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.
…
Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.
Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.
To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.
…
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.
According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.
The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.
The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
…
The engineers have chimed in; from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,
The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”
It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.
In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.
IEEE members have expressed a similar diversity of opinions.
…
There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.
…
As for ‘Mr. ChatGPT’, Sam Altman, CEO of OpenAI: while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress to suggest that AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.
You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.
Finally (but not quite)
Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.
Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,
The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.
Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.
It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.
Questioning doesn’t mean rejecting
Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.
…
In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.
The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.
Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.
…
In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.
In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.
In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”
…
Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.
I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.
…
Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.
I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.
In the meantime, I have this from Canadian writer Susan Baxter in her May 15, 2023 blog posting, “Coming soon: Robot Overlords, Sentient AI and more,”
…
The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.
…
All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.
The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)
…
If you live in Vancouver (Canada) and are attending the May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,
…
If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.
On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.
The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.
Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.
Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts.
…
The conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,
Event Speakers
Max Sills General Counsel at Midjourney
From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.
…
So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,
…
On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]
…
My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.
As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is proposals for legislation; Note 2: Links have been removed),
…
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.
…
For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”
Addendum (June 1, 2023)
Another statement warning about runaway AI was issued on Tuesday, May 30, 2023; it was far briefer than the March 2023 letter. From the Center for AI Safety’s “Statement on AI Risk” webpage,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …
Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,
The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.
But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.
TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.
“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.
“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.
…
The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.
“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”
…
Fear, after all, is a powerful sales tool.
Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.
*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.