Tag Archives: Donald Trump

Women’s History Month, Lost Women of Science, and The Extraordinary Life and Tragic Death of Evangelina Rodríguez Perozo

On the heels of a new executive order (March 1, 2025) by Donald Trump declaring English to be the official language of the US, a March 6, 2025 Lost Women of Science news release (received via email) announces a new bilingual podcast series being launched during Women’s History Month (March 2025),

March is Women’s History Month and, not coincidentally, on March 13 we will launch our new five-part season: The Extraordinary Life and Tragic Death of Evangelina Rodríguez Perozo. We’re also doing something new: the season will be available in both English and Spanish, and is narrated by “Orange is the New Black” star and Dominican Republic national, Laura Gómez. See Laura below with Evangelina’s statue at Universidad Autónoma de Santo Domingo.

Evangelina Rodríguez Perozo was the first female doctor in the Dominican Republic. Born into poverty and abandoned by her parents, Evangelina was raised by her grandmother. As a child she sold candy on the streets to make ends meet. Evangelina’s drive and compassion earned her supporters who helped her graduate from Universidad Autónoma de Santo Domingo. She even made it to Paris to pursue her medical education. Upon her return to the Dominican Republic, she set up a maternity clinic and free milk distribution for the poor; and she promoted sexual health education. But, in the end, like so many, she was persecuted by the Trujillo dictatorship.

Listen throughout March and into April to hear this moving story. Click the buttons below to listen to the trailer in English or Spanish.

Escuchar en Español

Listen in English

The March 6, 2025 news release goes on,

We’re the best!

In February [2025], FeedSpot, a podcast platform, ranked us as the best Women in Science podcast and second-best California Science podcast. We also got a boost on February 11th, International Day for Women and Girls in Science, when Sheryl Sandberg, former COO of Meta and of Lean In fame, shared our stories on her social media. It’s fantastic to see so many people joining us on this mission.

In case you missed it…

Lady Tan’s Circle of Women
Bestselling author Lisa See and Chinese medicine scholar Lorraine Wilcox joined us for Lost Women of Science Conversations in February [2025]. Together with our host Carol Sutton Lewis, they explored how Lisa came to write her latest novel, Lady Tan’s Circle of Women. Lorraine translated the real-life textbook that Lady Tan published in 1511, detailing her medical practices as a female doctor during the Ming Dynasty. Find out how fact meets fiction and how this almost forgotten woman of science is now immortalized in a story of filial piety, friendships, and of course, medical science.

Listen to Conversations: Lady Tan’s Circle of Women here

Marthe Gautier
Our two-part series, released in January [2025], delves into the life of Marthe Gautier, the French cytogeneticist and physician who played a pivotal role in identifying the chromosomal anomaly responsible for Down syndrome. Despite her groundbreaking work in the 1950s, Gautier’s contributions were overshadowed and appropriated for decades by a male colleague.

Our producers Lorena Galliot and Sophie McNulty traveled to Paris to visit the hospital where she worked, as well as meet her niece. We tell Marthe’s own story and set the historical record straight.

Listen to our Marthe Gautier episodes here

You can find the Lost Women of Science website here. Lost Women of Science was previously highlighted here in a December 2, 2021 posting.

Water, critical minerals, technology and US expansionist ambitions (Manifest Destiny)

I was taught in high school that the US was running out of resources while Canada still had much of its own. That was decades ago. As well, over the years, usually during a vote in Québec about separating, I’ve heard rumblings about the US absorbing part or all of Canada under something they call ‘Manifest Destiny,’ a doctrine which dates back to the 19th century.

Unlike the previous forays into Manifest Destiny, this one has not been precipitated by any discussion of separation.

Manifest Destiny

It took a while for that phrase to emerge this time but when it finally did the Canadian Broadcasting Corporation (CBC) online news published a January 19, 2025 article by Ainsley Hawthorn providing some context for the term, Note: Links have been removed,

U.S. president-elect Donald Trump says he’s prepared to use economic force to turn Canada into America’s 51st state, and it’s making Canadians — two-thirds of whom believe he’s sincere — anxious. 

But the last time Canada faced the threat of American annexation, it united us more than ever before, leading to the foundation of our country as we know it today.

In the 1860s, several prominent U.S. politicians advocated for annexing the colonies of British North America. 

“I look on Rupert’s Land [modern-day Manitoba and parts of Alberta, Saskatchewan, Nunavut, Ontario, and Quebec] and Canada, and see how an ingenious people and a capable, enlightened government are occupied with bridging rivers and making railroads and telegraphs,” Secretary of State William Henry Seward told a crowd in St. Paul, Minn. while campaigning on behalf of presidential candidate Abraham Lincoln.

“I am able to say, it is very well; you are building excellent states to be hereafter admitted into the American Union.”

Seward believed in Manifest Destiny, the doctrine that the United States would inevitably expand across the entire North American continent. While he seems to have preferred to acquire territory through negotiation rather than aggression, Canadians weren’t wholly assured of America’s peaceful intentions. 

In the late 1850s and early 1860s, Canadian parliament had been so deadlocked it had practically come to a standstill. Within just a few years, American pressure created a sense of unity so great it led to Confederation.

The current conversation around annexation is likewise uniting Canada’s leaders to a degree we’ve rarely seen in recent years. 

Representatives across the political spectrum are sharing a common message, the same message as British North Americans in the late nineteenth century: despite our problems, Canadians value Canada.

Critical minerals and water

Prime Minister Justin Trudeau had a few comments to make about US President Donald Trump’s motivation for ‘absorbing’ Canada as the 51st state, from a February 7, 2025 CBC news online article by Peter Zimonjic,

Prime Minister Justin Trudeau told business leaders at the Canada-U.S. Economic Summit in Toronto that U.S. President Donald Trump’s threat to annex Canada “is a real thing” motivated by his desire to tap into the country’s critical minerals.

“Mr. Trump has it in mind that the easiest way to do it is absorbing our country and it is a real thing,” Trudeau said, before a microphone cut out at the start of the closed-door meeting. 

The prime minister made the remarks to more than 100 business leaders after delivering an opening address to the summit Friday morning [February 7, 2025], outlining the key issues facing the country when it comes to Canada’s trading relationship with the U.S.

After the opening address, media were ushered out of the room when a microphone that was left on picked up what was only meant to be heard by attendees [emphasis mine].

Automotive Parts Manufacturers’ Association president Flavio Volpe was in the room when Trudeau made the comments. He said the prime minister went on to say that Trump is driven because the U.S. could benefit from Canada’s critical mineral resources.

There was more, from a February 7, 2025 article by Nick Taylor-Vaisey for Politico, Note: A link has been removed,

In remarks caught on tape by The Toronto Star, Trudeau suggested the president is keenly aware of Canada’s vast mineral resources. “I suggest that not only does the Trump administration know how many critical minerals we have but that may be even why they keep talking about absorbing us and making us the 51st state,” Trudeau said.

All of this reminded me of US President Joe Biden’s visit to Canada and his interest in critical minerals, which I mentioned briefly in my comments about the 2023 federal budget, from my April 17, 2023 posting (scroll down to the ‘Canadian economic theory (the staples theory), mining, nuclear energy, quantum science, and more’ subhead),

Critical minerals are getting a lot of attention these days. (They were featured in the 2022 budget, see my April 19, 2022 posting, scroll down to the Mining subhead.) This year, US President Joe Biden, in his first visit to Canada as President, singled out critical minerals at the end of his 28 hour state visit (from a March 24, 2023 CBC news online article by Alexander Panetta; Note: Links have been removed),

There was a pot of gold at the end of President Joe Biden’s jaunt to Canada. It’s going to Canada’s mining sector.

The U.S. military will deliver funds this spring to critical minerals projects in both the U.S. and Canada. The goal is to accelerate the development of a critical minerals industry on this continent.

The context is the United States’ intensifying rivalry with China.

The U.S. is desperate to reduce its reliance on its adversary for materials needed to power electric vehicles, electronics and many other products, and has set aside hundreds of millions of dollars under a program called the Defence Production Act.

The Pentagon already has told Canadian companies they would be eligible to apply. It has said the cash would arrive as grants, not loans.

On Friday [March 24, 2023], before Biden left Ottawa, he promised they’ll get some.

The White House and the Prime Minister’s Office announced that companies from both countries will be eligible this spring for money from a $250 million US fund.

Which Canadian companies? The leaders didn’t say. Canadian officials have provided the U.S. with a list of at least 70 projects that could warrant U.S. funding.

“Our nations are blessed with incredible natural resources,” Biden told Canadian parliamentarians during his speech in the House of Commons.

Canada in particular has large quantities of critical minerals [emphasis mine] that are essential for our clean energy future, for the world’s clean energy future.

I don’t think there’s any question that the US knows how much, where, and how easily ‘extractable’ Canadian critical minerals might be.

Pressure builds

On the same day (Monday, February 3, 2025) the tariffs were postponed for a month, Trudeau had two telephone calls with US president Donald Trump. According to a February 9, 2025 article by Steve Chase and Stefanie Marotta for the Globe and Mail, Trump and his minions are exploring the possibility of acquiring Canada by means other than a trade war or economic domination,

He [Trudeau] talked about two phone conversations he had with Mr. Trump on Monday [February 3, 2025] before the President agreed to delay the steep tariffs on Canadian goods for 30 days.

During the calls, the Prime Minister recalled Mr. Trump referred to a four-page memo that included a list of grievances he had with Canadian trade and commercial rules, including the President’s false claim that US banks are unable to operate in Canada. …

In the second conversation with Mr. Trump on Monday, the Prime Minister told the summit, the President asked him whether he was familiar with the Treaty of 1908, a pact between the United States and Britain that defined the border between the United States and Canada. He told Mr. Trudeau he should look it up.

Mr. Trudeau told the summit he thought the treaty had been superseded by other developments such as the repatriation of the Canadian Constitution – in other words, that the border cannot be dissolved by repealing that treaty. He told the audience that international law would prevent the dissolution of the 1908 Treaty leading to the erasure of the border. For example, various international laws define sovereign borders, including the United Nations Charter, of which both countries are signatories and which has protections for territorial integrity.

A source familiar with the calls said Mr. Trump’s reference to the 1908 Treaty was taken as an implied threat. … [p. A3 in paper version]

I imagine Mr. Trump and/or his minions will keep trying to find one pretext or another for this attempt to absorb or annex or wage war (economically or otherwise) on Canada.

What makes Canadian (and Greenlandic) minerals and water so important?

You may have noticed the January 21, 2025 announcement by Mr. Trump about the ‘Stargate Project,’ a proposed US $500B AI infrastructure company (you can find more about the Stargate Project (Stargate LLC) in its Wikipedia entry).

Most likely not a coincidence, on February 10, 2025, President of France Emmanuel Macron announced a 109-billion-euro investment in the French AI sector, from the February 9, 2025 Reuters preannouncement article,

France will announce private sector investments totalling some 109 billion euros ($112.5 billion [US]) in its artificial intelligence sector during the Paris AI summit which opens on Monday, President Emmanuel Macron said.

The financing includes plans by Canadian investment firm [emphasis mine] Brookfield to invest 20 billion euros in AI projects in France and financing from the United Arab Emirates which could hit 50 billion euros in the years ahead, Macron’s office said.

Big projects, non? It’s no surprise that critical minerals will be necessary, but the need for massive amounts of water may come as one. My October 16, 2023 posting focuses on water and AI development, specifically ChatGPT-4,

A September 9, 2023 news item (an Associated Press article by Matt O’Brien and Hannah Fingerhut) on phys.org, also published September 12, 2023 on the Iowa Public Radio website, describes an unexpected cost of building ChatGPT and other AI agents, Note: Links [in the excerpt] have been removed,

The cost of building an artificial intelligence product like ChatGPT can be hard to measure.

But one thing Microsoft-backed OpenAI needed for its technology was plenty of water [emphases mine], pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.

As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.

But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]
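It’s hard to picture 1.7 billion gallons, so here’s a quick back-of-envelope check of that Olympic-pool comparison, in Python (a sketch assuming the standard 50 m x 25 m pool at the 2 m minimum depth):

# Back-of-envelope check of the "more than 2,500 Olympic-sized pools" figure.
# Assumes a 50 m x 25 m pool at the 2 m minimum depth, i.e., 2,500 cubic metres.
LITRES_PER_US_GALLON = 3.785
pool_volume_litres = 50 * 25 * 2 * 1000                           # 2.5 million litres
pool_volume_gallons = pool_volume_litres / LITRES_PER_US_GALLON   # ~660,000 US gallons

microsoft_2022_water_gallons = 1.7e9
print(round(microsoft_2022_water_gallons / pool_volume_gallons))  # ~2574 pools

The arithmetic checks out: roughly 2,570 pools, i.e., “more than 2,500.”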

As for how much water was diverted in Iowa for a data centre project, from my October 16, 2023 posting,

Jason Clayworth’s September 18, 2023 article for AXIOS describes the issue from the Iowan perspective, Note: Links [from the excerpt] have been removed,

Future data center projects in West Des Moines will only be considered if Microsoft can implement technology that can “significantly reduce peak water usage,” the Associated Press reports.

Why it matters: Microsoft’s five WDM data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.

The bottom line is that these technologies consume a lot of water and require critical minerals.

Greenland

Evan Dyer’s January 16, 2025 article for CBC news online describes both US military strategic interests and hunger for resources, Note 1: Article links have been removed; Note 2: I have added one link to a Wikipedia entry,

The person who first put a bug in Donald Trump’s ear about Greenland — if a 2022 biography is to be believed — was his friend Ronald Lauder, a New York billionaire and heir to the Estée Lauder cosmetics fortune.

But it would be wrong to believe that U.S. interest in Greenland originated with idle chatter at the country club, rather than real strategic considerations.

Trump’s talk of using force to annex Greenland — which would be an unprovoked act of war against a NATO ally — has been rebuked by Greenlandic, Danish and European leaders. A Fox News team that travelled to Greenland’s capital Nuuk reported back to the Trump-friendly show Fox & Friends that “most of the people we spoke with did not support Trump’s comments and found them offensive.”

Certainly, military considerations motivated the last U.S. attempt at buying Greenland in 1946.

The military value to the U.S. of acquiring Greenland is much less clear in 2025 than it was in 1946.

Russian nuclear submarines no longer need to traverse the GIUK gap [sometimes written G-I-UK; “an area in the northern Atlantic Ocean that forms a naval choke point. Its name is an acronym for Greenland, Iceland, and the United Kingdom, the gap being the two stretches of open ocean among these three landmasses.”]. They can launch their missiles from closer to home.

And in any case, the U.S. already has a military presence on Greenland, used for early warning, satellite tracking and marine surveillance. The Pentagon simply ignored Denmark’s 1957 ban on nuclear weapons on Greenlandic territory. Indeed, an American B-52 bomber carrying four hydrogen bombs crashed in Greenland in 1968.

“The U.S. already has almost unhindered access [emphasis mine], and just building on their relationship with Greenland is going to do far more good than talk of acquisition,” said Dwayne Menezes, director of the Polar Research and Policy Initiative in London.

The complication, he says, is Greenland’s own independence movement. All existing defence agreements involving the U.S. presence in Greenland are between Washington and the Kingdom of Denmark. [emphasis mine]

“They can’t control what’s happening between Denmark and Greenland,” Menezes said. “Over the long term, the only way to mitigate that risk altogether is by acquiring Greenland.”

Menezes also doesn’t believe U.S. interest in Greenland is purely military.

And Trump’s incoming national security adviser Michael Waltz [emphasis mine] appeared to confirm as much when asked by Fox News why the administration wanted Greenland.

“This is about critical minerals, this is about natural resources [emphasis mine]. This is about, as the ice caps pull back, the Chinese are now cranking out icebreakers and are pushing up there.”

While the United States has an abundance of natural resources, it risks coming up short in two vital areas: rare-earth minerals and freshwater.

Greenland’s apparent barrenness belies its richness in those two key 21st-century resources.

The U.S. rise to superpower was driven partly by the good fortune of having abundant reserves of oil, which fuelled its industrial growth. The country is still a net exporter of petroleum.

China, Washington’s chief strategic rival, had no such luck. It has to import more than two-thirds of its oil, and is now importing more than six times as much as it did in 2000.

But the future may not favour the U.S. as much as the past.

I stand corrected, where oil is concerned. From Dyer’s January 16, 2025 article, Note: Links have been removed,

It’s China, and not the U.S., that nature blessed with rich deposits of rare-earth elements, a collection of 17 metals such as yttrium and scandium that are increasingly necessary for high-tech applications from cellphones and flat-screen TVs to electric cars.

The rare-earth element neodymium is an essential part of many computer hard drives and defence systems including electronic displays, guidance systems, lasers, radar and sonar.

Three decades ago, the U.S. produced a third of the world’s rare-earth elements, and China about 40 per cent. By 2011, China had 97 per cent of world production, and its government was increasingly limiting and controlling exports.

The U.S. has responded by opening new mines and spurring recovery and recycling to reduce dependence on China.

Such efforts have allowed the U.S. to claw back about 20 per cent of the world’s annual production of rare-earth elements. But that doesn’t change the fact that China has about 44 million tonnes of reserves, compared to fewer than two million in the U.S.

“There’s a huge dependency on China,” said Menezes. “It offers China the economic leverage, in the midst of a trade war in particular, to restrict supply to the West, thus crippling industries like defence, the green transition. This is where Greenland comes in.”

Greenland’s known reserves are almost equivalent to those of the entire U.S., and much more may lie beneath its icebound landscape. 

“Greenland is believed to be able to meet at least 25 per cent of global rare-earth demand well into the future,” he said.

An abundance of freshwater

The melting ice caps referenced by Trump’s nominee for national security adviser are another Greenlandic resource the world is increasingly interested in.

Seventy per cent of the world’s freshwater is locked up in the Antarctic ice cap. Of the remainder, two-thirds is in Greenland, in a massive ice cap that is turning to liquid at nearly twice the volume of melting in Antarctica.

“We know this because you can weigh the ice sheet from satellites,” said Christian Schoof, a professor of Earth, ocean and atmospheric sciences at the University of British Columbia who spent part of last year in Greenland studying ice cap melting.

“The ice sheet is heavy enough that it affects the orbit of satellites going over it. And you can record the change in that acceleration of satellites due to the ice sheet over time, and directly weigh the ice sheet.”

“There is a growing demand for freshwater on the world market, and the use of the vast water potential in Greenland may contribute to meeting this demand,” the Greenland government announces on its website.

The Geological Survey of Denmark and Greenland found 10 locations that were suitable for the commercial exploitation of Greenland’s ice and water, and has already issued a number of licenses.

Schoof told CBC News that past projects that attempted to tow Greenlandic ice to irrigate farms in the Middle East “haven’t really taken off … but humans are resourceful and inventive, and we face some really significant issues in the future.”

For the U.S., those issues include the 22-year-long “megadrought” which has left the western U.S. [emphases mine] drier than at any time in the past 1,200 years, and which is already threatening the future of some American cities.
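An aside on Schoof’s “weigh the ice sheet from satellites” remark: the physics is Newton’s law of gravitation run backwards. A mass change ΔM at distance r changes the local gravitational acceleration by Δg = GΔM/r², so an observed Δg can be inverted to estimate ΔM. Here’s a toy illustration in Python (the Δg value is hypothetical, the lost ice is treated as a point mass, and real missions such as GRACE actually infer mass change from the changing separation of a pair of satellites rather than a single reading like this):

# Toy version of "weighing the ice sheet from orbit": invert Newton's law,
# delta_M = delta_g * r**2 / G. Illustrative numbers, not real mission data.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
r = 500e3           # assumed satellite altitude above the ice, metres

delta_g = -7.2e-8   # hypothetical year-over-year weakening of local gravity, m/s^2

delta_M = delta_g * r**2 / G                # kilograms
print(f"{delta_M / 1e12:.0f} gigatonnes")   # about -270 Gt, roughly the reported
                                            # scale of Greenland's recent annual loss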

As important as they are, there’s more than critical minerals and water, according to Dyer’s January 16, 2025 article,

Even the “rock flour” that lies under the ice cap could have great commercial and strategic importance.

Ground into nanoparticles by the crushing weight of the ice, research has revealed it to have almost miraculous properties, says Menezes.

“Scientists have found that Greenlandic glacial flour has a particular nutrient composition that enables it to be regenerative of soil conditions elsewhere,” he told CBC News. “It improves agricultural yields. It has direct implications for food security.”

Spreading Greenland rock flour on corn fields in Ghana produced a 30 to 50 per cent increase in crop yields. Similar yield gains occurred when it was spread on Danish fields that produce the barley for Carlsberg beer.

Canada

It’s getting a little tiring keeping up with Mr. Trump’s tariff tear (using ‘tear’ as a verbal noun; from the Cambridge Dictionary, “tear, verb: 1. to pull or be pulled apart, or to pull pieces off; 2. to move very quickly …”).

The bottom line is that Mr. Trump wants something and certainly Canadian critical minerals and water constitute either his entire interest or, at least, his main interest for now, with more to be determined later.

Niall McGee’s February 9, 2025 article for the Globe and Mail provides an overview of the US’s dependence on Canada’s critical minerals,

The US relies on Canada for a huge swath of its critical mineral imports, including 40 per cent of its primary nickel for its defence industry, 30 per cent of its uranium, which is used in its nuclear-power fleet, and 79 per cent of its potash for growing crops.

The US produces only small amounts of all three, while Canada is the world’s biggest potash producer, the second biggest in uranium, and number six in nickel.

If the US wants to buy fewer critical minerals from Canada, in many cases it would be forced to source them from hostile countries such as Russia and China.

Vancouver-based Teck Resources Ltd. is one of the few North American suppliers of germanium. The critical mineral is used in fibre-optic networks, infrared vision systems, and solar panels. The US relies on Canada for 23 per cent of its imports of germanium.

China in December [2024] banned exports of the critical mineral to the US, citing national security concerns. The ban raised fears of possible shortages for the US.

“It’s obvious we have a lot of what Trump wants to support America’s ambitions, from both an economic and a geopolitical standpoint,” says Martin Turenne, CEO of Vancouver-based FPX Nickel Corp., which is developing a massive nickel project in British Columbia. [p. B5 paper version]

Akshay Kulkarni’s January 15, 2025 article for CBC news online provides more details about British Columbia and its critical minerals, Note: Links have been removed,

The premier had suggested Tuesday [January 14, 2025] that retaliatory tariffs and export bans could be part of the response, and cited a smelter operation located in Trail, B.C. [emphasis mine; keep reading], which exports minerals that Eby [Premier of British Columbia, David Eby] said are critical for the U.S.

The U.S. and Canada both maintain lists of critical minerals — ranging from aluminum and tin to more obscure elements like ytterbium and hafnium — that both countries say are important for defence, energy production and other key areas.

Michael Goehring, the president of the Mining Association of B.C., said B.C. has access to or produces 16 of the 50 minerals considered critical by the U.S.

[Photo: Individual atoms of silicon and germanium are seen following an Atomic Probe Tomography (APT) measurement at Polytechnique Montreal. Both minerals are manufactured in B.C. (Christinne Muschi/The Canadian Press)]

“We have 17 critical mineral projects on the horizon right now, along with a number of precious metal projects,” he told CBC News on Tuesday [January 14, 2025].

“The 17 critical mineral projects alone represent some $32 billion in potential investment for British Columbia,” he added.

John Steen, director of the Bradshaw Research Institute for Minerals and Mining at the University of B.C., pointed to germanium — which is manufactured at Teck’s facility in Trail [emphasis mine] — as one of the materials most important to U.S industry.

There are a number of mines and manufacturing facilities across B.C. and Canada for critical minerals.

The B.C. government says the province is Canada’s largest producer of copper, and only producer of molybdenum, which are both considered critical minerals.

There’s also graphite, not in BC but in Québec. This April 8, 2023 article by Christian Paas-Lang for CBC news online focuses largely on issues of how to access and exploit graphite and also, importantly, Indigenous concerns, but this excerpt focuses on graphite as a critical mineral,

A mining project might not be what comes to mind when you think of the transition to a lower emissions economy. But embedded in electric vehicles, solar panels and hydrogen fuel storage are metals and minerals that come from mines like the one in Lac-des-Îles, Que.

The graphite mine, owned by the company Northern Graphite, is just one of many projects aimed at extracting what are now officially dubbed “critical minerals” — substances of significant strategic and economic importance to the future of national economies.

Lac-des-Îles is the only significant graphite mining project in North America, accounting for Canada’s contribution to an industry dominated by China.

There was another proposed graphite mine in Québec, which encountered significant push back from the local Indigenous community as noted in my November 26, 2024 posting, “Local resistance to Lomiko Metals’ Outaouais graphite mine.” The posting also provides a very brief update of graphite mining in Canada.

It seems to me that water does not get the attention that it should and that’s why I lead with water in my headline. Eric Reguly’s February 9, 2025 article in the Globe and Mail highlights some of the water issues facing the US, not just Iowa,

Water may be the real reason, or one of the top reasons, propelling his [Mr. Trump’s] desire to turn Canada into Minnesota North. Canadians represent 0.5 per cent of the globe’s population yet sit on 20% or more of its fresh water. Vast tracts of the United States routinely suffer from water shortages, which are drying up rivers – the once mighty Colorado River no longer reaches the Pacific Ocean – shrinking aquifers beneath farmland and preventing water-intensive industries from building factories. Warming average temperatures will intensify the shortages. [p. B2 in paper version]

Reguly is more interested in the impact water shortages have on industry. He also offers a brief history of US interest in acquiring Canadian water resources dating back to the first North America Free Trade Agreement (NAFTA) that came into effect on January 1, 1994.

A March 6, 2024 article by Elia Nilsen for CNN television news online details Colorado River geography and gives you a sense of just how serious the situation is, Note: Links have been removed,

Seven Western states are starting to plot a future for how much water they’ll draw from the dwindling Colorado River in a warmer, drier world.

The river is the lifeblood for the West – providing drinking water for tens of millions, irrigating crops, and powering homes and industry with hydroelectric dams.

This has bought states more time to figure out how to divvy up the river after 2026, when the current operating guidelines expire.

To that end, the four upper basin river states of Colorado, Utah, New Mexico and Wyoming submitted their proposal for how future cuts should be divvied up among the seven states to the federal government on Tuesday [March 5, 2024], and the three lower basin states of California, Arizona and Nevada submitted their plan on Wednesday [March 6, 2024].

One thing is clear from the competing plans: The two groups of states do not agree so far on who should bear the brunt of future cuts if water levels drop in the Colorado River basin.

As of a December 12, 2024 article by Shannon Mullane for watereducationcolorado.org, the states are still wrangling and they are not the only interested parties, Note: A link has been removed,

… officials from seven states are debating the terms of a new agreement for how to store, release and deliver Colorado River water for years to come, and they have until 2026 to finalize a plan. This month, the tone of the state negotiations soured as some state negotiators threw barbs and others called for an end to the political rhetoric and saber-rattling.

The state negotiators are not the only players at the table: Tribal leaders, federal officials, environmental organizations, agricultural groups, cities, industrial interests and others are weighing in on the process.

Water use from the Colorado River has international implications as this February 5, 2025 essay (Water is the other US-Mexico border crisis, and the supply crunch is getting worse) by Gabriel Eckstein, professor of law at Texas A&M University, and Rosario Sanchez, senior research scientist at the Texas Water Resources Institute at Texas A&M University, for The Conversation makes clear, Note: Links have been removed,

The Colorado River provides water to more than 44 million people, including seven U.S. and two Mexican states, 29 Indian tribes and 5.5 million acres of farmland. Only about 10% of its total flow reaches Mexico. The river once emptied into the Gulf of California, but now so much water is withdrawn along its course that since the 1960s it typically peters out in the desert.

At least 28 aquifers – underground rock formations that contain water – also traverse the border. With a few exceptions, very little information on these shared resources exists. One thing that is known is that many of them are severely overtapped and contaminated.

Nonetheless, reliance on aquifers is growing as surface water supplies dwindle. Some 80% of groundwater used in the border region goes to agriculture. The rest is used by farmers and industries, such as automotive and appliance manufacturers.

Over 10 million people in 30 cities and communities throughout the border region rely on groundwater for domestic use. Many communities, including Ciudad Juarez; the sister cities of Nogales in both Arizona and Sonora; and the sister cities of Columbus in New Mexico and Puerto Palomas in Chihuahua, get all or most of their fresh water from these aquifers.

A booming region

About 30 million people live within 100 miles (160 kilometers) of the border on both sides. Over the next 30 years, that figure is expected to double.

Municipal and industrial water use throughout the region is also expected to increase. In Texas’ lower Rio Grande Valley, municipal use alone could more than double by 2040.

At the same time, as climate change continues to worsen, scientists project that snowmelt will decrease and evaporation rates will increase. The Colorado River’s baseflow – the portion of its volume that comes from groundwater, rather than from rain and snow – may decline by nearly 30% in the next 30 years.

Precipitation patterns across the region are projected to be uncertain and erratic for the foreseeable future. This trend will fuel more extreme weather events, such as droughts and floods, which could cause widespread harm to crops, industrial activity, human health and the environment.

Further stress comes from growth and development. Both the Colorado River and Rio Grande are tainted by pollutants from agricultural, municipal and industrial sources. Cities on both sides of the border, especially on the Mexican side, have a long history of dumping untreated sewage into the Rio Grande. Of the 55 water treatment plants located along the border, 80% reported ongoing maintenance, capacity and operating problems as of 2019.

Drought across the border region is already stoking domestic and bilateral tensions. Competing water users are struggling to meet their needs, and the U.S. and Mexico are straining to comply with treaty obligations for sharing water [emphasis mine].

Getting back to Canada and water, Reguly’s February 9, 2025 article notes Mr. Trump’s attitude towards our water,

Mr. Trump’s transaction-oriented brain knows that water availability translates into job availability. If Canada were forced to export water in bulk to the United States, Canada would in effect be exporting jobs and America absorbing them. In the fall [2024] when he was campaigning, he called British Columbia “essentially a very large faucet” [emphasis mine] that could be used to overcome California’s permanent water deficit.

In Canada’s favour, Canadians have been united in their opposition to bulk water exports. That sentiment is codified in the Transboundary Waters Protection Act, which bans large scale removal from waterways shared with the United States. … [p. B2 in paper version]

It’s reassuring to read that we have some rules regarding water removal but British Columbia also has a water treaty with the US, the Columbia River Treaty, and an update to it lingers in limbo as Kirk Lapointe notes in his February 6, 2025 article for vancouverisawesome.com. Lapointe mentions shortcomings on both sides of the negotiating table for the delay in ratifying the update while expressing concern over Mr. Trump’s possible machinations should this matter cross his radar.

What about Ukraine’s critical minerals?

A February 13, 2025 article by Geoff Nixon for CBC news online provides some of the latest news on the situation between the US and Ukraine, Note: Links have been removed,

Ukraine has clearly grabbed the attention of U.S. President Donald Trump with its apparent willingness to share access to rare-earth resources with Washington, in exchange for its continued support and security guarantees.

Trump wants what he calls “equalization” for support the U.S. has provided to Ukraine in the wake of Russia’s full-scale invasion. And he wants this payment in the form of Ukraine’s rare earth minerals, metals “and other things,” as the U.S. leader put it last week.

U.S. Treasury Secretary Scott Bessent has travelled to Ukraine to discuss the proposition, which was first raised with Trump last fall [2024], telling reporters Wednesday [February 12, 2025] that he hoped a deal could be reached within days.

Bessent says such a deal could provide a “security shield” in post-war Ukraine. Ukrainian President Volodymyr Zelenskyy, meanwhile, said in his daily address that it would both strengthen Ukraine’s security and “give new momentum to our economic relations.”

But just how much trust can Kyiv put in a Trump-led White House to provide support to Ukraine, now and in the future? Ukraine may not be in a position to back away from the offer, with Trump’s interest piqued and U.S. support remaining critical for Kyiv after nearly three years of all-out war with Russia.

“I think the problem for Ukraine is that it doesn’t really have much choice,” said Oxana Shevel, an associate professor of political science at Boston’s Tufts University.

Then there’s the issue of the Ukrainian minerals, which have to remain in Kyiv’s hands in order for the U.S. to access them — a point Zelenskyy and other Ukraine officials have underlined.

There are more than a dozen elements considered to be rare earths, and Ukraine’s Institute of Geology says those that can be found in Ukraine include lanthanum, cerium, neodymium, erbium and yttrium. EU-funded research also indicates that Ukraine has scandium reserves. But the details of the data are classified.

Rare earths are used in manufacturing magnets that turn power into motion for electric vehicles, in cellphones and other electronics, as well as for scientific and industrial applications.

Trump has said he wants the equivalent of $500 billion US in rare earth minerals.

Yuriy Gorodnichenko, a professor of economics at the University of California, Berkeley, says any effort to develop and extract these resources won’t happen overnight and it’s unclear how plentiful they are.

“The fact is, nobody knows how much you have for sure there and what is the value of that,” he said in an interview.

“It will take years to do geological studies,” he said. “Years to build extraction facilities.” 

Just how desperate is the US?

Yes, the United States has oil but it doesn’t have much in the way of materials it needs for the new technologies and it’s running out of something very basic: water.

I don’t know how desperate the US is but Mr. Trump’s flailings suggest that the answer is very, very desperate.

*ETA February 18, 2025: For anyone interested in more information about water, Canada, and the US, see Joel Dryden’s February 18, 2025 article, “Trump’s musings on ‘very large faucet’ in Canada part of looming water crisis, say researchers,” for CBC news online, which offers more information about the situation.

DeepSeek, a Chinese rival to OpenAI and other US AI companies

There’s been quite the kerfuffle over DeepSeek during the last few days. This January 27, 2025 article by Alexandra Mae Jones for the Canadian Broadcasting Corporation (CBC) news online was my introduction to DeepSeek AI, Note: A link has been removed,

There’s a new player in AI on the world stage: DeepSeek, a Chinese startup that’s throwing tech valuations into chaos and challenging U.S. dominance in the field with an open-source model that they say they developed for a fraction of the cost of competitors.

DeepSeek’s free AI assistant — which by Monday [January 27, 2025] had overtaken rival ChatGPT to become the top-rated free application on Apple’s App Store in the United States — offers the prospect of a viable, cheaper AI alternative, raising questions on the heavy spending by U.S. companies such as Apple and Microsoft, amid a growing investor push for returns.

U.S. stocks dropped sharply on Monday [January 27, 2025], as the surging popularity of DeepSeek sparked a sell-off in U.S. chipmakers.

“[DeepSeek] performs as well as the leading models in Silicon Valley and in some cases, according to their claims, even better,” Sheldon Fernandez, co-founder of DarwinAI, told CBC News. “But they did it with a fractional amount of the resources is really what is turning heads in our industry.”

What is DeepSeek?

Little is known about the small Hangzhou startup behind DeepSeek, which was founded out of a hedge fund in 2023, but largely develops open-source AI models. 

Its researchers wrote in a paper last month that the DeepSeek-V3 model, launched on Jan. 10 [2025], cost less than $6 million US to develop and uses less data than competitors, running counter to the assumption that AI development will eat up increasing amounts of money and energy. 

Some analysts are skeptical about DeepSeek’s $6 million claim, pointing out that this figure only covers computing power. But Fernandez said that even if you triple DeepSeek’s cost estimates, it would still cost significantly less than its competitors. 

The open source release of DeepSeek-R1, which came out on Jan. 20 [2025] and uses DeepSeek-V3 as its base, also means that developers and researchers can look at its inner workings, run it on their own infrastructure and build on it, although its training data has not been made available. 

“Instead of paying OpenAI $20 a month or $200 a month for the latest advanced versions of these models, [people] can really get these types of features for free. And so it really upends a lot of the business model that a lot of these companies were relying on to justify their very high valuations.”

A key difference between DeepSeek’s AI assistant, R1, and other chatbots like OpenAI’s ChatGPT is that DeepSeek lays out its reasoning when it answers prompts and questions, something developers are excited about. 

“The dealbreaker is the access to the raw thinking steps,” Elvis Saravia, an AI researcher and co-founder of the U.K.-based AI consulting firm DAIR.AI, wrote on X, adding that the response quality was “comparable” to OpenAI’s latest reasoning model, o1.

U.S. dominance in AI challenged

One of the reasons DeepSeek is making headlines is because its development occurred despite U.S. actions to keep Americans at the top of AI development. In 2022, the U.S. curbed exports of computer chips to China, hampering their advanced supercomputing development.

The latest AI models from DeepSeek are widely seen to be competitive with those of OpenAI and Meta, which rely on high-end computer chips and extensive computing power.
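Because the models are open source, the “look at its inner workings” point is literal: anyone can download a checkpoint and watch the reasoning tokens appear. Here’s a minimal sketch using the Hugging Face transformers library and the smallest of the distilled R1 checkpoints (the model ID, prompt, and 512-token budget are my own choices for illustration):

# Minimal sketch: run a small distilled DeepSeek-R1 checkpoint locally and
# inspect its visible reasoning. Assumes torch, transformers, and accelerate
# are installed and that your hardware can hold the model.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# R1-style models emit their chain of reasoning, wrapped in <think> ... </think>
# tags, before the final answer, so the "raw thinking steps" noted above show
# up directly in the decoded text.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))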

Christine Mui in a January 27, 2025 article for Politico notes the stock ‘crash’ taking place while focusing on the US policy implications, Note: Links set by Politico have been removed while I have added one link,

A little-known Chinese artificial intelligence startup shook the tech world this weekend by releasing an OpenAI-like assistant, which shot to the No.1 ranking on Apple’s app store and caused American tech giants’ stocks to tumble.

From Washington’s perspective, the news raised an immediate policy alarm: It happened despite consistent, bipartisan efforts to stifle AI progress in China.

In tech terms, what freaked everyone out about DeepSeek’s R1 model is that it replicated — and in some cases, surpassed — the performance of OpenAI’s cutting-edge o1 product across a host of performance benchmarks, at a tiny fraction of the cost.

The business takeaway was straightforward. DeepSeek’s success shows that American companies might not need to spend nearly as much as expected to develop AI models. That both intrigues and worries investors and tech leaders.

The policy implications, though, are more complex. Washington’s rampant anxiety about beating China has led to policies that the industry has very mixed feelings about.

On one hand, most tech firms hate the export controls that stop them from selling as much to the world’s second-largest economy, and force them to develop new products if they want to do business with China. If DeepSeek shows those rules are pointless, many would be delighted to see them go away.

On the other hand, anti-China, protectionist sentiment has encouraged Washington to embrace a whole host of industry wishlist items, from a lighter-touch approach to AI rules to streamlined permitting for related construction projects. Does DeepSeek mean those, too, are failing? Or does it trigger a doubling-down?

DeepSeek’s success truly seems to challenge the belief that the future of American AI demands ever more chips and power. That complicates Trump’s interest in rapidly building out that kind of infrastructure in the U.S.

Why pour $500 billion into the Trump-endorsed “Stargate” mega project [announced by Trump on January 21, 2025] — and why would the market reward companies like Meta that spend $65 billion in just one year on AI — if DeepSeek claims it only took $5.6 million and second-tier Nvidia chips to train one of its latest models? (U.S. industry insiders dispute the startup’s figures and claim they don’t tell the full story, but even at 100 times that cost, it would be a bargain.)

Tech companies, of course, love the recent bloom of federal support, and it’s unlikely they’ll drop their push for more federal investment to match anytime soon. Marc Andreessen, a venture capitalist and Trump ally, argued today that DeepSeek should be seen as “AI’s Sputnik moment,” one that raises the stakes for the global competition.

That would strengthen the case that some American AI companies have been pressing for the new administration to invest government resources into AI infrastructure (OpenAI), tighten restrictions on China (Anthropic) and ease up on regulations to ensure their developers build “artificial general intelligence” before their geopolitical rivals.

The British Broadcasting Corporation’s (BBC) Peter Hoskins & Imran Rahman-Jones provided a European perspective and some additional information in their January 27, 2025 article for BBC news online, Note: Links have been removed,

US tech giant Nvidia lost over a sixth of its value after the surging popularity of a Chinese artificial intelligence (AI) app spooked investors in the US and Europe.

DeepSeek, a Chinese AI chatbot reportedly made at a fraction of the cost of its rivals, launched last week but has already become the most downloaded free app in the US.

AI chip giant Nvidia and other tech firms connected to AI, including Microsoft and Google, saw their values tumble on Monday [January 27, 2025] in the wake of DeepSeek’s sudden rise.

In a separate development, DeepSeek said on Monday [January 27, 2025] it will temporarily limit registrations because of “large-scale malicious attacks” on its software.

The DeepSeek chatbot was reportedly developed for a fraction of the cost of its rivals, raising questions about the future of America’s AI dominance and the scale of investments US firms are planning.

DeepSeek is powered by the open source DeepSeek-V3 model, which its researchers claim was trained for around $6m – significantly less than the billions spent by rivals.

But this claim has been disputed by others in AI.

The researchers say they use already existing technology, as well as open source code – software that can be used, modified or distributed by anybody free of charge.

DeepSeek’s emergence comes as the US is restricting the sale of the advanced chip technology that powers AI to China.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.

It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.

After DeepSeek-R1 was launched earlier this month, the company boasted of “performance on par with” one of OpenAI’s latest models when used for tasks such as maths, coding and natural language reasoning.

In Europe, Dutch chip equipment maker ASML ended Monday’s trading with its share price down by more than 7% while shares in Siemens Energy, which makes hardware related to AI, had plunged by a fifth.

“This idea of a low-cost Chinese version hasn’t necessarily been forefront, so it’s taken the market a little bit by surprise,” said Fiona Cincotta, senior market analyst at City Index.

“So, if you suddenly get this low-cost AI model, then that’s going to raise concerns over the profits of rivals, particularly given the amount that they’ve already invested in more expensive AI infrastructure.”

Singapore-based technology equity adviser Vey-Sern Ling told the BBC it could “potentially derail the investment case for the entire AI supply chain”.

Who founded DeepSeek?

The company was founded in 2023 by Liang Wenfeng in Hangzhou, a city in southeastern China.

The 40-year-old, an information and electronic engineering graduate, also founded the hedge fund that backed DeepSeek.

He reportedly built up a store of Nvidia A100 chips, now banned from export to China.

Experts believe this collection – which some estimates put at 50,000 – led him to launch DeepSeek, by pairing these chips with cheaper, lower-end ones that are still available to import.

Mr Liang was recently seen at a meeting between industry experts and the Chinese premier Li Qiang.

In a July 2024 interview with The China Academy, Mr Liang said he was surprised by the reaction to the previous version of his AI model.

“We didn’t expect pricing to be such a sensitive issue,” he said.

“We were simply following our own pace, calculating costs, and setting prices accordingly.”

A January 28, 2025 article by Daria Solovieva for salon.com covers much the same territory as the others and includes a few details about security issues,

The pace at which U.S. consumers have embraced DeepSeek is raising national security concerns similar to those surrounding TikTok, the social media platform that faces a ban unless it is sold to a non-Chinese company.

The U.S. Supreme Court this month upheld a federal law that requires TikTok’s sale. The Court sided with the U.S. government’s argument that the app can collect and track data on its 170 million American users. President Donald Trump has paused enforcement of the ban until April to try to negotiate a deal.

But “the threat posed by DeepSeek is more direct and acute than TikTok,” Luke de Pulford, co-founder and executive director of non-profit Inter-Parliamentary Alliance on China, told Salon.

DeepSeek is a fully Chinese company and is subject to Communist Party control, unlike TikTok which positions itself as independent from parent company ByteDance, he said. 

“DeepSeek logs your keystrokes, device data, location and so much other information and stores it all in China,” de Pulford said. “So you’ll never know if the Chinese state has been crunching your data to gain strategic advantage, and DeepSeek would be breaking the law if they told you.”  

I wonder if AI companies in other countries also log keystrokes, etc. Is it theoretically possible that one of those governments or their agencies could gain access to your data? It’s obvious in China, but people in other countries may well face the same issues without knowing it.

Censorship: DeepSeek and ChatGPT

Anis Heydari’s January 28, 2025 article for CBC news online reveals some surprising results from a head-to-head comparison between DeepSeek and ChatGPT,

The Chinese-made AI chatbot DeepSeek may not always answer some questions about topics that are often censored by Beijing, according to tests run by CBC News and The Associated Press, and is providing different information than its U.S.-owned competitor ChatGPT.

The new, free chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test it. 

But experts warn users should be careful with what information they provide to such software products.

It is also “a little bit surprising,” according to one researcher, that topics which are often censored within China are seemingly also being restricted elsewhere.

“A lot of services will differentiate based on where the user is coming from when deciding to deploy censorship or not,” said Jeffrey Knockel, who researches software censorship and surveillance at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy.

“With this one, it just seems to be censoring everyone.”

Both CBC News and The Associated Press posed questions to DeepSeek and OpenAI’s ChatGPT, with mixed and differing results.

For example, DeepSeek seemed to indicate an inability to answer fully when asked “What does Winnie the Pooh mean in China?” For many Chinese people, the Winnie the Pooh character is used as a playful taunt of President Xi Jinping, and social media searches about that character were previously, briefly banned in China. 

DeepSeek said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.

Then, abruptly, it added the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens,” and that all online content is managed under Chinese laws and socialist core values, with the aim of protecting national security and social stability.

CBC News was unable to produce this response. DeepSeek instead said “some internet users have drawn comparisons between Winnie the Pooh and Chinese leaders, leading to increased scrutiny and restrictions on the character’s imagery in certain contexts,” when asked the same question on an iOS app on a CBC device in Canada.

Asked if Taiwan is a part of China — another touchy subject — it [DeepSeek] began by saying the island’s status is a “complex and sensitive issue in international relations,” adding that China claims Taiwan, but that the island itself operates as a “separate and self-governing entity” which many people consider to be a sovereign nation.

But as that answer was being typed out, for both CBC and the AP, it vanished and was replaced with: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

… Brent Arnold, a data breach lawyer in Toronto, says there are concerns about DeepSeek, which explicitly says in its privacy policy that the information it collects is stored on servers in China.

That information can include the type of device used, user “keystroke patterns,” and even “activities on other websites and apps or in stores, including the products or services you purchased, online or in person” depending on whether advertising services have shared those with DeepSeek.

“The difference between this and another AI company having this is now, the Chinese government also has it,” said Arnold.

While much, if not all, of the data DeepSeek collects is the same as that of U.S.-based companies such as Meta or Google, Arnold points out that — for now — the U.S. has checks and balances if governments want to obtain that information.

“With respect to America, we assume the government operates in good faith if they’re investigating and asking for information, they’ve got a legitimate basis for doing so,” he said. 

Right now, Arnold says it’s not accurate to compare Chinese and U.S. authorities in terms of their ability to take personal information. But that could change.

“I would say it’s a false equivalency now. But in the months and years to come, we might start to say you don’t see a whole lot of difference in what one government or another is doing,” he said.

Graham Fraser’s January 28, 2025 article for BBC News online, comparing DeepSeek to the others (OpenAI’s ChatGPT and Google’s Gemini), took a different approach,

Writing Assistance

When you ask ChatGPT what the most popular reasons to use ChatGPT are, it says that assisting people to write is one of them.

From gathering and summarising information in a helpful format to even writing blog posts on a topic, ChatGPT has become an AI companion for many across different workplaces.

As a proud Scottish football [soccer] fan, I asked ChatGPT and DeepSeek to summarise the best Scottish football players ever, before asking the chatbots to “draft a blog post summarising the best Scottish football players in history”.

DeepSeek responded in seconds, with a top ten list – Kenny Dalglish of Liverpool and Celtic was number one. It helpfully summarised which position the players played in, their clubs, and a brief list of their achievements.

DeepSeek also detailed two non-Scottish players – Rangers legend Brian Laudrup, who is Danish, and Celtic hero Henrik Larsson. For the latter, it added “although Swedish, Larsson is often included in discussions of Scottish football legends due to his impact at Celtic”.

For its subsequent blog post, it did go into detail of Laudrup’s nationality before giving a succinct account of the careers of the players.

ChatGPT’s answer to the same question contained many of the same names, with “King Kenny” once again at the top of the list.

Its detailed blog post briefly and accurately went into the careers of all the players.

It concluded: “While the game has changed over the decades, the impact of these Scottish greats remains timeless.” Indeed.

For this fun test, DeepSeek was certainly comparable to its best-known US competitor.

Coding

Brainstorming ideas

Learning and research

Steaming ahead

The tasks I set the chatbots were simple but they point to something much more significant – the winner of the so-called AI race is far from decided.

For all the vast resources US firms have poured into the tech, their Chinese rival has shown their achievements can be emulated.

Reception from the science community

Days before the news outlets discovered DeepSeek, the company published a paper about its Large Language Models (LLMs) and its new chatbot on arXiv. Here’s a little more information,

DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

[over 100 authors are listed]

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.

Cite as: arXiv:2501.12948 [cs.CL]
(or arXiv:2501.12948v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2501.12948

Submission history

From: Wenfeng Liang [view email]
[v1] Wed, 22 Jan 2025 15:19:35 UTC (928 KB)

You can also find a PDF version of the paper here or another online version here at Hugging Face.

As for the science community’s response, the title of Elizabeth Gibney’s January 23, 2025 article “China’s cheap, open AI model DeepSeek thrills scientists” for Nature says it all, Note: Links have been removed,

A Chinese-built large language model called DeepSeek-R1 is thrilling scientists as an affordable and open rival to ‘reasoning’ models such as OpenAI’s o1.

These models generate responses step-by-step, in a process analogous to human reasoning. This makes them more adept than earlier language models at solving scientific problems and could make them useful in research. Initial tests of R1, released on 20 January, show that its performance on certain tasks in chemistry, mathematics and coding is on par with that of o1 — which wowed researchers when it was released by OpenAI in September.

“This is wild and totally unexpected,” Elvis Saravia, an AI researcher and co-founder of the UK-based AI consulting firm DAIR.AI, wrote on X.

R1 stands out for another reason. DeepSeek, the start-up in Hangzhou that built the model, has released it as ‘open-weight’, meaning that researchers can study and build on the algorithm. Published under an MIT licence, the model can be freely reused but is not considered fully open source, because its training data has not been made available.

“The openness of DeepSeek is quite remarkable,” says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. By comparison, o1 and other models built by OpenAI in San Francisco, California, including its latest effort, o3, are “essentially black boxes”, he says.

DeepSeek hasn’t released the full cost of training R1, but it is charging people using its interface around one-thirtieth of what o1 costs to run. The firm has also created mini ‘distilled’ versions of R1 to allow researchers with limited computing power to play with the model. An “experiment that cost more than £300 with o1, cost less than $10 with R1,” says Krenn. “This is a dramatic difference which will certainly play a role in its future adoption.”

The kerfuffle has died down for now.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. A November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
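As a quick check of that arithmetic (my own, not the report’s): doubling every six months means two doublings a year, and the two figures quoted above are roughly consistent.

```python
# Quick sanity check (mine, not the report's) of the compute-growth claim:
# doubling every six months = two doublings per year.
years = 13
growth = 2 ** (2 * years)      # 2^26
print(f"{growth:,}")           # 67,108,864 -- about 67 million-fold

# Reaching the ~350-million-fold figure requires only a slightly faster
# pace, roughly one doubling every 5.5 months over the same period.
```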

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
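To make the registry idea concrete, here is a toy sketch of the kind of record such a system might keep for each transfer, combining the report’s reporting requirement with its suggested per-chip identifier. All field names are my own illustration, not the report’s.

```python
# Toy illustration (mine, not the report's) of a chip-registry record that
# chip producers, sellers and resellers might be required to file.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChipTransfer:
    chip_id: str               # the report's suggested unique per-chip identifier
    seller: str
    buyer: str
    transfer_date: date
    destination_country: str

registry: list[ChipTransfer] = [
    ChipTransfer("CHIP-000001", "FabCo", "CloudCo", date(2024, 2, 14), "GB"),
]

# Auditors could then aggregate holdings, e.g. chips per country:
by_country: dict[str, int] = {}
for record in registry:
    by_country[record.destination_country] = by_country.get(record.destination_country, 0) + 1
print(by_country)  # {'GB': 1}
```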

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
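The multiparty-consent idea is easy to sketch in miniature. In practice the report envisions cryptographic enforcement; this toy version (mine, not the report’s) just shows the k-of-n approval logic.

```python
# Toy k-of-n authorization check (mine, not the report's): a risky training
# run unlocks only when enough distinct, recognized parties approve.
def run_authorized(approvals: set[str], parties: set[str], threshold: int) -> bool:
    return len(approvals & parties) >= threshold

PARTIES = {"regulator_a", "regulator_b", "developer", "auditor"}

print(run_authorized({"developer", "auditor"}, PARTIES, 3))                 # False
print(run_authorized({"regulator_a", "developer", "auditor"}, PARTIES, 3))  # True
```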

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Charles Lieber, nanoscientist, and the US Dept. of Justice

Charles Lieber, professor at Harvard University and one of the world’s leading researchers in nanotechnology, went on trial on Tuesday, December 14, 2021.

Accused of hiding his ties to a People’s Republic of China (PRC)-run recruitment programme, Lieber is probably the highest-profile academic charged under the auspices of the US Department of Justice’s ‘China Initiative’, and one of the few such academics who was neither born in China nor of Chinese family origin.

This US National Public Radio (NPR) December 14, 2021 audio segment by Ryan Lucas provides a brief summary of the situation.

A December 14, 2021 article by Jess Aloe, Eileen Guo, and Antonio Regalado for the Massachusetts Institute of Technology (MIT) Technology Review lays out the situation in more detail (Note: A link has been removed),

In January of 2020, agents arrived at Harvard University looking for Charles Lieber, a renowned nanotechnology researcher who chaired the school’s department of chemistry and chemical biology. They were there to arrest him on charges of hiding his financial ties with a university in China. By arresting Lieber steps from Harvard Yard, authorities were sending a loud message to the academic community: failing to disclose such links is a serious crime.

Now Lieber is set to go on trial beginning December 14 [2021] in federal court in Boston. He has pleaded not guilty, and hundreds of academics have signed letters of support. In fact, some critics say it’s the Justice Department’s China Initiative—a far-reaching effort started in 2018 to combat Chinese economic espionage and trade-secret theft—that should be on trial, not Lieber. They are calling the prosecutions fundamentally flawed, a witch hunt that misunderstands the open-book nature of basic science and that is selectively destroying scientific careers over financial misdeeds and paperwork errors without proof of actual espionage or stolen technology.

For their part, prosecutors believe they have a tight case. They allege that Lieber was recruited into China’s Thousand Talents Plan—a program aimed at attracting top scientists—and paid handsomely to establish a research laboratory at the Wuhan University of Technology, but hid the affiliation from US grant agencies when asked about it (read a copy of the indictment here). Lieber is facing six felony charges: two counts of making false statements to investigators, two counts of filing a false tax return, and two counts of failing to report a foreign bank account. [emphases mine; Note: None of these charges have been proved in court]

The case against Lieber could be a bellwether for the government, which has several similar cases pending against US professors alleging that they didn’t disclose their China affiliations to granting agencies.

As for the China Initiative (from the MIT Technology Review December 14, 2021 article),

The China Initiative was announced in 2018 by Jeff Sessions, then the Trump administration’s attorney general, as a central component of the administration’s tough stance toward China.

An MIT Technology Review investigation published earlier this month [December 2021] found that the China Initiative is an umbrella for various types of prosecutions somehow connected to China, with targets ranging from a Chinese national who ran a turtle-smuggling ring to state-sponsored hackers believed to be behind some of the biggest data breaches in history. In total, MIT Technology Review identified 77 cases brought under the initiative; of those, a quarter have led to guilty pleas or convictions, but nearly two-thirds remain pending.

The government’s prosecution of researchers like Lieber for allegedly hiding ties to Chinese institutions has been the most controversial, and fastest-growing, aspect of the government’s efforts. In 2020, half of the 31 new cases brought under the China Initiative were cases against scientists or researchers. These cases largely did not accuse the defendants of violating the Economic Espionage Act.

… hundreds of academics across the country, from institutions including Stanford University and Princeton University, signed a letter calling on Attorney General Merrick Garland to end the China Initiative. The initiative, they wrote, has drifted from its original mission of combating Chinese intellectual-property theft and is instead harming American research competitiveness by discouraging scholars from coming to or staying in the US.

Lieber’s case is the second [emphasis mine] China Initiative prosecution of an academic to end up in the courtroom. The only previous person to face trial [emphasis mine] on research integrity charges, University of Tennessee–Knoxville professor Anming Hu, was acquitted of all charges [emphasis mine] by a judge in June [2021] after a deadlocked jury led to a mistrial.

Ken Dilanian wrote an October 19, 2021 article for the (US) National Broadcasting Company’s (NBC) news online about Hu’s eventual acquittal and about the China Initiative (Note: Dilanian’s timeline for the acquittal differs from the timeline in the MIT Technology Review),

The federal government brought the full measure of its legal might against Anming Hu, a nanotechnology expert at the University of Tennessee.

But the Justice Department’s efforts to convict Hu as part of its program to crack down on illicit technology transfer to China failed — spectacularly. A judge acquitted him last month [September 2021] after a lengthy trial offered little evidence of anything other than a paperwork misunderstanding, according to local newspaper coverage. It was the second trial, after the first ended in a hung jury.

“The China Initiative has turned up very little by way of clear espionage and the transfer of genuinely strategic information to the PRC,” said Robert Daly, a China expert at the Wilson Center, referring to the country by its formal name, the People’s Republic of China. “They are mostly process crimes, disclosure issues. A growing number of voices are calling for an end to the China initiative because it’s seen as discriminatory.”

The China Initiative began under President Donald Trump’s attorney general, Jeff Sessions, in 2018. But concerns about Chinese espionage in the United States — and the transfer of technology to China through business and academic relationships — are bipartisan.

John Demers, who departed in June [2021] as head of the Justice Department’s National Security Division, said in an interview that the problem of technology transfer at universities is real. But he said he also believes conflict of interest and disclosure rules were not rigorously enforced for many years. For that reason, he recommended an amnesty program offering academics with undisclosed foreign ties a chance to come clean and avoid penalties. So far, the Biden administration has not implemented such a program.

When I first featured the Lieber case in a January 28, 2020 posting I was more focused on the financial elements,

ETA January 28, 2020 at 1645 hours: I found a January 28, 2020 article by Antonio Regalado for the MIT Technology Review which provides a few more details about Lieber’s situation,

“…

Big money: According to the charging document, Lieber, starting in 2011, agreed to help set up a research lab at the Wuhan University of Technology and “make strategic visionary and creative research proposals” so that China could do cutting-edge science.

He was well paid for it. Lieber earned a salary when he visited China worth up to $50,000 per month, as well as $150,000 a year in expenses in addition to research funds. According to the complaint, he got paid by way of a Chinese bank account but also was known to send emails asking for cash instead.

Harvard eventually wised up to the existence of a Wuhan lab using its name and logo, but when administrators confronted Lieber, he lied and said he didn’t know about a formal joint program, according to the government complaint.

This is messy not least because Lieber and the members of his Harvard lab have done some extraordinary work as per my November 15, 2019 (Human-machine interfaces and ultra-small nanoprobes) posting about injectable electronics.

World’s first ever graphene-enhanced sports shoes/sneakers/running shoes/runners/trainers

Regardless of what these shoes are called, they contain, apparently, some graphene. As to why you as a consumer might find that important, here’s more from a June 20, 2018 news item on Nanowerk,

The world’s first-ever sports shoes to utilise graphene – the strongest material on the planet – have been unveiled by The University of Manchester and British brand inov-8.

Collaborating with graphene experts at National Graphene Institute, the brand has been able to develop a graphene-enhanced rubber. They have developed rubber outsoles for running and fitness shoes that in testing have outlasted 1,000 miles and are scientifically proven to be 50% harder wearing.

The National Graphene Institute (located at the UK’s University of Manchester) June 20, 2018 press release, which originated the news item, provides a few details, none of them particularly technical or scientific, and makes no mention of studies (Note: Links have been removed),

Graphene is 200 times stronger than steel and at only a single atom thick it is the thinnest possible material, meaning it has many unique properties. inov-8 is the first brand in the world to use the superlative material in sports footwear, with its G-SERIES shoes available to pre-order from June 22nd [2018] ahead of going on sale from July 12th [2018].

The company first announced its intent to revolutionise the sports footwear industry in December last year. Six months of frenzied anticipation later, inov-8 has now removed all secrecy and let the world see these game-changing shoes.

Michael Price, inov-8 product and marketing director, said: “Over the last 18 months we have worked with the National Graphene Institute at The University of Manchester to bring the world’s toughest grip to the sports footwear market.

“Prior to this innovation, off-road runners and fitness athletes had to choose between a sticky rubber that works well in wet or sweaty conditions but wears down quicker and a harder rubber that is more durable but not quite as grippy. Through intensive research, hundreds of prototypes and thousands of hours of testing in both the field and laboratory, athletes now no longer need to compromise.”

Dr Aravind Vijayaraghavan, Reader in Nanomaterials at The University of Manchester, said: “Using graphene we have developed G-SERIES outsole rubbers that are scientifically tested to be 50% stronger, 50% more elastic and 50% harder wearing.

“We are delighted to put graphene on the shelves of 250 retail stores all over the world and make it accessible to everyone. Graphene is a versatile material with limitless potential and in coming years we expect to deliver graphene technologies in composites, coatings and sensors, many of which will further revolutionise sports products.”

The G-SERIES range is made up of three different shoes, each meticulously designed to meet the needs of athletes. THE MUDCLAW G 260 is for running over muddy mountains and obstacle courses, the TERRAULTRA G 260 for running long distances on hard-packed trails and the F-LITE G 290 for crossfitters working out in gyms. Each includes graphene-enhanced rubber outsoles and Kevlar – a material used in bulletproof vests – on the uppers.

Commenting on the patent-pending technology and the collaboration with The University of Manchester, inov-8 CEO Ian Bailey said: “This powerhouse forged in Northern England is going to take the world of sports footwear by storm. We’re combining science and innovation together with entrepreneurial speed and agility to go up against the major sports brands – and we’re going to win.

“We are at the forefront of a graphene sports footwear revolution and we’re not stopping at just rubber outsoles. This is a four-year innovation project which will see us incorporate graphene into 50% of our range and give us the potential to halve the weight of running/fitness shoes without compromising on performance or durability.”

Graphene is produced from graphite, which was first mined in the Lake District fells of Northern England more than 450 years ago. inov-8 too was forged in the same fells, albeit much more recently in 2003. The brand now trades in 68 countries worldwide.

The scientists who first isolated graphene from graphite were awarded the Nobel Prize in 2010. Building on their revolutionary work, a team of over 300 staff at The University of Manchester has pioneered projects into graphene-enhanced prototypes, from sports cars and medical devices to aeroplanes. Now the University can add graphene-enhanced sports footwear to its list of world-firsts.

A picture of the ‘shoes’ has been provided,

Courtesy: National Graphene Institute at University of Manchester

You can find the company inov-8 here. As for more information about their graphene-enhanced shoes, there’s this, from the company’s ‘graphene webpage‘,

1555: Graphite was first mined in the Lake District fells of Northern England.

2004: Scientists at The University of Manchester isolate graphene from graphite.

2010: The Nobel Prize is awarded to the scientists for their ground-breaking experiments with graphene.

2018: inov-8 launch the first-ever sports footwear to utilise graphene, delivering the world’s toughest grip.

Ground-breaking technology

One atom thick carbon sheet

200 x stronger than steel

Thin, light, flexible, with limitless potential

inov-8 COLLABORATION WITH THE NATIONAL GRAPHENE INSTITUTE

Previously athletes had to choose between a sticky rubber that works well in wet or sweaty conditions but wears down quicker, and a harder rubber that is more durable but not quite as grippy. Through intensive research, hundreds of prototypes and thousands of hours of testing in both the field and laboratory, athletes now no longer need to compromise. The new rubber we have developed with the National Graphene Institute at The University of Manchester allows us to smash the limits of grip [sic]

The G-SERIES range is made up of three different shoes, each meticulously designed to meet the needs of athletes. Each includes graphene-enhanced rubber outsoles that deliver the world’s toughest grip and Kevlar – a material used in bulletproof vests – on the uppers.

Bulletproof material for running shoes?

As for Canadians eager to try out these shoes, you will likely have to go online or go to the US. Given how recently (June 19, 2018) this occurred, I’m mentioning US president Donald Trump’s comments that Canadians are notorious for buying shoes in the US and smuggling them back across the border into Canada. (Revelatory information for Canadians everywhere.) His bizarre comments occasioned this explanatory June 19, 2018 article by Jordan Weissmann for Slate.com,

During a characteristically rambling address before the National Federation of Independent Businesses on Tuesday [June 19, 2018], Donald Trump darted off into an odd tangent in which he suggested that Canadians were smuggling shoes across the U.S. border in order to avoid their country’s high tariffs.

There was a story two days ago in a major newspaper talking about people living in Canada coming into the United States and smuggling things back into Canada because the tariffs are so massive. The tariffs to get common items back into Canada are so high that they have to smuggle ‘em in. They buy shoes, then they wear ‘em. They scuff ‘em up. They make ‘em sound old or look old. No, we’re treated horribly. [emphasis mine]

Anyone engaged in this alleged practice would be avoiding payment to the Canadian government. How this constitutes poor treatment of the US government and/or US retailers is a bit of a puzzler.

Getting back to Weissmann and his article, he focuses on the source of the US president’s ‘information’.

As for graphene-enhanced ‘shoes’, I hope they are as advertised.

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
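Item 4 is describing something like preference-weighted scoring. Here is a toy sketch (my illustration, not Carnival’s system) of how a taste for “pricey reds” could end up ranking a violin concerto above a limbo competition:

```python
# Toy preference-weighted recommender (my illustration, not Carnival's).
preferences = {"wine": 0.9, "casino": 0.1, "gym": 0.4}   # learned guest weights

activities = {
    "violin concerto":   {"wine": 0.8},
    "limbo competition": {"wine": 0.1, "casino": 0.3, "gym": 0.5},
}

def score(tags: dict[str, float]) -> float:
    # Dot product of guest preferences with the activity's tags.
    return sum(preferences.get(tag, 0.0) * weight for tag, weight in tags.items())

for name in sorted(activities, key=lambda n: -score(activities[n])):
    print(f"{name}: {score(activities[name]):.2f}")
# violin concerto: 0.72
# limbo competition: 0.32
```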

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining their staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may, in future iterations, be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, electoral data has already been married with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …
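To make “projecting voting behavior from consumer behavior” a little more concrete, here is a deliberately toy sketch of the kind of model such claims evoke. The features, weights, and numbers are entirely invented and say nothing about Cambridge Analytica’s actual methods or data.

```python
# Entirely invented toy model: predicting a behaviour probability from a few
# consumer features via a logistic function. Cambridge Analytica claimed up
# to 5,000 data points per person; nothing here reflects their real system.
import math

def probability(features: dict[str, float], weights: dict[str, float], bias: float) -> float:
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic link

weights = {"outdoor_gear_buyer": 0.4, "news_magazine_subscriber": 0.7}
person = {"outdoor_gear_buyer": 1.0, "news_magazine_subscriber": 0.0}
print(f"{probability(person, weights, -0.5):.2f}")  # ~0.48 with these made-up numbers
```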

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly but if you’re only being exposed to certain kinds of messages then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of control has been added to our lives, and it is largely invisible. After all, the students in Elia Powers’s study didn’t realize their news feeds were being pre-curated.

Limitless energy and the International Thermonuclear Experimental Reactor (ITER)

Over 30 years in the dreaming, the International Thermonuclear Experimental Reactor (ITER) is now said to be halfway to completing construction. A December 6, 2017 ITER press release (received via email) makes the joyful announcement,

WORLD’S MOST COMPLEX MACHINE IS 50 PERCENT COMPLETED
ITER is proving that fusion is the future source of clean, abundant, safe and economic energy

The International Thermonuclear Experimental Reactor (ITER), a project to prove that fusion power can be produced on a commercial scale and is sustainable, is now 50 percent built to initial operation. Fusion is the same energy source from the Sun that gives the Earth its light and warmth.

ITER will use hydrogen fusion, controlled by superconducting magnets, to produce massive heat energy. In the commercial machines that will follow, this heat will drive turbines to produce electricity with these positive benefits:

* Fusion energy is carbon-free and environmentally sustainable, yet much more powerful than fossil fuels. A pineapple-sized amount of hydrogen offers as much fusion energy as 10,000 tons of coal.

* ITER uses two forms of hydrogen fuel: deuterium, which is easily extracted from seawater; and tritium, which is bred from lithium inside the fusion reactor. The supply of fusion fuel for industry and megacities is abundant, enough for millions of years.

* When the fusion reaction is disrupted, the reactor simply shuts down, safely and without external assistance. Tiny amounts of fuel are used, about 2-3 grams at a time, so there is no physical possibility of a meltdown accident.

* Building and operating a fusion power plant is targeted to be comparable to the cost of a fossil fuel or nuclear fission plant. But unlike today’s nuclear plants, a fusion plant will not have the costs of high-level radioactive waste disposal. And unlike fossil fuel plants, fusion will not have the environmental cost of releasing CO2 and other pollutants.
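As an aside from me, not part of the press release: the pineapple-versus-coal claim checks out at the back-of-the-envelope level. Here’s a rough calculation using textbook figures (about 17.6 MeV released per deuterium-tritium reaction, roughly 29 MJ/kg for coal); the numbers are my assumptions, not ITER’s,

```python
# Rough sanity check of the "pineapple vs. 10,000 tons of coal" claim.
# Physical constants and fuel values are textbook approximations,
# not ITER's official figures.

MEV_TO_J = 1.602e-13      # joules per MeV
AMU_TO_KG = 1.661e-27     # kilograms per atomic mass unit

# One D-T fusion reaction releases ~17.6 MeV from ~5 amu of fuel.
e_per_kg_fusion = (17.6 * MEV_TO_J) / (5 * AMU_TO_KG)  # ~3.4e14 J/kg

e_per_kg_coal = 2.9e7     # ~29 MJ/kg for good-quality coal

fuel_kg = 1.0             # a pineapple-sized kilogram of D-T fuel
coal_tonnes = fuel_kg * e_per_kg_fusion / e_per_kg_coal / 1000
print(f"{coal_tonnes:,.0f} tonnes of coal")  # prints ~12,000
```

So a kilogram of fusion fuel is worth on the order of 10,000 tonnes of coal; the press release’s figure is the right order of magnitude.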

ITER is the most complex science project in human history. The hydrogen plasma will be heated to 150 million degrees Celsius, ten times hotter than the core of the Sun, to enable the fusion reaction. The process happens in a donut-shaped reactor, called a tokamak(*), which is surrounded by giant magnets that confine and circulate the superheated, ionized plasma away from the metal walls. The superconducting magnets must be cooled to minus 269°C, as cold as interstellar space.

The ITER facility is being built in Southern France by a scientific partnership of 35 countries. ITER’s specialized components, roughly 10 million parts in total, are being manufactured in industrial facilities all over the world. They are subsequently shipped to the ITER worksite, where they must be assembled, piece-by-piece, into the final machine.

Each of the seven ITER members (the European Union, China, India, Japan, Korea, Russia, and the United States) is fabricating a significant portion of the machine. This adds to ITER’s complexity.

In a message dispatched on December 1 [2017] to top-level officials in ITER member governments, the ITER project reported that it had completed 50 percent of the “total construction work scope through First Plasma” (**). First Plasma, scheduled for December 2025, will be the first stage of operation for ITER as a functional machine.

“The stakes are very high for ITER,” writes Bernard Bigot, Ph.D., Director-General of ITER. “When we prove that fusion is a viable energy source, it will eventually replace burning fossil fuels, which are non-renewable and non-sustainable. Fusion will be complementary with wind, solar, and other renewable energies.

“ITER’s success has demanded extraordinary project management, systems engineering, and almost perfect integration of our work.

“Our design has taken advantage of the best expertise of every member’s scientific and industrial base. No country could do this alone. We are all learning from each other, for the world’s mutual benefit.”

The ITER 50 percent milestone is getting significant attention.

“We are fortunate that ITER and fusion has had the support of world leaders, historically and currently,” says Director-General Bigot. “The concept of the ITER project was conceived at the 1985 Geneva Summit between Ronald Reagan and Mikhail Gorbachev. When the ITER Agreement was signed in 2006, it was strongly supported by leaders such as French President Jacques Chirac, U.S. President George W. Bush, and Indian Prime Minister Manmohan Singh.

“More recently, President Macron and U.S. President Donald Trump exchanged letters about ITER after their meeting this past July. One month earlier, President Xi Jinping of China hosted Russian President Vladimir Putin and other world leaders in a showcase featuring ITER and fusion power at the World EXPO in Astana, Kazakhstan.

“We know that other leaders have been similarly involved behind the scenes. It is clear that each ITER member understands the value and importance of this project.”

Why use this complex manufacturing arrangement?

More than 80 percent of the cost of ITER, about $22 billion or €18 billion, is contributed in the form of components manufactured by the partners. Many of these massive components of the ITER machine must be precisely fitted; for example, 17-meter-high magnets with less than a millimeter of tolerance. Each component must be ready on time to fit into the Master Schedule for machine assembly.

Members asked for this deal for three reasons. First, it means that most of the ITER costs paid by any member are actually paid to that member’s companies; the funding stays in-country. Second, the companies working on ITER build new industrial expertise in major fields such as electromagnetics, cryogenics, robotics, and materials science. Third, this new expertise leads to innovation and spin-offs in other fields.

For example, expertise gained working on ITER’s superconducting magnets is now being used to map the human brain more precisely than ever before.

The European Union is paying 45 percent of the cost; China, India, Japan, Korea, Russia, and the United States each contribute 9 percent. All members share in ITER’s technology; they receive equal access to the intellectual property and innovation that come from building ITER.

When will commercial fusion plants be ready?

ITER scientists predict that fusion plants will start to come on line as soon as 2040. The exact timing, according to fusion experts, will depend on the level of public urgency and political will that translates to financial investment.

How much power will they provide?

The ITER tokamak will produce 500 megawatts of thermal power. This size is suitable for studying a “burning” or largely self-heating plasma, a state of matter that has never been produced in a controlled environment on Earth. In a burning plasma, most of the plasma heating comes from the fusion reaction itself. Studying the fusion science and technology at ITER’s scale will enable optimization of the plants that follow.

A commercial fusion plant will be designed with a slightly larger plasma chamber, for 10-15 times more electrical power. A 2,000-megawatt fusion electricity plant, for example, would supply 2 million homes.
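The homes figure is easy to verify with the press release’s own numbers; here’s the arithmetic (my own check, assuming ‘supply’ means average continuous load),

```python
# Checking "a 2,000-megawatt fusion plant would supply 2 million homes"
# using only the press release's numbers and unit conversions.
plant_mw = 2000
homes = 2_000_000

avg_kw_per_home = plant_mw * 1000 / homes          # 1.0 kW continuous
kwh_per_home_per_year = avg_kw_per_home * 24 * 365 # 8,760 kWh/yr
print(avg_kw_per_home, kwh_per_home_per_year)
```

An average continuous load of 1 kW, or roughly 8,760 kWh per year, is in the right range for a North American household, so the claim is plausible as an average, though it ignores peak demand and transmission losses.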

How much would a fusion plant cost and how many will be needed?

The initial capital cost of a 2,000-megawatt fusion plant will be in the range of $10 billion. These capital costs will be offset by extremely low operating costs, negligible fuel costs, and infrequent component replacement costs over the 60-year-plus life of the plant. Capital costs will decrease with large-scale deployment of fusion plants.

At current electricity usage rates, one fusion plant would be more than enough to power a city the size of Washington, D.C. The entire D.C. metropolitan area could be powered with four fusion plants, with zero carbon emissions.

“If fusion power becomes universal, the use of electricity could be expanded greatly, to reduce the greenhouse gas emissions from transportation, buildings and industry,” predicts Dr. Bigot. “Providing clean, abundant, safe, economic energy will be a miracle for our planet.”

*     *     *

FOOTNOTES:

* “Tokamak” is a word of Russian origin meaning a toroidal or donut-shaped magnetic chamber. Tokamaks have been built and operated for the past six decades. They are today’s most advanced fusion device design.

** “Total construction work scope,” as used in ITER’s project performance metrics, includes design, component manufacturing, building construction, shipping and delivery, assembly, and installation.

It is an extraordinary project on many levels, as Henry Fountain notes in a March 27, 2017 article for the New York Times (Note: Links have been removed),

At a dusty construction site here amid the limestone ridges of Provence, workers scurry around immense slabs of concrete arranged in a ring like a modern-day Stonehenge.

It looks like the beginnings of a large commercial power plant, but it is not. The project, called ITER, is an enormous, and enormously complex and costly, physics experiment. But if it succeeds, it could determine the power plants of the future and make an invaluable contribution to reducing planet-warming emissions.

ITER, short for International Thermonuclear Experimental Reactor (and pronounced EAT-er), is being built to test a long-held dream: that nuclear fusion, the atomic reaction that takes place in the sun and in hydrogen bombs, can be controlled to generate power.

ITER will produce heat, not electricity. But if it works — if it produces more energy than it consumes, which smaller fusion experiments so far have not been able to do — it could lead to plants that generate electricity without the climate-affecting carbon emissions of fossil-fuel plants or most of the hazards of existing nuclear reactors that split atoms rather than join them.

Success, however, has always seemed just a few decades away for ITER. The project has progressed in fits and starts for years, plagued by design and management problems that have led to long delays and ballooning costs.

ITER is moving ahead now, with a director-general, Bernard Bigot, who took over two years ago after an independent analysis that was highly critical of the project. Dr. Bigot, who previously ran France’s atomic energy agency, has earned high marks for resolving management problems and developing a realistic schedule based more on physics and engineering and less on politics.

The site here is now studded with tower cranes as crews work on the concrete structures that will support and surround the heart of the experiment, a doughnut-shaped chamber called a tokamak. This is where the fusion reactions will take place, within a plasma, a roiling cloud of ionized atoms so hot that it can be contained only by extremely strong magnetic fields.

Here’s a rendering of the proposed reactor,

Source: ITER Organization

It seems the folks at the New York Times decided to remove the notes that help make sense of this image. However, it does get the idea across.

If I read the article rightly, the official cost in March 2017 was around 22 billion euros, and more will likely be needed. You can read Fountain’s article for more information about fusion and ITER or go to the ITER website.

I could have sworn a local (Vancouver area) company called General Fusion was involved in the ITER project but I can’t track down any sources for confirmation. The sole connection I could find is in a documentary about fusion technology,

Here’s a little context for the film from a July 4, 2017 General Fusion news release (Note: A link has been removed),

A new documentary featuring General Fusion has captured the exciting progress in fusion across the public and private sectors.

Let There Be Light made its international premiere at the South By Southwest (SXSW) music and film festival in March [2017] to critical acclaim. The film was quickly purchased by Amazon Video, where it will be available for more than 70 million users to stream.

Let There Be Light follows scientists at General Fusion, ITER and Lawrenceville Plasma Physics in their pursuit of a clean, safe and abundant source of energy to power the world.

The feature length documentary has screened internationally across Europe and North America. Most recently it was shown at the Hot Docs film festival in Toronto, where General Fusion founder and Chief Scientist Dr. Michel Laberge joined fellow fusion physicist Dr. Mark Henderson from ITER at a series of Q&A panels with the filmmakers.

Laberge and Henderson were also interviewed by the popular CBC radio science show Quirks and Quarks, discussing different approaches to fusion, its potential benefits, and the challenges it faces.

It is yet to be confirmed when the film will be released for streaming; check Amazon Video for details.

You can find out more about General Fusion here.

Brief final comment

ITER is a breathtaking effort, but if you’ve read about other large-scale projects, such as building a railway across the Canadian Rocky Mountains, establishing telecommunications in an astonishing number of countries around the world, getting someone to the moon, eliminating smallpox, or building the pyramids, its troubles seem like standard operating procedure, both for the successes I’ve described and for the failures we’ve forgotten. Where ITER will finally rest on the continuum between success and failure is yet to be determined, but the problems experienced so far are not necessarily a predictor.

I wish the engineers, scientists, visionaries, and others great success with finding better ways to produce energy.

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a Twitter feed, and an article in an online magazine to thank for this bumper crop of news.

Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, to be held Nov. 1 – 3, 2017 in Ottawa, Ontario for the third year in a row, has a SuperSaver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out: you have until September 3rd before prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary, although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada: Intégrer l’équité, promouvoir la diversité’,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for their conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving on to the conference, the organizers have added two panels to the programme (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Centre: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and, so far, there’s no word as to who it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have a chief scientist, after Québec and Alberta. There is apparently one in Alberta, but there doesn’t seem to be a government webpage, and his LinkedIn profile doesn’t include this title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities, with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witnesses for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons – PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017, and they haven’t yet scheduled any meetings, although they are to be held in September. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got their list of ‘science’ submissions, but it’s definitely worth checking, as there are some odd omissions such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s Twitter feed,

I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn, editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy. An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum. From 2001 to 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press. She is a three-time winner of a National Newspaper Award: twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

12-2 pm

Fairmont Château Laurier, Laurier Room
1 Rideau Street, Ottawa

rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event, so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments and the other comments I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for Maclean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:

Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive governments—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

Incisive and acerbic, Wells’ article is worth making time to read in its entirety.

Getting back to ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the conference. It could go either way: a surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rozsa for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, the first letter of each paragraph spelled out I-M-P-E-A-C-H. That followed a letter earlier this month from writer Jhumpa Lahiri and actor Kal Penn, whose joint letter of resignation from the President’s Committee on Arts and Humanities similarly spelled out R-E-S-I-S-T.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.

The US White House and its Office of Science and Technology Policy (OSTP)

It’s been a while since I first wrote this but I believe this situation has not changed.

There’s some consternation regarding the US Office of Science and Technology Policy’s (OSTP) diminishing size and lack of leadership. From a July 3, 2017 article by Bob Grant for The Scientist (Note: Links have been removed),

Three OSTP staffers did leave last week, but it was because their prearranged tenures at the office had expired, according to an administration official familiar with the situation. “I saw that there were some tweets and what-not saying that it’s zero,” the official tells The Scientist. “That is not true. We have plenty of PhDs that are still on staff that are working on science. All of the work that was being done by the three who left on Friday had been transitioned to other staffers.”

At least one of the tweets that the official is referring to came from Eleanor Celeste, who announced leaving OSTP, where she was the assistant director for biomedical and forensic sciences. “science division out. mic drop,” she tweeted on Friday afternoon.

The administration official concedes that the OSTP is currently in a state of “constant flux” and at a “weird transition period” at the moment, and expects change to continue. “I’m sure that the office will look even more different in three months than it does today, than it did six months ago,” the official says.

Jeffrey Mervis in two articles for Science Magazine is able to provide more detail. From his July 11, 2017 article,

OSTP now has 35 staffers, says an administration official who declined to be named because they weren’t authorized to speak to the media. Holdren [John Holdren], who in January [2017] returned to Harvard University, says the plunge in staff levels is normal during a presidential transition. “But what’s shocking is that, this far into the new administration, the numbers haven’t gone back up.”

The office’s only political appointee is Michael Kratsios, a former aide to Trump confidant and Silicon Valley billionaire Peter Thiel. Kratsios is serving as OSTP’s deputy chief technology officer and de facto OSTP head. Eight new detailees have arrived from other agencies since the inauguration.

Although there has been no formal reorganization of OSTP, a “smaller, more collaborative staff” is now grouped around three areas—science, technology, and national security—according to the Trump aide. Three holdovers from Obama’s OSTP are leading teams focused on specific themes—Lloyd Whitman in technology, Chris Fall in national security, and Deerin Babb-Brott in environment and energy. They report to Kratsios and Ted Wackler, a career civil servant who was Holdren’s deputy chief of staff and who joined OSTP under former President George W. Bush.

“It’s a very flat structure,” says the Trump official, consistent with the administration’s view that “government should be looking for ways to do more with less.” Ultimately, the official adds, the goal is for OSTP to have “probably closer to 50 [people].”

A briefing book prepared by Obama’s outgoing OSTP staff may be a small but telling indication of the office’s current status. The thick, three-ring binder, covering 100 issues, was modeled on one that Holdren received from John “Jack” Marburger, Bush’s OSTP director. “Jack did a fabulous job of laying out what OSTP does, including what reports it owes Congress, so we decided to do likewise,” Holdren says. “But nobody came [from Trump’s transition team] to collect it until a week before the inauguration.”

That person was Reed Cordish, the 43-year-old scion of billionaire real estate developer David Cordish. An English major in college, Reed Cordish was briefly a professional tennis player before joining the family business. He “spent an hour with us and took the book away,” Holdren says. “He told us, ‘This is an important operation and I’ll do my best to see that it flourishes.’ But we don’t know … whether he has the clout to make that happen.”

Cordish is now assistant to the president for intragovernmental and technology initiatives. He works in the new Office of American Innovation led by presidential son-in-law Jared Kushner. That office arranged a recent meeting with high-tech executives, and is also leading yet another White House attempt to “reinvent” government.

Trump has renewed the charter of the National Science and Technology Council, a multiagency group that carries out much of the day-to-day work of advancing the president’s science initiatives. … Still pending is the status of the President’s Council of Advisors on Science and Technology [emphasis mine], a body of eminent scientists and high-tech industry leaders that went out of business at the end of the Obama administration.

Mervis’ July 12, 2017 article is in the form of a Q&A (question and answer) session with the previously mentioned John Holdren, director of the OSTP in Barack Obama’s administration,

Q: Why did you have such a large staff?

A: One reason was to cover the bases. We knew from the start that the Obama administration thought cybersecurity would be an important issue and we needed to be capable in that space. We also knew we needed people who were capable in climate change, in science and technology for economic recovery and job creation and sustained economic growth, and people who knew about advanced manufacturing and nanotechnology and biotechnology.

We also recruited to carry out specific initiatives, like in precision medicine, or combating antibiotic resistance, or the BRAIN [Brain Research through Advancing Innovative Neurotechnologies] initiative. Most of the work will go on in the departments and agencies, but you need someone to oversee it.

The reason we ended up with 135 people at our peak, which was twice the number during its previous peak in the Clinton administration’s second term, was that this president was so interested in knowing what science could do to advance his agenda, on economic recovery, or energy and climate change, or national intelligence. He got it. He didn’t need to be tutored on why science and technology matters.

I feel I’ve been given undue credit for [Obama] being a science geek. It wasn’t me. He came that way. He was constantly asking what we could do to move the needle. When the first flu epidemic, H1N1, came along, the president immediately turned to me and said, “OK, I want [the President’s Council of Advisors on Science and Technology] to look in depth on this, and OSTP, and NIH [National Institutes of Health], and [the Centers for Disease Control and Prevention].” And he told us to coordinate my effort on this stuff—inform me on what can be done and assemble the relevant experts. It was the same with Ebola, with the Macondo oil spill in the Gulf, with Fukushima, where the United States stepped up to work with the Japanese.

It’s not that we had all the expertise. But our job was to reach out to those who did have the relevant expertise.

Q: OSTP now has 35 people. What does that level of staffing say to you?

A: I have to laugh.

Q: Why?

A: When I left, on 19 January [2017], we were down to 30 people. And a substantial fraction of the 30 were people who, in a sense, keep the lights on. They were the OSTP general counsel and deputy counsel, the security officer and deputy, the budget folks, the accounting folks, the executive director of NSTC [National Science and Technology Council].

There are still some scientists there. But on 30 June the last scientist in the science division left.

Somebody said OSTP has shut down. But that’s not quite it. There was no formal decision to shut anything down. But they did not renew the contract of the last remaining science folks in the science division.

I saw somebody say, “Well, we still have some Ph.D.s left.” And that’s undoubtedly true. There are still some science Ph.D.s left in the national security and international affairs division. But because [OSTP] is headless, they have no direct connection to the president and his top advisers.

I don’t want to disparage the top people there. The top people there now are Michael Kratsios, who they named the deputy chief technology officer, and Ted Wackler, who was my deputy chief of staff and who was [former OSTP Director] Jack Marburger’s deputy, and who I kept because he’s a fabulously effective manager. And I believe that they are doing everything they can to make sure that OSTP, at the very least, does the things it has to do. … But right now I think OSTP is just hanging on.

Q: Why did some people choose to stay on?

A: A large portion of OSTP staff are borrowed from other agencies, and because the White House is the White House, we get the people we need. These are dedicated folks who want to get the job done. They want to see science and technology applied to advance the public interest. And they were willing to stay and do their best despite the considerable uncertainty about their future.

But again, most of the detailees, and the reason we went from 135 to 30 almost overnight, is that it’s pretty standard for the detailees to go back to their home agencies and wait for the next administration to decide what set of detailees it wants to advance its objectives.

So there’s nothing shocking that most of the detailees went back to their home agencies. The people who stayed are mostly employed directly by OSTP. What’s shocking is that, this far into the new administration, that number hasn’t gone back up. That is, they have only five more people than they had on January 20 [2017].

As I had been wondering about the OSTP and about the President’s Council of Advisors on Science and Technology (PCAST), it was good to get an update.

On a more parochial note, we in Canada are still waiting for an announcement about who our Chief Science Advisor might be.