Category Archives: risk

Safer aquatic systems with nano-encapsulated pesticides?

A May 19, 2025 news item on Nanowerk highlights research into making pesticides less toxic,

As global demand for food continues to rise, pesticide usage is intensifying—bringing unintended ecological consequences. Nanopesticides, which allow for controlled release and targeted action, are positioned as a more efficient and less environmentally disruptive solution. However, uncertainties persist, particularly regarding their fate in ecosystems post-application.

Traditional risk assessment methods often neglect early-stage emissions and fail to capture the complex behaviors of engineered nanomaterials in natural environments. The lack of robust ecotoxicity data and the absence of life-cycle-based regulatory guidelines further limit our understanding. These challenges underscore the urgent need to examine nanopesticide risks from synthesis to environmental degradation.

A May 19, 2025 Chinese Society for Environmental Sciences press release on EurekAlert (also on Newswise but credited to the Chinese Academy of Sciences), which originated the news item, provides more information, Note: Links have been removed,

Nanotechnology is transforming pesticide design with the promise of precision targeting and prolonged effectiveness. But how environmentally friendly are these innovations? A new study offers the first comprehensive life-cycle comparison between conventional imidacloprid (IMI) and its nano-encapsulated version (nano-IMI), tracking their environmental impacts from production through freshwater emissions. While nano-IMI incurs higher ecological costs during manufacturing, its environmental risks at the end-of-life stage are dramatically lower. Using an integrated assessment approach, researchers found that nano-IMI reduced freshwater ecotoxicity impact scores by up to five orders of magnitude compared to IMI. These findings highlight the importance of evaluating agrochemicals through a full lifecycle lens when developing safer alternatives.

To address these concerns, researchers from Jinan University and the University of Wisconsin–Madison published a study (DOI: 10.1016/j.ese.2025.100565) in Environmental Science and Ecotechnology on April 25, 2025. The team evaluated a nano-encapsulated version of imidacloprid (nano-IMI) against the conventional formulation (IMI) using a novel framework that integrates life cycle assessment (LCA), the USEtox ecotoxicity model, and the SimpleBox4Nano/SimpleBox fate model. This approach enabled the researchers to assess both production-stage environmental burdens and freshwater ecotoxicity, offering one of the most complete comparisons of nano- versus conventional pesticide formulations to date. The researchers chose imidacloprid, a widely used neonicotinoid insecticide, as a representative case.

Their analysis showed that producing nano-IMI resulted in approximately four times greater ecotoxicity than conventional IMI, mainly due to the energy-intensive encapsulation process. However, once released into the environment, nano-IMI behaved differently. Modeling across various rainfall conditions revealed that nano-IMI had significantly lower freshwater emissions, thanks to its high soil retention and aggregation tendencies in water. Even when accounting for the eventual release of the active ingredient from nano-IMI, the overall ecological impact remained far below that of conventional IMI. These results suggest that although nano-formulations may increase production-related impacts, they can drastically reduce environmental harm during use and disposal.
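For readers who want to see the shape of that trade-off, here is a minimal Python sketch of the lifecycle bookkeeping involved. It is an illustration under stated assumptions, not the study's model: the function, the numbers, and the characterization factor are all invented,

```python
# A minimal, purely illustrative sketch of the lifecycle bookkeeping
# described above. Every name and number here is invented; the actual
# study couples full LCA inventories with the USEtox ecotoxicity model
# and the SimpleBox4Nano/SimpleBox fate model.

def lifecycle_impact(production_score: float,
                     freshwater_emission_kg: float,
                     characterization_factor: float) -> float:
    """Total score = production-stage burden + emission mass x toxicity factor."""
    return production_score + freshwater_emission_kg * characterization_factor

# Invented numbers shaped to mirror the reported pattern: nano-IMI costs
# roughly four times more to produce, but far less of it reaches freshwater.
imi = lifecycle_impact(production_score=1.0,
                       freshwater_emission_kg=1.0,
                       characterization_factor=1e5)
nano_imi = lifecycle_impact(production_score=4.0,
                            freshwater_emission_kg=1e-5,  # high soil retention
                            characterization_factor=1e5)

print(f"IMI: {imi:.1f}, nano-IMI: {nano_imi:.1f}")  # nano-IMI's total is far lower
```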

“By combining traditional life cycle analysis with nano-specific fate modeling, we’ve introduced a robust tool for assessing the total environmental impact of nano-agrochemicals,” said Dr. Fan Wu, senior author of the study. “Our findings suggest that while nano-pesticides may require more resources to produce, their environmental behavior post-application can be far more favorable. This research lays the groundwork for smarter pesticide regulation and highlights the need to consider environmental risks across the entire product life cycle—not just at the point of use.”

This study marks an important step toward regulatory frameworks that reflect the unique behaviors of nanopesticides. The integrated modeling approach allows decision-makers to weigh the environmental trade-offs of production against long-term ecological risks. With the global nanopesticide market expected to grow from $735 million in 2024 to over $2 billion by 2032, such insights are both timely and essential. The research also highlights opportunities to improve manufacturing through green chemistry and sustainable nanocarrier design. Ultimately, full life-cycle assessments can help steer innovation toward agrochemical solutions that protect crops without compromising the health of aquatic ecosystems.

Here’s a link to and a citation for the paper,

A life cycle risk assessment of nanopesticides in freshwater by Mingyan Ke, Keshuo Zhang, Andrea L. Hicks, Fan Wu, Jing You. Environmental Science and Ecotechnology Volume 25, May 2025, 100565 DOI: https://doi.org/10.1016/j.ese.2025.100565 Creative Commons Licence: CC BY 4.0 (Attribution 4.0 International Deed)

This paper is open access.

No animal testing with 3D-printed skin imitation?

An April 3, 2025 news item on ScienceDaily announces work that promises to bring researchers closer to ending nanoparticle cosmetic testing on animals,

A research team from TU Graz [Austria] and the Vellore Institute of Technology in India is developing a 3D-printed skin imitation equipped with living cells in order to test nanoparticles from cosmetics without animal testing.

Directive 2010/63/EU laid down restrictions on animal testing of cosmetics and their ingredients throughout the EU. Therefore, there is an intense search for alternatives to test the absorption and toxicity of nanoparticles from cosmetics such as sun creams.

An April 3, 2025 Graz University of Technology (TU Graz) press release by Falko Schoklitsch (also on EurekAlert), which originated the news item, provides more detail about this international collaboration,

Hydrogels in which skin cells survive and grow

“The hydrogels for our skin imitation from the 3D printer have to fulfil a number of requirements,” says Karin Stana Kleinschek from the Institute of Chemistry and Technology of Biobased Systems. “The hydrogels must be able to interact with living skin cells. These cells not only have to survive, but also have to be able to grow and multiply.” The starting point for stable and 3D-printable structures is the hydrogel formulations developed at TU Graz. Hydrogels are characterised by their high water content, which creates ideal conditions for the integration and growth of cells. However, the high water content also requires methods for mechanical and chemical stabilisation of the 3D prints.

TU Graz is working intensively on cross-linking methods for stabilisation. Ideally, following nature’s example, the cross-linking takes place under very mild conditions and without the use of cytotoxic chemicals. After successful stabilisation, the cooperation partners in India test the resistance and toxicity of the 3D prints in cell culture. Only when skin cells in the hydrogel survive in cell culture for two to three weeks and develop skin tissue can we speak of a skin imitation. This skin imitation can then be used for further cell tests on cosmetics.

Successful tests

The first tests of 3D-printed hydrogels in cell culture were very successful. The cross-linked materials are non-cytotoxic and mechanically stable. “In the next step, the 3D-printed models (skin imitations) will be used to test nanoparticles,” says Karin Stana Kleinschek. “This is a success for the complementary research at TU Graz and VIT. Our many years of expertise in the field of material research for tissue imitations and VIT’s expertise in molecular and cell biology have complemented each other perfectly. We are now working together to further optimise the hydrogel formulations and validate their usefulness as a substitute for animal experiments.”

Here’s a link to and a citation for the paper,

Protocol for the fabrication of self-standing (nano)cellulose-based 3D scaffolds for tissue engineering by Tamilselvan Mohan, Matej Bračič, Doris Bračič, Florian Lackner, Chandran Nagaraj, Andreja Dobaj Štiglic, Rupert Kargl, Karin Stana Kleinschek. STAR Protocols Volume 6, Issue 1, 21 March 2025, 103583 DOI: https://doi.org/10.1016/j.xpro.2024.103583 (Creative Commons Licence: CC BY-NC 4.0)

This paper is open access.

Is your smart TV or your car spying on you?

Simple answer: Yes.

Smart television sets (TVs)

A December 10, 2024 Universidad Carlos III de Madrid press release (also on EurekAlert) offers details about the data collected by smart TVs,

A scientific team from Universidad Carlos III de Madrid (UC3M), in collaboration with University College London (England) and the University of California, Davis (USA), has found that smart TVs send viewing data to their servers. This allows brands to generate detailed profiles of consumers’ habits and tailor advertisements based on their behaviour.

The research revealed that this technology captures screenshots or audio to identify the content displayed on the screen using Automatic Content Recognition (ACR) technology. This data is then periodically sent to specific servers, even when the TV is used as an external screen or connected to a laptop.

“Automatic Content Recognition works like a kind of visual Shazam, taking screenshots or audio to create a viewer profile based on their content consumption habits. This technology enables manufacturers’ platforms to profile users accurately, much like the internet does,” explains one of the study’s authors, Patricia Callejo, a professor in UC3M’s Department of Telematics Engineering and a fellow at the UC3M-Santander Big Data Institute. “In any case, this tracking—regardless of the usage mode—raises serious privacy concerns, especially when the TV is used solely as a monitor.”

The findings, presented in November [2024] at the Internet Measurement Conference (IMC) 2024, highlight the frequency with which these screenshots are transmitted to the servers of the brands analysed: Samsung and LG. Specifically, the research showed that Samsung TVs sent this information every minute, while LG devices did so every 15 seconds. “This gives us an idea of the intensity of the monitoring and shows that smart TV platforms collect large volumes of data on users, regardless of how they consume content—whether through traditional TV viewing or devices connected via HDMI, like laptops or gaming consoles,” Callejo emphasises.
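For the technically curious, here is a rough Python sketch of what a capture-fingerprint-upload loop of the kind described might look like. This is an illustration only, not vendor code: the function names, placeholder payload, and server call are all assumptions,

```python
# Conceptual sketch of ACR-style telemetry: sample what's on screen,
# reduce it to a compact fingerprint, and report it on a timer.
import hashlib
import time

REPORT_INTERVAL_S = 60   # ~every minute (the cadence reported for Samsung)
# REPORT_INTERVAL_S = 15 # ~every 15 seconds (the cadence reported for LG)

def capture_screen() -> bytes:
    """Stand-in for grabbing a screenshot or audio snippet of whatever is
    on screen -- including HDMI input from a laptop or games console."""
    return str(time.time()).encode()  # placeholder payload

def fingerprint(sample: bytes) -> str:
    """ACR systems typically match compact fingerprints against a content
    database rather than shipping raw video."""
    return hashlib.sha256(sample).hexdigest()

def report(fp: str) -> None:
    # Stand-in for an HTTPS POST to the manufacturer's ACR endpoint.
    print(f"would send fingerprint {fp[:16]}... to the vendor")

for _ in range(3):  # a real device would loop for as long as it is on
    report(fingerprint(capture_screen()))
    time.sleep(REPORT_INTERVAL_S)
```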

To test the ability of TVs to block ACR tracking, the research team experimented with various privacy settings on smart TVs. The results demonstrated that, while users can voluntarily block the transmission of this data to servers, the default setting is for TVs to perform ACR. “The problem is that not all users are aware of this,” adds Callejo, who considers this lack of transparency in initial settings concerning. “Moreover, many users don’t know how to change the settings, meaning these devices function by default as tracking mechanisms for their activity.”

This research opens up new avenues for studying the tracking capabilities of cloud-connected devices that communicate with each other (commonly known as the Internet of Things, or IoT). It also suggests that manufacturers and regulators must urgently address the challenges that these new devices will present in the near future.

Here’s a link to and a citation for the paper,

Watching TV with the Second-Party: A First Look at Automatic Content Recognition Tracking in Smart TVs by Gianluca Anselmi, Yash Vekaria, Alexander D’Souza, Patricia Callejo, Anna Maria Mandalari, Zubair Shafiq. IMC ’24: Proceedings of the 2024 ACM on Internet Measurement Conference Pages 622 – 634 DOI: https://doi.org/10.1145/3646547.3689013 Published: 04 November 2024

This paper is open access.

Cars

This was on the Canadian Broadcasting Corporation’s (CBC) Day Six radio programme and the segment is embedded in a January 19, 2025 article by Philip Drost, Note: A link has been removed,

When a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas on New Year’s Day [2025], authorities were quickly able to gather information, crediting Elon Musk and Tesla for sending them info about the car and its driver. 

But for some, it’s alarming to discover that kind of information is so readily available.

“Most carmakers are selling drivers’ personal information. That’s something that we know based on their privacy policies,” Zoë MacDonald, a writer and researcher focussing on online privacy and digital rights, told Day 6 host Brent Bambury.

The Las Vegas Metropolitan Police Department said the Tesla CEO was able to provide key details about the truck’s driver, who authorities believe died of a self-inflicted gunshot wound at the scene, and its movements leading up to the explosion. 

With that data, they were able to determine that the explosives came from a device in the truck, not the vehicle itself.  

“We have now confirmed that the explosion was caused by very large fireworks and/or a bomb carried in the bed of the rented Cybertruck and is unrelated to the vehicle itself,” Musk wrote on X following the explosion.

To privacy experts, it’s another example of how your personal information can be used in ways you may not be aware of. And while this kind of data can be useful in an investigation, it’s by no means the only way companies use the information.  

“This is unfortunately not surprising that they have this data,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston.

“When you see it all together and know that a company has that information and continues at any point in time to hand it over to law enforcement, then you start to be a little uncomfortable, even if — in this case — it was a good thing for society.”

CBC News reached out to Tesla for comment but did not hear back before publication. 

I found this to be eye-opening, Note: A link has been removed,

MacDonald says the privacy concerns are a byproduct of all the technology new cars come with these days, including microphones, cameras, and sensors. The app that often accompanies a new car is collecting your information, too, she says.

The former writer for the Mozilla Foundation worked on a report in 2023 that examined vehicle privacy policies. For that study, MacDonald sifted through privacy policies from auto manufacturers. And she says the findings were staggering.

Most shocking of all is the information the car can learn from you, MacDonald says. It’s not just when you gas up or start your engine. Your vehicle can learn your sexual activity, disability status, and even your religious beliefs [emphasis mine].

MacDonald says it’s unclear how the car companies do this, because the information in the policies is so vague.

It can also collect biometric data, such as facial geometric features, iris scans, and fingerprints [emphasis mine].

This extends far past the driver. MacDonald says she read one privacy policy that required drivers to read out a statement every time someone entered the vehicle, to make them aware of the data the car collects, something that seems unlikely to go down before your Uber ride.

If that doesn’t bother you, then this might, Note: A link has been removed,

And car companies aren’t just keeping that information to themselves.

Confronted with these types of privacy concerns, many people simply say they have nothing to hide, Choffnes says. But when money is involved, they change their tune. 

According to an investigation from the New York Times in March of 2024, General Motors shared information on how people drive their cars with data brokers that create risk profiles for the insurance industry, which resulted in people’s insurance premiums going up [emphases mine]. General Motors has since said it has stopped sharing those details [emphasis mine].

“The issue with these kinds of services is that it’s not clear that it is being done in a correct or fair way, and that those costs are actually unfair to consumers,” said Choffnes. 

For example, if you make a hard stop to avoid an accident because of something the car in front of you did, the vehicle could register it as poor driving.
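A deliberately naive Python sketch shows why this kind of context-free scoring can misfire; the threshold, names, and scoring rule below are invented for illustration, and real telematics programs are more elaborate, but the underlying problem is the same,

```python
# Naive event-counting telematics: a hard brake counts against the driver
# whether it caused a near-miss or avoided one. The threshold is assumed.

HARD_BRAKE_THRESHOLD_MS2 = -4.0  # longitudinal deceleration, m/s^2

def risk_score(decelerations_ms2: list[float]) -> int:
    """Count 'hard' braking events; more events -> a 'riskier' driver."""
    return sum(1 for a in decelerations_ms2 if a <= HARD_BRAKE_THRESHOLD_MS2)

# One emergency stop, made to avoid a collision someone else caused:
cautious_driver = [-1.2, -0.8, -4.5]
print(risk_score(cautious_driver))  # -> 1, logged as "poor driving" regardless of context
```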

Drost’s January 19, 2025 article notes that the US Federal Trade Commission has proposed a five-year moratorium to prevent General Motors from selling geolocation and driver behavior data to consumer reporting agencies. In the meantime,

“Cars are a privacy nightmare. And that is not a problem that Canadian consumers can solve or should solve or should have the burden to try to solve for themselves,” said MacDonald.

If you have the time, read Drost’s January 19, 2025 article and/or listen to the embedded radio segment.

China’s ex-UK ambassador clashes with ‘AI godfather’ on panel at AI Action Summit in France (February 10 – 11, 2025)

The Artificial Intelligence (AI) Action Summit held from February 10 – 11, 2025 in Paris seems to have been pretty exciting. President Emmanuel Macron announced a 109-billion-euro investment in the French AI sector on February 9, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),

French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.

Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”

He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.

Prime Minister Justin Trudeau attended the AI Action Summit on Tuesday, February 11, 2025 according to a Canadian Broadcasting Corporation (CBC) news online article by Ashley Burke and Olivia Stefanovich,

Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said. 

The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.

Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.

“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.

“He nodded, and noted it, but it wasn’t a longer exchange than that.”

Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].

Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,

JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.

The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.

The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.

Macron also called for “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, and energy, and the need to “resynchronize with the rest of the world.”

“We are most of the time too slow,” he said.

The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.

Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.

“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.

Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.

Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.

Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”

But the U.S.-China rivalry overshadowed broader international talks.

The U.S.-China rivalry didn’t entirely overshadow the talks, though. At least one former Chinese diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,

A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.

Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.

She also said tensions between the US and China were impeding the ability to develop AI safely.

… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.

Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,

A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.

Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.

The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].

The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.

Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.

She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.

China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.

The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.

Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]

A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.

The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.

She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.

She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.

“The Chinese move faster [than the west] but it’s full of problems,” she said.

Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.

Most of the US tech giants do not share the tech which drives their products.

Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.

But Prof Bengio disagreed.

His view was that open source also left the tech wide open for criminals to misuse.

He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.

For anyone curious about Professor Bengio’s AI safety report, I have more information in a January 29, 2025 Université de Montréal (UdeM) press release,

The first international report on the safety of artificial intelligence, led by Université de Montréal computer-science professor Yoshua Bengio, was released today and promises to serve as a guide for policymakers worldwide. 

Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks. 

Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.

The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Co-operation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.

Towards a common understanding

The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics. 

In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal. 

As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.  

The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.  

Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably. 

Three distinct categories of AI risks are identified: 

  • Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons; 
  • System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems; 
  • Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.  

The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace. 

While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.   

Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future. 

“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.  

“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.” 

The report is more formally known as the International AI Safety Report 2025 and can be found on the gov.uk website.

There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.

You can find the Canadian Artificial Intelligence Safety Institute (or AI Safety Institute) here. My coverage of DeepSeek’s release, and the panic that ensued in the US artificial intelligence and business communities, is in my January 29, 2025 posting.

Robot rights at the University of British Columbia (UBC)?

Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that, while attention-getting, is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question & answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,

With the rapid development and proliferation of AI tools comes significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.

These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights. 

Why launch this course?

BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.

NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.

What are robot rights?

BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.

We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.

For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?

In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.

There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.

What is the status of robot rights in Canada?

BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.

The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.

Ontario is the only province to have adopted AI regulation, specifically a bill which regulates AI use within the public sector but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.

There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI Act, and the blueprint for an AI Bill of Rights in the U.S.

Interview language(s): English

Caption: Legal services online: a lawyer working on a laptop with virtual screen icons for business legislation, notary public, and justice. Courtesy: University of British Columbia

I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,

We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.

We will tackle topics such as:

· Deepfakes, cyberattacks, and autonomous vehicles

· Predictive policing [emphasis mine; see my November 23, 2017 posting “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction“], facial recognition, probabilistic DNA genotyping, and police robots 

· Access to justice: will AI enhance it or deepen inequality?

· Risk assessment algorithms 

· AI tools in legal practice 

· Critical and Indigenous perspectives on AI

· The future of AI, including legal personality, legal rights and criminal responsibility for AI

This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.

“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”

Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023). 


An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:

“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”

– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering

“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.” 

– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School

Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:

“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”

– Nathan Cheung, UBC law student 

“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”

– Isabelle Sweeney, UBC law student 

Key Dates:

  • October 29, 2024: UBC AI & Criminal Justice Initiative launches
  • November 19, 2024: AI & Criminal Justice: Primer released 
  • January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
    • AI & Criminal Justice: Cases and Commentary released 
    • Launch of new AI & Criminal Justice Seminar
    • Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
  • February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
  • April 2, 2025: AI & Law Student Symposium (hybrid) More Info & RSVP

Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A: “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.” should read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.

As for anyone who’s interested in robots and their rights, there’s this August 1, 2023 posting “Should robots have rights? Confucianism offers some ideas” featuring Carnegie Mellon University’s Tae Wan Kim (profile).

Smart toys spying on children?

Caption: Twelve toys were examined in a study on smart toys and privacy. Credit: University of Basel / Céline Emch

An August 26, 2024 University of Basel press release (also on EurekAlert) describes research into smart toys and privacy issues for the children who play with them,

Toniebox, Tiptoi, and Tamagotchi are smart toys, offering interactive play through software and internet access. However, many of these toys raise privacy concerns, and some even collect extensive behavioral data about children, report researchers at the University of Basel, Switzerland.

The Toniebox and the figurines it comes with are especially popular with small children. They’re much easier to use than standard music players, allowing kids to turn on music and audio content themselves whenever they want. All a child has to do is place a plastic version of Peppa Pig onto the box and the story starts to play. When the child wants to stop the story, they simply remove the figurine. To rewind and fast-forward, the child can tilt the box to the left or right, respectively.

A lot of parents are probably thinking, “Fantastic concept!” Not so fast – the Toniebox records exactly when it is activated and by which figurine, when the child stops playback, and to which spot they rewind or fast-forward. Then it sends the data to the manufacturer.

The Toniebox is one of twelve smart toys studied by researchers headed by Professor Isabel Wagner of the Department of Mathematics and Computer Science at the University of Basel. These included well-known toys like the Tiptoi smart pen, the Edurino learning app, and the Tamagotchi virtual pet as well as the Toniebox. The researchers also studied less well-known products like the Moorebot, a mobile robot with a camera and microphone, and Kidibuzz, a smartphone for kids with parental controls.

One focus of the analysis was security: is data traffic encrypted, and how well? The researchers also investigated data protection, transparency (how easy it is for users to find out what data is collected), and compliance with the EU General Data Protection Regulation. Wagner and her colleagues are presenting their results at the Annual Privacy Forum (https://privacyforum.eu/) in early September [2024]. Springer publishes all the conference contributions in the series Privacy Technologies and Policy.

Collect data while offline, send it while online

Neither the Toniebox nor the Tiptoi pen come out well with respect to security, as they do not securely encrypt data traffic. The two toys differ with regard to privacy concerns, though: While the Toniebox does collect data and send it to the manufacturer, the Tiptoi pen does not record how and when a child uses it.

Even if the Toniebox were operated offline and only temporarily connected to the internet while downloading new audio content, the device could store collected data locally and transmit it to the manufacturer at the next opportunity, Wagner surmises. “In another toy we’re currently studying that integrates ChatGPT, we’re seeing that log data regularly vanishes.” The system is probably set up to delete the local copy of transmitted data to optimize internal storage use, Wagner says.
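Here is a minimal Python sketch of the store-and-forward pattern Wagner describes: buffer events locally while offline, then upload them, and delete the local copy, at the next opportunity. All names and behaviour here are hypothetical; this is not Toniebox code,

```python
# Hypothetical store-and-forward telemetry: log locally, flush when online.
import json
import time

local_log = []  # on-device buffer

def record_event(event: str, figurine: str) -> None:
    """Log an interaction (start/stop/rewind) with a timestamp."""
    local_log.append({"t": time.time(), "event": event, "figurine": figurine})

def flush_if_online(online: bool) -> None:
    """Upload buffered events and delete the local copy to free storage."""
    global local_log
    if online and local_log:
        payload = json.dumps(local_log)
        print(f"would upload {len(local_log)} events ({len(payload)} bytes)")
        local_log = []  # matches the observed vanishing of local log data

record_event("playback_start", "Peppa Pig")
record_event("playback_stop", "Peppa Pig")
flush_if_online(online=False)  # offline: nothing leaves the device yet
flush_if_online(online=True)   # next connection: buffered data is sent
```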

Companies often claim the collected data helps them optimize their devices. Yet it is far from obvious to users what purpose this data could serve. “The apps bundled with some of these toys demand entirely unnecessary access rights, such as to a smartphone’s location or microphone,” says the researcher. The ChatGPT toy still being analyzed also transmits a data stream that looks like audio. Perhaps the company wants to optimize speech recognition for children’s voices, the Professor of Cyber Security speculates.

A data protection label

“Children’s privacy requires special protection,” emphasizes Julika Feldbusch, first author of the study. She argues that toy manufacturers should place greater weight on privacy and on the security of their products than they currently do in light of their young target audience.

The researchers recommend that compliance with security and data protection standards be identified by a label on the packaging, similar to nutritional information on food items. Currently, it’s too difficult for parents to assess the security risks that smart toys pose to their children.

“We’re already seeing signs of a two-tier society when it comes to privacy protection for children,” says Feldbusch. “Well-informed parents engage with the issue and can choose toys that do not create behavioral profiles of their children. But many lack the technical knowledge or don’t have time to think about this stuff in detail.”

You could argue that individual children probably won’t experience negative consequences due to toy manufacturers creating profiles of them, says Wagner. “But nobody really knows that for sure. For example, constant surveillance can have negative effects on personal development.”

Here’s a link to and a citation for the paper,

No Transparency for Smart Toys by Julika Feldbusch, Valentyna Pavliv, Nima Akbari & Isabel Wagner. In: Privacy Technologies and Policy (Annual Privacy Forum, APF 2024), Lecture Notes in Computer Science, volume 14831, pp. 203–227. First online: 01 August 2024

This paper is behind a paywall.

Metacrime: the line between the virtual and reality

An August 15, 2024 Griffith University (Australia) press release (also on EurekAlert) presents research on a relatively new type of crime, Note: A link has been removed,

If you thought your kids were away from harm playing multi-player games through VR headsets while in their own bedrooms, you may want to sit down to read this.

Griffith University’s Dr Ausma Bernot teamed up with researchers from Monash University, Charles Sturt University and University of Technology Sydney to investigate what has been termed as ‘metacrime’ – attacks, crimes or inappropriate activities that occur within virtual reality environments.

The ‘metaverse’ refers to the virtual world, where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.

While the metaverse can be used for anything from meetings (where it will feel as though you are in the same room as avatars of other people instead of just seeing them on a screen) to wandering through national parks around the world without leaving your living room, gaming is by far its most popular use.   

Dr Bernot said the technology had evolved incredibly quickly.

“Using this technology is super fun and it’s really immersive,” she said.

“You can really lose yourself in those environments.

“Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes.

“While the headsets that enable us to have these experiences aren’t a commonly owned item yet, they’re growing in popularity and we’ve seen reports of sexual harassment or assault against both adults and kids.”

In a December 2023 report, the Australian eSafety Commissioner estimated around 680,000 adults in Australia are engaged in the metaverse.

This followed a survey conducted in November and December 2022 by researchers from the UK’s Center for Countering Digital Hate, who recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular VRChat.

The researchers found most users had been faced with at least one negative experience in the virtual environment, including being called offensive names, receiving repeated unwanted messages or contact, being provoked to respond to something or to start an argument, being challenged about cultural identity or being sent unwanted inappropriate content.

Eleven per cent had been exposed to a sexually graphic virtual space and nine per cent had been touched (virtually) in a way they didn’t like.

Of these respondents, 49 per cent said the experience had a moderate to extreme impact on their mental or emotional wellbeing.

With the two largest user groups being minors and men, Dr Bernot said it was important for parents to monitor their children’s activity or consider limiting their access to multi-player games.

“Minors are more vulnerable to grooming and other abuse,” she said.

“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games, or of course the simple ability to just take the headset off, once immersed in these environments it does feel very real.

“It’s somewhere in between a physical attack and for example, a social media harassment message – you’ll still feel that distress and it can take a significant toll on a user’s wellbeing.

“It is a real and palpable risk.”

Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom where police have launched an investigation into the case of a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to an attack in the physical world.

“Before the emergence of the metaverse we could not have imagined how rape could be virtual,” Mr Zhou said.

“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality.

“While there may not be physical contact, victims – mostly young girls – strongly claim the feeling of victimisation was real.

“Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”

With use of the metaverse expected to grow exponentially in coming years, the research team’s findings highlight a need for metaverse companies to establish clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.

Here’s a link to and a citation for the paper,

Metacrime and Cybercrime: Exploring the Convergence and Divergence in Digital Criminality by You Zhou, Milind Tiwari, Ausma Bernot & Kai Lin. Asian Journal of Criminology 19, 419–439 (2024) DOI: https://doi.org/10.1007/s11417-024-09436-y Published online: 09 August 2024 Issue Date: September 2024

This paper is open access.

Submit abstracts by Jan. 31 for 2025 Governance of Emerging Technologies & Science (GETS) Conference at Arizona State U

This call for abstracts from Arizona State University (ASU) for the Twelfth Annual Governance of Emerging Technologies and Science (GETS) Conference was received via email,

GETS 2025: Call for abstracts

Save the date for the Twelfth Annual Governance of Emerging Technologies and Science Conference, taking place May 19 and 20, 2025 at the Sandra Day O’Connor College of Law at Arizona State University in Phoenix, AZ. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including:

National security
Nanotechnology
Quantum computing
Autonomous vehicles
3D printing
Robotics
Synthetic biology
Gene editing
Artificial intelligence
Biotechnology
Genomics
Internet of things (IoT)
Autonomous weapon systems
Personalized medicine
Neuroscience
Digital health
Human enhancement
Telemedicine
Virtual reality
Blockchain

Call for abstracts: The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested.

  • Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above
  • Abstracts should not exceed 500 words and must contain your name and email address
  • Abstracts must be submitted by Friday, January 31, 2025, to be considered

Submit your abstract

For more information, contact Eric Hitchcock.

Good luck!

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’. The first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1 015001 DOI 10.1088/1748-3190/ac9c3b Published 8 November 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting. The Biohybrid Futures project mentioned in the press release can be found here, and the Rebooting Democracy project (unexpected in the context of an emerging science/technology), which also mentions it, can be found here on the University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).

Protecting your data from Apple is very hard

There has been a lot of talk about Tim Cook (Chief Executive Officer of Apple Inc.) and his push, at Apple and in public policy, for better consumer data privacy. For example, there’s this, from a June 10, 2022 article by Kif Leswing for CNBC,

Key Points

  • Apple CEO Tim Cook said in a letter to Congress that lawmakers should advance privacy legislation that’s currently being debated “as soon as possible.”
  • The bill would give consumers protections and rights dealing with how their data is used online, and would require that companies minimize the amount of data they collect on their users.
  • Apple has long positioned itself as the most privacy-focused company among its tech peers.

Apple has long positioned itself as the most privacy-focused company among its tech peers, and Cook regularly addresses the issue in speeches and meetings. Apple says that its commitment to privacy is a deeply held value by its employees, and often invokes the phrase “privacy is a fundamental human right.”

It’s also strategic for Apple’s hardware business. Legislation that regulates how much data companies collect or how it’s processed plays into Apple’s current privacy features, and could even give Apple a head start against competitors that would need to rebuild their systems to comply with the law.

More recently, with rising concerns regarding artificial intelligence (AI), Apple has rushed to assure customers that their data is still private, from a June 10, 2024 article by Kyle Orland for Ars Technica, Note: Links have been removed,

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

And publicly reviewable server code means experts can “verify this privacy promise.”

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC [Apple’s World Wide Developers Conference] keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”

While the keynote speech was light on details [emphasis mine] for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging [emphasis mine] as it wades into the generative AI space for the first time. We’ll see what security experts have to say [emphasis mine] when these servers and their code are made publicly available in the near future.
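That last cryptographic promise is the interesting part. As a thumbnail of the ‘refuse to talk to an unlogged server’ idea, here’s a minimal sketch in Python; every name in it is hypothetical, and Apple’s actual Private Cloud Compute attestation is far more elaborate than a hash lookup,

import hashlib

# Hypothetical public transparency log: SHA-256 digests of server
# software images that have been published for expert inspection.
PUBLIC_LOG = {
    hashlib.sha256(b"cloud-server-build-2024.06").hexdigest(),
}

def server_measurement(software_image: bytes) -> str:
    # Stand-in for the attestation a server presents to a device.
    return hashlib.sha256(software_image).hexdigest()

def device_will_connect(measurement: str) -> bool:
    # The device refuses any server whose software isn't publicly logged.
    return measurement in PUBLIC_LOG

print(device_will_connect(server_measurement(b"cloud-server-build-2024.06")))  # True
print(device_will_connect(server_measurement(b"unlogged-build")))  # False

The design point, if it works as advertised, is that the gate is checked on the device against a log anyone can audit, rather than resting on trust in the server operator.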

Orland’s caution/suspicion would seem warranted in light of some recent research from scientists in Finland. From an April 3, 2024 Aalto University press release (also on EurekAlert), Note: A link has been removed,

‘Privacy. That’s Apple,’ the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how voluntary third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps; the ones that are pretty much unavoidable on a new device, be it a computer, tablet or mobile phone. The researchers will present their findings in mid-May at the prestigious CHI conference [ACM CHI Conference on Human Factors in Computing Systems, May 11, 2024 – May 16, 2024 in Honolulu, Hawaii], and the peer-reviewed research paper is already available online.

‘We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,’ says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers. [emphasis mine]

‘Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,’ says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

‘The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings – or even both,’ says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

‘It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,’ Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. ‘When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,’ Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data? [emphasis mine]

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalised user experiences, among other things. [emphasis mine]

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects [emphasis mine], as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple [emphasis mine].

So what can be learned from all this – are users ultimately facing an almost impossible task?

‘Unfortunately, that’s one lesson,’ says Lindqvist.
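Lindqvist’s Siri example is worth pausing on, because the confusion comes from two settings being decoupled. Here’s a toy model in Python of the behaviour the researchers describe; the names and defaults are my own invention for illustration, not Apple’s code,

from dataclasses import dataclass, field

@dataclass
class SiriSettings:
    # The switch users actually see: it governs voice control only.
    voice_control_enabled: bool = False
    # Buried per-app flags governing background collection; default on.
    learn_from_app: dict = field(default_factory=dict)

    def collects_data_from(self, app: str) -> bool:
        # Background collection ignores the visible switch; only the
        # buried per-app setting turns it off.
        return self.learn_from_app.get(app, True)

settings = SiriSettings(voice_control_enabled=False)
print(settings.collects_data_from("Maps"))  # True, even with Siri "off"
settings.learn_from_app["Maps"] = False
print(settings.collects_data_from("Maps"))  # False, after the buried opt-out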

I have found two copies of the researchers’ paper. There’s a PDF version on Aalto University’s website that bears this caution,

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail.

Here’s a link to and a citation for the official version of the paper,

Privacy of Default Apps in Apple’s Mobile Ecosystem by Amel Bourdoucen and Janne Lindqvist. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No.: 786, Pages 1–32 DOI: https://doi.org/10.1145/3613904.3642831 Published: 11 May 2024

This paper is open access.