
FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was, and is, catching up. On the plus side, my 2023 backlog (roughly six months of items to be published) was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it’s polluting, it’s affecting the environment, etc. Somehow, the fact that electricity, which underpins so much of our ‘smart’ society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s an October 16, 2023 posting “The cost of building ChatGPT” that gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.
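In case you’re wondering how those gallons translate into swimming pools, here’s a quick back-of-the-envelope check (a minimal Python sketch; the 2.5-million-litre capacity for an Olympic-sized pool is my assumption, not a figure from the report),

```python
# Rough check of the water figures quoted above.
# Assumptions (mine): an Olympic-sized pool holds ~2.5 million litres,
# and 1 litre = 0.264172 US gallons.
LITRES_PER_OLYMPIC_POOL = 2_500_000
GALLONS_PER_LITRE = 0.264172

gallons_2022 = 1_700_000_000  # "nearly 1.7 billion gallons"
litres_2022 = gallons_2022 / GALLONS_PER_LITRE
pools = litres_2022 / LITRES_PER_OLYMPIC_POOL
print(f"{pools:,.0f} Olympic-sized pools")  # ~2,574, i.e. "more than 2,500"

# The reported 34% spike also implies roughly 1.27 billion gallons in 2021.
gallons_2021 = gallons_2022 / 1.34
print(f"implied 2021 consumption: {gallons_2021 / 1e9:.2f} billion gallons")
```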

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.

This next posting would usually be included with my other art/sci postings but it touches on these issues: my October 13, 2023 posting about Toronto’s Art/Sci Salon events includes the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper, “The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions,” co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts that attest to it, with an emphasis on brainlike computing,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative nuclear energy, fusion, which is considered ‘green’ or greener anyway. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation with its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium) was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. To give you an idea of how difficult it was to get attention for this issue, there’s a September 1, 2017 article by John Rasko and Carl Power for the Guardian illustrating the issue. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And, Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more (“The Biggest Lie in Medical History,” 2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose work on the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list,’ started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was art/sci-wise, including events and other projects, until I reviewed this year’s postings. This is a selection from 2023 but there’s a lot more on the blog; just use the search term “art/sci,” “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year I subtitled this section, “Aliens on earth: machinic biology and/or biological machinery?” Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget where military expenditures were going to be increased, something which could have implications for our science and technology research.

Then things changed, as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) online news website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report concerning the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase in interest in commercializing brain and brainlike engineering technologies, as well as more discussion about ethics.

Colonizing the brain?

UNESCO held events such as the one noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report,” and the one noted in my July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology,” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research; see my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone; there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs,” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023,” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though it has a YouTube channel featuring its other events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area, as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting but there is a great deal of interest in quantum computing, both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Consciousness, energy, and matter

Credit: Rice University [downloaded from https://phys.org/news/2023-10-energy-consciousness-physics-thorny-topic.html]

There’s an intriguing approach tying together ideas about consciousness, artificial intelligence, and physics in an October 8, 2023 news item on phys.org,

With the rise of brain-interface technology and artificial intelligence that can imitate brain functions, understanding the nature of consciousness and how it interacts with reality is not just an age-old philosophical question but also a salient challenge for humanity.

An October 9, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert but published on October 8, 2023), which originated the news item, delves further into the subject matter, Note: Links have been removed,

Can AI become conscious, and how would we know? Should we incorporate human or animal cells, such as neurons, into machines and robots? Would they be conscious and have subjective experiences? Does consciousness reduce to physicalism, or is it fundamental? And if machine-brain interaction influenced you to commit a crime, or caused a crime, would you be responsible beyond a reasonable doubt? Do we have a free will?

AI and computer science specialist Dr Mahendra Samarawickrama, winner of the Australian Computer Society’s Information and Communications Technology (ICT) Professional of the year, has applied his knowledge of physics and artificial neural networks to this thorny topic.

He presented a peer-reviewed paper on fundamental physics and consciousness at the 11th International Conference on Mathematical Modelling in Physical Sciences, Unifying Matter, Energy and Consciousness, which has just been published in the AIP (the American Institute of Physics) Conference Proceedings. 

“Consciousness is an evolving topic connected to physics, engineering, neuroscience and many other fields. Understanding the interplay between consciousness, energy and matter could bring important insights to our fundamental understanding of reality,” said Dr Samarawickrama.

“Einstein’s dream of a unified theory is a quest that occupies the minds of many theoretical physicists and engineers. Some solutions completely change existing frameworks, which increases complexity and creates more problems than it solves.

“My theory brings the notion of consciousness to fundamental physics such that it complements the current physics models and explains the time, causality, and interplay of consciousness, energy and matter.

“I propose that consciousness is a high-speed sequential flow of awareness subjected to relativity. The quantised energy of consciousness can interplay with matter creating reality while adhering to laws of physics, including quantum physics and relativity.

“Awareness can be seen in life, AI and even physical realities like entangled particles. Studying consciousness helps us be aware of and differentiate realities that exist in nature,” he said. 

Dr Samarawickrama is an honorary Visiting Scholar in the School of Computer Science at the University of Technology Sydney, where he has contributed to UTS research on data science and AI, focusing on social impact.

“Research in this field could pave the way towards the development of conscious AI, with robots that are aware and have the ability to think becoming a reality. We want to ensure that artificial intelligence is ethical and responsible in emerging solutions,” Dr Samarawickrama said.

Here’s a link to and a citation for the paper Samarawickrama presented at the 11th International Conference on Mathematical Modelling in Physical Sciences, Unifying Matter, Energy and Consciousness,

Unifying matter, energy and consciousness by Mahendra Samarawickrama. AIP Conf. Proc. Volume 2872, Issue 1, 28 September 2023, 110001 (2023) DOI: https://doi.org/10.1063/5.0162815

This paper is open access.

The researcher has made a video of his presentation and further information available,

It’s a little bit over my head but hopefully repeated viewings and readings will help me better understand Dr. Samarawickrama’s work.

AI-led corporate entities as a new species of legal subject

An AI (artificial intelligence) agent running a business? Not to worry, lawyers are busy figuring out the implications according to this October 26, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

For the first time in human history, say Daniel Gervais and John Nay in a Policy Forum, nonhuman entities that are not directed by humans – such as artificial intelligence (AI)-operated corporations – should enter the legal system as a new “species” of legal subject. AI has evolved to the point where it could function as a legal subject with rights and obligations, say the authors. As such, before the issue becomes too complex and difficult to disentangle, “interspecific” legal frameworks need to be developed by which AI can be treated as legal subjects, they write. Until now, the legal system has been univocal – it allows only humans to speak to its design and use. Nonhuman legal subjects like animals have necessarily instantiated their rights through human proxies. However, their inclusion is less about defining and protecting the rights and responsibilities of these nonhuman subjects and more a vehicle for addressing human interests and obligations as it relates to them. In the United States, corporations are recognized as “artificial persons” within the legal system. However, the laws of some jurisdictions do not always explicitly require corporate entities to have human owners or managers at their helm. Thus, by law, nothing generally prevents an AI from operating a corporate entity. Here, Gervais and Nay highlight the rapidly realizing concept of AI-operated “zero-member LLCs” – or a corporate entity operating autonomously without any direct human involvement in the process. The authors discuss several pathways in which such AI-operated LLCs and their actions could be handled within the legal system. As the idea of ceasing AI development and use is highly unrealistic, Gervais and Nay discuss other options, including regulating AI by treating the machines as legally inferior to humans or engineering AI systems to be law-abiding and bringing them into the legal fold now before it becomes too complicated to do so.

Gervais and Nay have written an October 26, 2023 essay “AIs could soon run businesses – it’s an opportunity to ensure these ‘artificial persons’ follow the law” for The Conversation, which helps clarify matters, Note: Links have been removed,

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons.” These include corporations, nonprofit organizations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems, and they won’t necessarily serve human interests.

As scholars who study AI and law we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviors, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several U.S. states do not require that humans oversee the operations of an LLC. In fact, in some states it is possible to have an LLC with no human owner, or “member” [emphasis mine] – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility for zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays and do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.

I found the essay illuminating and the abstract for the paper (link and citation for the paper at the end of this post) a little surprising,

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” [emphasis mine] at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

It was a little unexpected to see the ‘singularity’ mentioned. It’s a term I associate with the tech and sci-fi communities. For anyone unfamiliar with the term, here’s a description from the ‘Technological singularity’ Wikipedia entry, Note: Links have been removed,

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, predicting singularity by 2045.[7]

Finally, here’s a link to and a citation for the paper,

Law could recognize nonhuman AI-led corporate entities by Daniel J. Gervais and John J. Nay. Science 26 Oct 2023 Vol 382, Issue 6669 pp. 376-378 DOI: 10.1126/science.adi8678

This paper is behind a paywall.

AI for salmon recovery

Hopefully you won’t be subjected to a commercial prior to this 3 min. 49 sec. video about salmon and how artificial intelligence (AI) could make a difference to their continued survival, and ours,

Video caption: Wild Salmon Center is partnering with First Nations to pilot the Salmon Vision technology. (Credit: Olivia Leigh Nowak/Le Colibri Studio.)

An October 19, 2023 news item on phys.org announces this research, Note: Links have been removed,

Scientists and natural resource managers from Canadian First Nations, governments, academic institutions, and conservation organizations published the first results of a unique salmon population monitoring tool in Frontiers in Marine Science.

This groundbreaking new technology, dubbed “Salmon Vision,” combines artificial intelligence with age-old fishing weir technology. Early assessments show it to be remarkably adept at identifying and counting fish species, potentially enabling real-time salmon population monitoring for fisheries managers.

An October 19, 2023 Wild Salmon Center news release on EurekAlert, which originated the news item, provides more detail about the work,

“In recent years, we’ve seen the promise of underwater video technology to help us literally see salmon return to rivers,” says lead author Dr. Will Atlas, Senior Watershed Scientist with the Portland-based Wild Salmon Center. “That dovetails with what many of our First Nations partners are telling us: that we need to automate fish counting to make informed decisions while salmon are still running.” 

The Salmon Vision pilot study annotates more than 500,000 individual video frames captured at two Indigenous-run fish counting weirs on the Kitwanga and Bear Rivers of B.C.’s Central Coast. 

The first-of-its-kind deep learning computer model, developed in data partnership with the Gitanyow Fisheries Authority and Skeena Fisheries Commission, shows promising accuracy in identifying salmon species. It yielded mean average precision rates of 67.6 percent in tracking 12 different fish species passing through custom fish-counting boxes at the two weirs, with scores surpassing 90 and 80 percent for coho and sockeye salmon: two of the principal fish species targeted by First Nations, commercial, and recreational fishers. 

“When we envisioned providing fast grants for projects focused on Indigenous futurism and climate resilience, this is the type of project that we hoped would come our way,” says Dr. Keolu Fox, a professor at the University of California-San Diego, and one of several reviewers in an early crowdfunding round for the development of Salmon Vision. 

Collaborators on the model, funded by the British Columbia Salmon Recovery and Innovation Fund, include researchers and fisheries managers with Simon Fraser University and Douglas College computing sciences, the Pacific Salmon Foundation, Gitanyow Fisheries Authority, and the Skeena Fisheries Commission. Following these exciting early results, the next step is to expand the model with partner First Nations into a half-dozen new watersheds on B.C.’s North and Central Coast.

Real-time data on salmon returns is critical on several fronts. According to Dr. Atlas, many fisheries in British Columbia have been data-poor for decades. That leaves fisheries managers to base harvest numbers on early-season catch data, rather than the true number of salmon returning. Meanwhile, changing weather patterns, stream flows, and ocean conditions are creating more variable salmon returns: uncertainty that compounds the ongoing risks of overfishing already-vulnerable populations.

“Without real-time data on salmon returns, it’s extremely difficult to build climate-smart, responsive fisheries,” says Dr. Atlas. “Salmon Vision data collection and analysis can fill that information gap.” 

It’s a tool that he says will be invaluable to First Nation fisheries managers and other organizations both at the decision-making table—in providing better information to manage conservation risks and fishing opportunities—and in remote rivers across salmon country, where on-the-ground data collection is challenging and costly. 

The Salmon Vision team is implementing automated counting on a trial basis in several rivers around the B.C. North and Central Coasts in 2023. The goal is to provide reliable real-time count data by 2024.
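For anyone wondering what “mean average precision” means in practice, here’s a minimal sketch of the idea (my own illustration with made-up numbers, not the Salmon Vision code): the model earns an average precision (AP) score for each species — roughly, the area under that species’ precision-recall curve — and the mean of those per-species scores is the mAP,

```python
# Toy illustration of mean average precision (mAP) across species.
# The per-species AP values are invented for illustration; they are not the
# Salmon Vision results, beyond echoing the idea that coho and sockeye
# scored higher than the overall mean.
per_species_ap = {
    "coho": 0.92,
    "sockeye": 0.83,
    "chinook": 0.71,
    "pink": 0.64,
    "chum": 0.58,
    "steelhead": 0.55,
    # ... the real model tracked 12 species in total
}

mean_ap = sum(per_species_ap.values()) / len(per_species_ap)
print(f"mAP across {len(per_species_ap)} species: {mean_ap:.1%}")
```

The point is simply that the overall number is an average; individual species (like coho and sockeye in the study) can score well above it.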

This October 18, 2023 article by Ramona DeNies for the Wild Salmon Center (WSC) is nicely written although it does cover some of the same material seen in the news release, Note: A link has been removed,

Right now, in rivers across British Columbia’s Central Coast, we don’t know how many salmon are actually returning. At least, not until fishing seasons are over.

And yet, fisheries managers still have to make decisions. They have to make forecasts, modeled on data from the past. They have to set harvest targets for commercial and recreational fisheries. And increasingly, they have to make the call on emergency closures, when things start looking grim.

“On the north and central coast of BC, we’ve seen really wildly variable returns of salmon over the last decade,” says Dr. Will Atlas, Wild Salmon Center Senior Watershed Scientist. “With accelerating climate change, every year is unprecedented now. Yet from a fisheries management perspective, we’re still going into most seasons assuming that this year will look like the past.”

One answer, Dr. Atlas says, is “Salmon Vision.” Results from this first-of-its-kind technology—developed by WSC in data partnership with the Gitanyow Fisheries Authority and Skeena Fisheries Commission—were recently published in Frontiers in Marine Science.

There are embedded images in DeNies’ October 18, 2023 article; it’s where I found the video.

Here’s a link to and a citation for the paper,

Wild salmon enumeration and monitoring using deep learning empowered detection and tracking by William I. Atlas, Sami Ma, Yi Ching Chou, Katrina Connors, Daniel Scurfield, Brandon Nam, Xiaoqiang Ma, Mark Cleveland, Janvier Doire, Jonathan W. Moore, Ryan Shea, Jiangchuan Liu. Front. Mar. Sci., 20 September 2023 Volume 10 – 2023 DOI: https://doi.org/10.3389/fmars.2023.1200408

This paper appears to be open access.

Youthful Canadian inventors win awards

Two teenagers stand next to each other displaying their inventions. One holds a laptop, while the other holds a wireless headset.
Vinny Gu, left, and Anush Mutyala, right, hope to continue to work to improve their inventions. (Niza Lyapa Nondo/CBC)

This November 28, 2023 article by Philip Drost for the Canadian Broadcasting Corporation’s (CBC) The Current radio programme highlights two youthful inventors, Note: Links have been removed,

Anush Mutyala [emphasis mine] may only be in Grade 12, but he already has hopes that his innovations and inventions will rival that of Elon Musk.

“I always tell my friends something that would be funny is if I’m competing head-to-head with Elon Musk in the race to getting people [neural] implants,” Mutyala told Matt Galloway on The Current

Mutyala, a student at Chinguacousy Secondary School in Brampton, Ont., created a brain imaging system that he says opens the future for permanent wireless neural implants. 

For his work, he received an award from Youth Science Canada at the National Fair in 2023, which highlights young people pushing innovation. 

Mutyala wanted to create a way for neural implants to last longer. Implants can help people hear better, or move parts of the body they otherwise couldn’t, but neural implants in particular face issues with regard to power consumption, and traditionally must be replaced by surgery after their batteries die. That can be every five years. 

But Mutyala thinks his system, Enerspike, can change that. The algorithm he designed lowers the energy consumption needed for implants to process and translate brain signals into making a limb move.

“You would essentially never need to replace wireless implants again for the purpose of battery replacement,” said Mutyala. 

Mutyala was inspired by Stephen Hawking, who famously spoke with the use of a speech synthesizer.

“What if we used technology like this and we were able to restore his complete communication ability? He would have been able to communicate at a much faster rate and he would have had a much greater impact on society,” said Mutyala. 

… Mutyala isn’t the only innovator. Vinny Gu [emphasis mine], a Grade 11 student at Markville Secondary School in Markham, Ont., also received an award for creating DermaScan, an online application that can look at a photo and predict whether the person photographed has skin cancer or not.

“There has [sic] been some attempts at this problem in the past. However, they usually result in very low accuracy. However, I incorporated a technology to help my model better detect the minor small details in the image in order for it to get a better prediction,” said Gu. 

He says it doesn’t replace visiting a dermatologist — but it can give people an option to do pre-screenings with ease, which can help them decide if they need to go see a dermatologist. He says his model is 90-per-cent accurate. 

He is currently testing Dermascan, and he hopes to one day make it available for free to anyone who needs it. 
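Gu’s DermaScan code doesn’t appear to be public, but for anyone curious about what an application of this general type looks like under the hood, here’s a minimal sketch of a two-class skin-photo classifier (the library, pretrained backbone, and labels are my illustrative choices; I don’t know what Gu actually used),

```python
# Minimal sketch of a skin-lesion photo classifier of the general kind
# described above. This is NOT DermaScan; the architecture, library, and
# labels are illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a pretrained backbone and replace the final layer with a
# two-class head: "likely benign" vs "suspicious".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(image_path: str) -> str:
    """Return a label for a single photo (only meaningful after training)."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    return "suspicious — see a dermatologist" if probs[1] > 0.5 else "likely benign"
```

A model like this only becomes useful after training on labelled skin-lesion images and, as Gu says, it would be a pre-screening aid rather than a replacement for a dermatologist.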

Drost’s November 28, 2023 article hosts an embedded audio file of the radio interview and more.

You can find out about Anush Mutyala and his work on his LinkedIn profile (in addition to being a high school student, since October 2023, he’s also a neuromorphics researcher at York University). If my link to his profile fails, search Mutyala’s name online and access his public page at the LinkedIn website. There’s something else: Mutyala has an eponymous website.

My online searches for more about Vinny (or Vincent) Gu were not successful.

You can find a bit more information about Mutyala’s Enerspike here and Gu’s DermaScan here. Youth Science Canada can be found here.

Not to forget, there’s grade nine student Arushi Nath and her work on planetary defence, which is being recognized in a number of ways. (See my November 17, 2023 posting, Arushi Nath gives the inside story about the 2023 Natural Sciences and Engineering Research Council of Canada (NSERC) Awards and my November 23, 2023 posting, Margot Lee Shetterly [Hidden Figures author] in Toronto, Canada and a little more STEM [science, technology, engineering, and mathematics] information.) I must say November 2023 has been quite the banner month for youth science in Canada.

An artificial, multisensory integrated neuron makes AI (artificial intelligence) smarter

More brainlike (neuromorphic) computing but this time, it’s all about the senses. From a September 15, 2023 news item on ScienceDaily, Note: A link has been removed,

The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.

Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work today (Sept. 15 [2023]) in Nature Communications.

A September 12, 2023 Pennsylvania State University (Penn State) news release (also on EurekAlert but published September 15, 2023) by Ashley WennersHerron, which originated the news item, provides more detail about the research,

“Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

“This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It’s the equivalent of seeing an “on” light on the stove and feeling heat coming off of a burner — seeing the light on doesn’t necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand’s response. In this case, the researchers measured the artificial neuron’s version of this by seeing signaling outputs resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimuli was encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor — or a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron — simulated as electrical output — increased when both visual and tactile signals were weak.

“Interestingly, this effect resonates remarkably well with its biological counterpart — a visual memory naturally enhances the sensitivity to tactile stimulus,” said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. “When cues are weak, you need to combine them to better understand the information, and that’s what we saw in the results.”

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

“The super additive summation of weak visual and tactile cues is the key accomplishment of our research,” said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. “For this work, we only looked into two senses. We’re working to identify the proper scenario to incorporate more senses and see what benefits they may offer.”

Harikrishnan Ravichandran, a fourth-year doctoral student in engineering science and mechanics at Penn State, also co-authored this paper.

The Army Research Office and the National Science Foundation supported this work.
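To make “super-additive summation” a little more concrete, here’s a toy numerical sketch (entirely my own invention, with made-up numbers and a made-up response function, not the device physics in the paper). The idea is that when both cues are weak, the combined response is larger than the sum of the two responses taken separately, and the relative boost shrinks as the cues get stronger,

```python
import math

def response(visual: float, tactile: float, coupling: float = 4.0) -> float:
    """Toy saturating 'neuron' output for visual plus tactile drive.
    The coupling term lets the two cues reinforce each other; it is an
    invented stand-in for the device behaviour, not the real physics."""
    drive = visual + tactile + coupling * visual * tactile
    return math.tanh(drive)  # saturating output

for v, t in [(0.1, 0.1), (0.5, 0.5)]:
    combined = response(v, t)
    separate = response(v, 0.0) + response(0.0, t)
    print(f"cues {v=}, {t=}: combined={combined:.3f}, "
          f"sum of separate responses={separate:.3f}, "
          f"ratio={combined / separate:.2f}x")
```

Running it gives a combined response roughly 1.2x the sum of the separate responses for weak cues, dropping to about 1.04x for strong ones — the flavour of the effect the Penn State team reports, not their actual numbers.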

Here’s a link to and a citation for the paper,

A bio-inspired visuotactile neuron for multisensory integration by Muhtasim Ul Karim Sadaf, Najam U Sakib, Andrew Pannone, Harikrishnan Ravichandran & Saptarshi Das. Nature Communications volume 14, Article number: 5729 (2023) DOI: https://doi.org/10.1038/s41467-023-40686-z Published: 15 September 2023

This paper is open access.

AI incites hiring of poets

This is not the first time that I’ve come across information such as this. According to a September 28, 2023 posting by Karl Bode for the TechDirt website, companies in Silicon Valley (California, US) are hiring poets (and other writers) to help train AI (artificial intelligence), Note: Links have been removed,

… however much AI hype-men would like to pretend AI makes human beings irrelevant, they remain essential for the underlying illusion and reality to function. As such, a growing number of Silicon Valley companies are increasingly hiring poets, English PhDs, and other writers to write short stories for LLMs [large language models] to train on in a bid to improve the quality of their electro-mimics:

“A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.”

So it’s clear we still have a long way to go before these technologies actually get anywhere close to matching both the hype and employment apocalypse many predicted. LLMs are effectively mimics that create from what already exists. Since it’s not real artificial intelligence, it’s still not actually capable of true creativity:

“They are trained to reproduce. They are not designed to be great, they try to be as close as possible to what exists,” Fabricio Goes, who teaches informatics at the University of Leicester, told Rest of World, explaining a popular stance among AI researchers. “So, by design, many people argue that those systems are not creative.”

The problem remains that while the underlying technology will continuously improve, the folks rushing to implement it without thinking likely won’t. Most seem dead set on using AI primarily as a bludgeon against labor in the hopes the public won’t notice the drop in quality, and professional writers, editors, and creatives won’t mind increasingly lower pay and tenuous position in the food chain.

In the last paragraph, Bode appears to be alluding to the Writers Guild of America strike (known popularly as the Hollywood writers strike), which ended on September 26, 2023 (for more details, see this September 26, 2023 article by Samantha Delouya for CNN).

Four years ago, I used this head, “Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?” for a more in-depth look at how AI is overhyped; see my September 24, 2019 posting.

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead) for the first half.

Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chang and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. [emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrasts to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit, a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity” and another November 1, 2023 article this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework’.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she, or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.

Machine decision-making (artificial intelligence) in British Columbia’s government (Canada)

Jeremy Hainsworth’s September 19, 2023 article on the Vancouver is Awesome website was like a dash of cold water. I had no idea that plans for using AI (artificial intelligence) in municipal administration were so far advanced (although I did cover this AI development, “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction” in a November 23, 2017 posting). From Hainsworth’s September 19, 2023 article, Note: A link has been removed,

Human discretion and the ability to follow decision-making must remain top of mind employing artificial intelligence (AI) to providing public services, Union of BC Municipalities conference delegates heard Sept. 19 [2023].

And, delegates heard from Office of the Ombudsperson of B.C. representatives, decisions made by machines must be fair and transparent.

“This is the way of the future — using AI systems for delivering municipal services,” Zoë Macmillan, office manager of investigations, health and local services.

The risk in getting it wrong on fairness and privacy issues, said Wendy Byrne, office consultation and training officer, is a loss of trust in government.

It’s an issue the office has addressed itself, due to the impacts automated decision-making could have on British Columbians, in terms of the fairness they receive around public services. The issue has been covered in a June 2021 report, Getting Ahead of the Curve [emphasis mine]. The work was done jointly with B.C.’s Office of the Information and Privacy Commissioner.

And, said office representatives, there also needs to be AI decision-making trails that can be audited when it comes to transparency in decision-making and for people appealing decisions made by machines.

She [Zoë Macmillan] said many B.C. communities are on the verge of implementing AI for providing citizens with services. In Vancouver and Kelowna, AI is already being used [emphasis mine] in some permitting systems.

The public, meanwhile, needs to be aware when an automated decision-making system is assisting them with an issue, she [Wendy Byrne] noted.

It’s not clear from Hainsworth’s article excerpts seen here, but the report, “Getting Ahead of the Curve,” was a joint Yukon and British Columbia (BC) effort. Here’s a link to the report (PDF) and an excerpt, Note: I’d call this an executive summary,

Message from the Officers

With the proliferation of instantaneous and personalized services increasingly being delivered to people in many areas in the private sector, the public is increasingly expecting the same approach when receiving government services. Artificial intelligence (AI) is touted as an effective, efficient and cost-saving solution to these growing expectations. However, ethical and legal concerns are being raised as governments in Canada and abroad are experimenting with AI technologies in decision-making under inadequate regulation and, at times, in a less than transparent manner.

As public service oversight officials upholding the privacy and fairness rights of citizens, it is our responsibility to be closely acquainted with emerging issues that threaten those rights. There is no timelier an issue that intersects with our respective mandates as privacy commissioners and ombudsman, than the increasing use of artificial intelligence by the governments and public bodies we oversee.

The digital era has brought swift and significant change to the delivery of public services. The benefits of providing the public with increasingly convenient and timely service has spurred a range of computer-based platforms, from digital assistants to automated systems of approval for a range of services – building permits, inmate releases, social assistance applications, and car insurance premiums [emphasis mine] to name a few. While this kind of machine-based service delivery was once narrowly applied in the public sector, the use of artificial intelligence by the public sector is gaining a stronger foothold in countries around the world, including here in Canada. As public bodies become larger and more complex, the perceived benefits of efficiency, accessibility and accuracy of algorithms to make decisions once made by humans, can be initially challenging to refute.

Fairness and privacy issues resulting from the use of AI are well documented, with many commercial facial recognition systems and assessment tools demonstrating bias and augmenting the ability to use personal information in ways that infringe privacy interests. Similar privacy and fairness issues are raised by the use of AI in government. People often have no choice but to interact with government and the decisions of government can have serious, long-lasting impacts on our lives. A failure to consider how AI technologies create tension with the fairness and privacy obligations of democratic institutions poses risks for the public and undermines trust in government.

In examining examples of how these algorithms have been used in practice, this report demonstrates that there are serious legal and ethical concerns for public sector administrators. Key privacy concerns relate to the lack of transparency of closed proprietary systems that prove challenging to review, test and monitor. Current privacy laws do not contemplate the use of AI and as such lack obligations for key imperatives around the collection and use of personal information in machine-based systems. From a fairness perspective, the use of AI in the public sector challenges key pillars of administrative fairness. For example, how algorithmic decisions are made, explained, reviewed or appealed, and how bias is prevented all present challenging questions.

As the application of AI in public administration continues to gain momentum, the intent of this report is to provide both important context regarding the challenges AI presents in public sector decision-making, as well as practical recommendations that aim to set consistent parameters for transparency, accountability, legality and procedural fairness for AI’s use by public bodies. The critically important values of privacy protection and administrative fairness cannot be left behind as the field of AI continues to evolve and these principles must be more expressly articulated in legislation, policy and applicable procedural applications moving forward.

This joint report urges governments to respect and fulfill fairness and privacy principles in their adoption of AI technologies. It builds on extensive literature on public sector AI by providing concrete, technology-sensitive, implementable guidance on building fairness and privacy into public sector AI. The report also recommends capacity-building, co-operation and public engagement initiatives government should undertake to promote the public’s trust and buy-in of AI.

This report pinpoints the persistent challenges with AI that merit attention from a fairness and privacy perspective; identifies where existing regulatory measures and instruments for administrative fairness and privacy protection in the age of AI fall short and where they need to be enhanced; and sets out detailed, implementable guidance on incorporating administrative fairness and privacy principles across the various stages of the AI lifecycle, from inception and design, to testing, implementation and mainstreaming.

The final chapter contains our recommendations for the development of a framework to facilitate the responsible use of AI systems by governments. Our recommendations include:

– The need for public authorities to make a public commitment to guiding principles for the use of AI that incorporate transparency, accountability, legality, procedural fairness and the protection of privacy. These principles should apply to all existing and new programs or activities, be included in any tendering documents by public authorities for third-party contracts or AI systems delivered by service providers, and be used to assess legacy projects so they are brought into compliance within a reasonable timeframe.

– The need for public authorities to notify an individual when an AI system is used to make a decision about them and describe to the individual in a way that is understandable how that system operates.

– Government promote capacity building, co-operation, and public engagement on AI. This should be carried out through public education initiatives, building subject-matter knowledge and expertise on AI across government ministries, developing capacity to support knowledge sharing and expertise between government and AI developers and vendors, and establishing or growing the capacity to develop open-source, high-quality data sets for training and testing Automated Decision Systems (ADS).

– Requiring all public authorities to complete and submit an Artificial Intelligence Fairness and Privacy Impact Assessment (AIFPIA) for all existing and future AI programs for review by the relevant oversight body.

– Special rules or restrictions for the use of highly sensitive information by AI.

… [pp. 1-3]

These are the contributors to the report: Alexander Agnello: Policy Analyst, B.C. Office of the Ombudsperson; Ethan Plato: Policy Analyst, B.C. Office of the Information and Privacy Commissioner; and Sebastian Paauwe: Investigator and Compliance Review Officer, Office of the Yukon Ombudsman and Information and Privacy Commissioner.

A bit startling to see how pervasive “… automated systems of approval for a range of services – building permits, inmate releases, social assistance applications, and car insurance premiums …” are already. Not sure I’d call this 60 pp. report “Getting Ahead of the Curve” (PDF). It seems more like it was catching up—even in 2021.

Finally, there’s my October 27, 2023 post about the 2023 Canadian Science Policy Conference highlighting a few of the sessions. Scroll down to the second session, “901 – The new challenges of information in parliaments“, where you’ll find this,

… This panel proposes an overview … including a discussion on emerging issues impacting them, such as the integration of artificial intelligence and the risks of digital interference in democratic processes.

Interesting, eh?

Artificial intelligence (AI) with the ability to look inward performs better

An August 31, 2023 news item on ScienceDaily highlights the power of an introspective AI,

An artificial intelligence with the ability to look inward and fine tune its own neural network performs better when it chooses diversity over lack of diversity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.

“We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”

An August 31, 2023 North Carolina State University (NCSU) news release (also on EurekAlert), describes how an AI can become ‘introspective’ and employ neural ‘diversity’, Note: A link has been removed,

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.

Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network.

Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.

“Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.

“Our AI could also decide between diverse or homogenous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”

The team tested the AI’s accuracy by asking it to perform a standard numerical classifying exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogenous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy.

According to Ditto, the diversity-based AI is up to 10 times more accurate than conventional AI in solving more complicated problems, such as predicting a pendulum’s swing or the motion of galaxies.

“We have shown that if you give an AI the ability to look inward and learn how it learns it will change its internal structure – the structure of its artificial neurons – to embrace diversity and improve its ability to learn and solve problems efficiently and more accurately,” Ditto says. “Indeed, we also observed that as the problems become more complex and chaotic the performance improves even more dramatically over an AI that does not embrace diversity.”

The research appears in Scientific Reports, and was supported by the Office of Naval Research (under grant N00014-16-1-3066) and by United Therapeutics. Former post-doctoral researcher Anshul Choudhary is first author. John Lindner, visiting professor and emeritus professor of physics at the College of Wooster, NC State graduate student Anil Radhakrishnan and Sudeshna Sinha, professor of physics at the Indian Institute of Science Education and Research Mohali, also contributed to the work.
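For readers who want a rough, hands-on sense of what “adjusting weights and biases” and comparing homogeneous versus “diverse” neurons might look like, here’s a minimal sketch of my own. To be clear, this is not the NC State team’s code or method (their AI chooses the number, shape, and connection strengths of its neurons as it learns); this toy simply trains two tiny networks by gradient descent, one with a uniform tanh hidden layer and one with a mixed tanh/ReLU layer, on an invented task. The task, layer sizes, learning rate, and the tanh/ReLU mix are all assumptions made purely for illustration.

```python
# A toy illustration (my own, not the researchers' code) of two ideas from the
# NC State news release: training "adjusts weights and biases" to reduce error,
# and a hidden layer built from a mix of neuron types can be compared against a
# homogeneous one. Everything here (task, sizes, learning rate, tanh/ReLU mix)
# is an assumption made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Invented task: is a 2-D point inside a circle of radius sqrt(0.5)?
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

def act(z, kinds):
    # Per-neuron activation: 't' -> tanh, 'r' -> ReLU.
    return np.where(kinds == "t", np.tanh(z), np.maximum(z, 0.0))

def act_grad(z, kinds):
    # Derivative of each neuron's activation with respect to its input.
    return np.where(kinds == "t", 1.0 - np.tanh(z) ** 2, (z > 0).astype(float))

def train_and_score(kinds, lr=1.0, epochs=3000):
    hidden = len(kinds)
    W1 = rng.normal(0, 0.5, size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        z1 = X_train @ W1 + b1                      # hidden pre-activations
        h = act(z1, kinds)                          # mixed or uniform neurons
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
        # Backpropagation: see how far off the guesses are, then nudge the
        # weights and biases a little closer to reality.
        g_out = (p - y_train) / len(X_train)
        g_W2, g_b2 = h.T @ g_out, g_out.sum(axis=0)
        g_z1 = (g_out @ W2.T) * act_grad(z1, kinds)
        g_W1, g_b1 = X_train.T @ g_z1, g_z1.sum(axis=0)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2
    # Score on held-out points.
    h_test = act(X_test @ W1 + b1, kinds)
    p_test = 1.0 / (1.0 + np.exp(-(h_test @ W2 + b2)))
    return ((p_test > 0.5) == (y_test > 0.5)).mean()

uniform = np.array(["t"] * 16)    # homogeneous hidden layer: all tanh
mixed = np.array(["t", "r"] * 8)  # "diverse" hidden layer: half tanh, half ReLU
print("uniform (all tanh) test accuracy:", train_and_score(uniform))
print("mixed (tanh + ReLU) test accuracy:", train_and_score(mixed))
```

Again, this doesn’t reproduce the paper’s meta-learning loop, where the network reshapes its own neurons; it only shows the basic training mechanics the news release describes and one crude way to compare a uniform layer against a mixed one.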

Here’s a link to and a citation for the paper,

Neuronal diversity can improve machine learning for physics and beyond by Anshul Choudhary, Anil Radhakrishnan, John F. Lindner, Sudeshna Sinha & William L. Ditto. Scientific Reports volume 13, Article number: 13962 (2023) DOI: https://doi.org/10.1038/s41598-023-40766-6 Published: 26 August 2023

This paper is open access.