Tag Archives: Ray Kurzweil

AI-led corporate entities as a new species of legal subject

An AI (artificial intelligence) agent running a business? Not to worry: lawyers are busy figuring out the implications, according to this October 26, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

For the first time in human history, say Daniel Gervais and John Nay in a Policy Forum, nonhuman entities that are not directed by humans – such as artificial intelligence (AI)-operated corporations – should enter the legal system as a new “species” of legal subject. AI has evolved to the point where it could function as a legal subject with rights and obligations, say the authors. As such, before the issue becomes too complex and difficult to disentangle, “interspecific” legal frameworks need to be developed by which AI can be treated as legal subjects, they write.

Until now, the legal system has been univocal – it allows only humans to speak to its design and use. Nonhuman legal subjects like animals have necessarily instantiated their rights through human proxies. However, their inclusion is less about defining and protecting the rights and responsibilities of these nonhuman subjects and more a vehicle for addressing human interests and obligations as it relates to them.

In the United States, corporations are recognized as “artificial persons” within the legal system. However, the laws of some jurisdictions do not always explicitly require corporate entities to have human owners or managers at their helm. Thus, by law, nothing generally prevents an AI from operating a corporate entity. Here, Gervais and Nay highlight the rapidly realizing concept of AI-operated “zero-member LLCs” – or a corporate entity operating autonomously without any direct human involvement in the process.

The authors discuss several pathways in which such AI-operated LLCs and their actions could be handled within the legal system. As the idea of ceasing AI development and use is highly unrealistic, Gervais and Nay discuss other options, including regulating AI by treating the machines as legally inferior to humans or engineering AI systems to be law-abiding and bringing them into the legal fold now before it becomes too complicated to do so.

Gervais and Nay have written an October 26, 2023 essay “AIs could soon run businesses – it’s an opportunity to ensure these ‘artificial persons’ follow the law” for The Conversation, which helps clarify matters, Note: Links have been removed,

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons.” These include corporations, nonprofit organizations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems, and they won’t necessarily serve human interests.

As scholars who study AI and law we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviors, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several U.S. states do not require that humans oversee the operations of an LLC. In fact, in some states it is possible to have an LLC with no human owner, or “member” [emphasis mine] – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility for zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays and do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.

I found the essay illuminating and the abstract for the paper (link to and citation for the paper at the end of this post) a little surprising,

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” [emphasis mine] at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

It was a little unexpected to see the ‘singularity’ mentioned; it’s a term I associate with the tech and sci-fi communities. For anyone unfamiliar with the term, here’s a description from the ‘Technological singularity’ Wikipedia entry, Note: Links have been removed,

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, predicting singularity by 2045.[7]

Finally, here’s a link to and a citation for the paper,

Law could recognize nonhuman AI-led corporate entities by Daniel J. Gervais and John J. Nay. Science, 26 Oct 2023, Vol. 382, Issue 6669, pp. 376-378. DOI: 10.1126/science.adi8678

This paper is behind a paywall.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and the transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

Programme host David Suzuki’s script was well written, and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts, who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It’s an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman with an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: she described a chat where her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over whom you have control. But perhaps that’s alright, since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?”, we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information on Ahmed Elgammal’s [Director of the Art & AI Lab at Rutgers University] technical perspective on the project.)

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI experts, and musicians collaborated to finish it.

The one listener (Felix Mayer, music professor at the Technical University of Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities from which an algorithm chooses which note comes next.
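For readers who want to see just how mechanical that kind of guesswork can be, here’s a minimal sketch (in Python) of next-note generation with a first-order Markov chain. To be clear, this is my own toy illustration of the principle, not the Beethoven X team’s actual system, which was far more sophisticated; the melody fragment is made up.

```python
import random
from collections import defaultdict

def train(melody):
    """Count how often each note follows each other note."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(melody, melody[1:]):
        counts[current][nxt] += 1
    return counts

def next_note(counts, current):
    """Sample the next note in proportion to how often it followed before."""
    candidates = counts[current]
    notes = list(candidates)
    weights = [candidates[n] for n in notes]
    return random.choices(notes, weights=weights, k=1)[0]

# Hypothetical training data: a fragment written as note names.
fragment = ["E", "E", "F", "G", "G", "F", "E", "D", "C", "C", "D", "E", "E", "D", "D"]
model = train(fragment)

note = "E"
generated = [note]
for _ in range(8):
    note = next_note(model, note)
    generated.append(note)
print(" ".join(generated))
```

An algorithm like this never ‘composes’; it only re-weights what it has already seen, which is Mayer’s point in miniature.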

Another artist was also represented in the programme: puzzlingly, the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s writings and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
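For the curious, here’s roughly what the ‘start a sentence, let the model finish it’ step looks like in code. This is a minimal sketch using the Hugging Face transformers library with a generic off-the-shelf model; Google’s actual pipeline (the tuned model, the training corpus, the curation tooling) hasn’t been published at this level of detail, so the model name, prompt, and generation parameters below are illustrative assumptions only.

```python
# A sketch of the completion step only; fine-tuning on an author's corpus
# (as the Google team did with Coupland's ~1M words) is a separate step.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # generic stand-in, NOT the tuned model Google used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The class of 2030 will"  # hypothetical sentence opener
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,          # sample instead of always taking the top token
    top_p=0.9,               # nucleus sampling keeps suggestions varied
    num_return_sequences=5,  # several candidates for the human to sift
    pad_token_id=tokenizer.eos_token_id,
)
# The human's role, per Coupland: comb through the candidates for "gems."
for candidate in outputs:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```

The division of labour matters here: the model proposes, the author disposes.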

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at the end of this posting.)* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
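To underline just how mechanical the original technique is, here’s a minimal sketch of a literal, Burroughs-style cut-up: cut a passage into short fragments, shuffle them, and paste them back together. The sample passage is my own placeholder.

```python
import random

def cut_up(text, fragment_length=4):
    """Cut a passage into fragments of a few words each, shuffle, rejoin."""
    words = text.split()
    fragments = [
        words[i:i + fragment_length]
        for i in range(0, len(words), fragment_length)
    ]
    random.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

# Placeholder passage for illustration.
passage = (
    "the scissors render the process explicit and clear classical prose "
    "can be composed entirely of rearranged fragments found on the page"
)
print(cut_up(passage))
```

Coupland’s project adds a statistical model between the scissors and the page, but the author-as-editor role is much the same.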

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 2022.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questions about what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
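As an aside, the computer-science sense of the term describes a real and common design technique, especially in games, where a flawless opponent is no fun to play against. Here’s a minimal sketch of the idea; the moves and scoring function below are made-up placeholders, not any particular game’s logic.

```python
import random

def score(move):
    """Placeholder evaluation function: higher is better for the AI."""
    return {"block": 10, "advance": 6, "wait": 1}[move]

def choose_move(moves, blunder_rate=0.3):
    """Artificial stupidity: usually play the best move, sometimes blunder."""
    if random.random() < blunder_rate:
        return random.choice(moves)  # deliberately introduced error
    return max(moves, key=score)     # otherwise play optimally

print(choose_move(["block", "advance", "wait"]))
```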

Finally

The episode certainly got me thinking, if not quite in the way its producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
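Since ELIZA comes up, here’s a minimal sketch of the keyword-and-reflection trick that made it work. This is a toy illustration of the technique, not Weizenbaum’s actual DOCTOR script, which had a far richer rule set.

```python
import random
import re

# Swap first- and second-person words so echoed fragments sound natural.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, candidate responses) pairs, tried in order; last rule catches all.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    for pattern, responses in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my work"))
# Possible output: "Why do you feel anxious about your work?"
```

There is no understanding anywhere in that loop, just pattern matching and pronoun swapping, which is why Weizenbaum found his users’ faith in the programme so unsettling.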

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take the word of Matthew Braga (whose June 14, 2017 CBC article is excerpted below).

In any event, Toronto seemed to have a mild advantage over Montréal initially, with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy, and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives but at that point the article’s focus is exclusively Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: ‘xxx is unlike any other business before it’ and ‘it is the technological revolution.’ Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant,’ which, if you look at his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with the director) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which is opening in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that there are anti-technology terrorists. I don’t think I’ve covered that topic in a while, so an Aug. 31, 2012 posting is the most recent and, despite the title, “In depth and one year later—the nanotechnology bombings in Mexico,” provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz’s April 9, 2014 article for Vocativ.com. I do have one observation about the article, where Markowitz has linked some recent protests in San Francisco to the bombings in Mexico. Those protests in San Francisco seem more like a ‘poor vs. the rich’ situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]

The ‘technological singularity’ is controversial and contested (from the Wikipedia entry).

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”

I find the ‘golly-gee whizness’ of Paglen’s and Pfister’s take on nanotechnology disconcerting, but they can’t be dismissed. There are projects where people are testing retinal implants which allow them to see again. There is a lot of work in the field of medicine designed to make therapeutic procedures gentler on the body by making their actions specific to diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with this one: Nanotechnology enables robots and human enhancement: part 1. (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I’m not sure if this error is Paglen’s or Getlen’s but nanotechnology is not “microscopic, networked machines” as Paglen’s quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots) but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines or devices, for that matter. More importantly, it seems Paglen’s main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of “evolving technologies that we do not understand” brings to mind a project that was announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk.

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating a human brain and uploading it to a computer, there’s an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting’s excerpt from the Wikipedia entry on Technological Singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.
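To get a feel for the scale gap in that excerpt, here’s a back-of-the-envelope calculation in Python (the neuron counts are the figures quoted above; the ratio is my own arithmetic):

c_elegans_neurons = 302            # neurons in the C. elegans worm, per the excerpt
human_neurons = 86_000_000_000     # ~86 billion neurons in a human brain, per the excerpt
print(human_neurons / c_elegans_neurons)   # -> roughly 2.85e8, a ~285-million-fold gap

In other words, the brain that scientists can’t yet emulate is hundreds of millions of times smaller than the one Kurzweil and company hope to upload.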

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers who suggest everything has consciousness, then it follows that the computers/robots/etc. hosting a human brain emulation may not appreciate the arrangement, which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here’s more about David Chalmers,

Finally, I wasn’t expecting to write about David Chalmers so my notes aren’t very good. He’s a philosopher; here’s an excerpt from Chalmers’ TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session started by a brain researcher (Nancy Kanwisher) who breaks the brain down into various processing regions (vastly oversimplified but the easiest way to summarize her work in this context). Chalmers reviewed the ‘science of consciousness’ and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains, and that same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it fundamental in the same way that we consider time, space, and mass to be fundamental. He noted that there’s precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers’ next suggestion is a little more outré and is based on some thinking (sorry, I didn’t catch the theorist’s name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

‘Eddie’ the robot, US National Security Agency talks back to Ed Snowden, at TED 2014’s Session 8: Hacked

The session started 30 minutes earlier than originally scheduled and as a consequence I got to the party a little late. First up, Marco Tempest, magician and technoillusionist, introduced and played with EDI (electronic deceptive intelligence; pronounced Eddy), a large, anthropomorphic robot (it had a comic-book-style face on the screen that served as its face and was reminiscent of Ed Snowden’s appearance in a telepresent robot). This was a slick presentation combining magic and robotics, bringing to mind Arthur C. Clarke’s comment, “Any sufficiently advanced technology is indistinguishable from magic,” which I’m sure Tempest mentioned before I got there. Interestingly, he articulated the robot’s perspective that humans are fragile and unpredictable, inspiring fear and uncertainty in the robot. It’s the first time I’ve encountered our relationship from the robot’s perspective. Thank you, Mr. Tempest.

Rick Ledgett, deputy director of the US National Security Agency (NSA), appeared on screen as he attended remotely, though not telepresently as Ed Snowden did earlier this week, to be interviewed by a TED moderator (Chris Anderson, I think). Technical problems meant the interview was interrupted and stopped while the tech guys scrambled to fix the problem. Before he was interrupted, Ledgett answered a question as to whether or not Snowden could have taken alternative actions. Ledgett made clear that he (and presumably the NSA) does not consider Snowden to be a whistleblower. It was a little confusing to me but it seemed that Ledgett was suggesting whistleblowing is legitimate only when it’s done to the corporate sector. As well, Ledgett said that Snowden could have reported to his superiors and to various oversight agencies rather than making his findings public. These responses, of course, are predictable, so what made the interview interesting was Ledgett’s demeanour. He was careful not to say anything inflammatory and seemed reasonable. He is the right person to represent the NSA. He doesn’t seem to know how dangerous and difficult whistleblowing is, whether it’s done to a corporate entity or a government agency. Whether or not you agree with Snowden’s actions, the response to them is classic. I went to a talk some years ago where the speaker, Mark Wexler, who teaches business ethics at Simon Fraser University, said that whistleblowers often lose their careers, their relationships, and their families due to the pressures brought to bear on them.

Ledgett rejoins the TED stage after Kurzweil and it sounds like he has been huddling with a communications team as he reframes his and Snowden’s participation as part of an important conversation. Clearly, the TED team has been in touch with Snowden, who rebuts Ledgett’s suggestions about alternative routes. Now Ledgett talks tough as he describes Snowden as arrogant. He states somewhere in all this that Snowden’s actions have endangered lives, and the moderator presses him for examples. Ledgett’s response features examples that are general and scenario-based. When pressed, Ledgett indulges in a little sarcasm, suggesting that things would be easier with a badboy.com as a site where nefarious individuals would hang out. Ledgett makes some valid points about the need for some secrecy, and he does state that he feels transparency is important and that the NSA has not been good about it. Ledgett notes that every country in the world has a means of forcing companies to reveal information about users, and that some countries use the claim that they don’t force such revelations (effectively lying) as a marketing tool. The interview switches to a discussion of metadata, its importance, and whether or not it provides more information about individuals than most people realize. Ledgett rejects that notion. I have to go; I hope to get back and point you to other reports with more info about this fascinating interview.

Ed Yong, uber science blogger, from his TED biography,

Ed Yong blogs with a mission: igniting excitement for science in everyone, regardless of their education or background.

The award-winning blog Not Exactly Rocket Science (hosted by National Geographic) is the epicenter of Yong’s formidable web and social media presence. In its posts, he tackles the hottest and most bizarre topics in science journalism. When not blogging, he also manages to contribute to Nature, Wired, Scientific American and many other web and print outlets. As he says, “The only one that matters to me, as far as my blog is concerned, is that something interests me. That is, excites or inspires or amuses me.”

Yong talked about mind-controlling parasites such as tapeworms and Gordian worms in the context of his fascination with how the parasites control animal behaviour. (I posted about a parasite infecting and controlling honey bees in an Aug. 2, 2012 piece.) Yong is liberal, in a very witty way, with his sexual references, such as castrating, mind-controlling parasites. He also suggests that humans may in some instances (estimates suggest up to 1/3 of us) be controlled by parasites and that our notions of individual autonomy are a little overblown.

Ray Kurzweil, Mr. Singularity, describes evolution and suggests that humans are not evolving quickly enough given rapidly changing circumstances. He focuses on human brains and the current theories about their processing capabilities and segues into artificial intelligence. He makes the case that we are preparing for a quantitative leap in intelligence as our organic brains are augmented by the artificial.

Kurzweil was last mentioned here in a Jan. 6, 2010 posting in the context of reverse-engineering brains.

Human, Soul & Machine: The Coming Singularity! exhibition Oct. 5, 2013 – August 31, 2014 at Baltimore’s Visionary Art Museum

Doug Rule’s Oct. 4, 2013 article for the Baltimore (Maryland, US) edition of the Metro Weekly highlights a rather unusual art/science exhibition (Note: Links have been removed),

Maybe the weirdest, wildest museum you’ll ever visit, Baltimore’s American Visionary Art Museum opens its 19th original thematic yearlong exhibition this weekend. Human, Soul & Machine: The Coming Singularity! is what the quirky museum, focused on presenting self-taught artists, bills as its most complex subject yet, a playful examination of the serious impact of technology — in all its forms, from artificial intelligence to nanotechnology to Big Data — on our lives, as seen through the eyes of more than 40 artists, futurists and inventors in a hot-wired blend of art, science, humor and imagination.

The show opened Oct. 5, 2013 and runs until August 31, 2014. The exhibition webpage offers a description of the show and curation,

Curated by AVAM founder and director Rebecca Alban Hoffberger, this stirring show harnesses the enchanting visual delights of remarkable visionary artists and their masterworks. Among them: Kenny Irwin’s Robotmas—a special installation from his Palm Springs Robo-Lights display, glowing inside of a central black box theater at the heart of this exhibition; a selection of Alex Grey’s Sacred Mirrors; O.L. Samuels’ 7-ft tall Godzilla—a creation first imagined in response to the devastating use of the A-bomb on Hiroshima and Nagasaki; Rigo 23’s delicate anti-drone drawings; Allen Christian’s life-sized Piano Family—a love song to string theory; Fred Carter’s massive wooden carvings—created as a warning of destruction from industry’s manipulation of nature; and much more!

The exhibition media kit features a striking (imo) graphic image representing the show,

American Visionary Art Museum graphic for Human, Soul, and Machine exhibition [downloaded from http://www.avam.org/news-and-events/pdf/press-kits/Singularity/HSM-MediaKit-Web.pdf]


The list of artists includes one person familiar to anyone following the ‘singularity’ story even occasionally, Ray Kurzweil.

Reverse engineering the brain Ray Kurzweil style; funding for neuroprosthetics; a Canadian digital power list for 2009

After much hemming and hawing, I finally got around to reading something about Ray Kurzweil and his ideas in an interview at the H+ site and quite unexpectedly was engaged by his discussion of consciousness. From the interview,

I get very excited about discussions about the true nature of consciousness, because I’ve been thinking about this issue for literally 50 years, going back to junior high school. And it’s a very difficult subject. When some article purports to present the neurological basis of consciousness… I read it. And the articles usually start out, “Well, we think that consciousness is caused by…” You know, fill in the blank. And then it goes on with a big extensive examination of that phenomenon. And at the end of the article, I inevitably find myself thinking… where is the link to consciousness? Where is any justification for believing that this phenomenon should cause consciousness? Why would it cause consciousness?

Some scientists say, “Well, it’s not a scientific issue, therefore it’s not a real issue. Therefore consciousness is just an illusion and we should not waste time on it.” But we shouldn’t be too quick to throw it overboard because our whole moral system and ethical system is based on consciousness.

The article is well worth a read and I have to say I enjoyed his comments about science fiction movies. I’m not enamoured of his notion about trying to reverse engineer brains, no matter how ‘mindfully’ done. I suspect I have a fundamental disagreement with many of Kurzweil’s ideas, which as far as I can tell are profoundly influenced by his experience and success in IT (information technology).

Unlike Kurzweil, I don’t view the brain or genomes as computer code but I will read more about his work and ideas as he makes me think about some of my unconscious (pun intended) assumptions. (Note: in the H+ article Kurzweil mentions some nanotechnology guidelines from what the interviewers call the Forsyth Institute; I believe Kurzweil was referring to the Foresight Institute’s nanotechnology guidelines found here.)

I guess I’m getting a little blasé about money as I find the $1.6 million US funding awarded to help with neuroprosthetics for returning US soldiers a little on the skimpy side. From the news item on Nanowerk,

The conflicts in Iraq and Afghanistan have left a terrible legacy: more than 1,200 returning American soldiers have lost one or more limbs. To address this growing national need, researchers at Worcester Polytechnic Institute (WPI) are laying the groundwork for a new generation of advanced prosthetic limbs that will be fully integrated with the body and nervous system. These implantable neuroprosthetics will look and function like natural limbs, enabling injured soldiers and the more than 2 million other amputees in the United States lead higher quality, more independent lives.

As for making these limbs more natural looking, I find this contrasts a bit with some of Lanfranco Aceti’s work (I first posted my comments about it here) where he notes that males (under 50) don’t want limbs that look natural. I don’t know if he or someone else has followed up on that but it certainly poses an intriguing question about how we may be starting to view our bodies, gender differences and all.

Michael Geist has a 2009 Canadian digital power list on The Tyee website here. I was surprised that Gary Goodyear (Minister of State for Science and Technology) received no mention, given his portfolio.

Nanotechnology enables robots and human enhancement: part 2

Mary King’s project on Robots and AI, the one I mentioned yesterday, was written in 2007 so there have been some changes since then, but her focus is largely cultural and that doesn’t change so quickly. The bird’s eye view she provides of the situation in Japan and other parts of Asia contrasts with the information and ideas that are common currency in North America and, I suspect, Europe too. (As for other geographic regions, I don’t venture any comments as I’m not sufficiently familiar with the thinking there.) Take for example this,

South Korea, meanwhile, has not only announced that by 2010 it expects to have robo-cops patrolling the streets alongside its police force and army, but that its “Robot Ethics Charter” will take effect later this year. The charter includes Asimov-like laws for the robots, as well as guidelines to protect robots from abuse by humans. South Korea is concerned that some people will become addicted to robots, may want to marry their android or will use robots for illegal activities. The charter demands full human control over the robots, an idea that is likely to be popular with Japanese too. But a number of organizations and individuals in the West are bound to criticize laws that do not grant equal “human” rights to robots.

Mary goes on to cite some of the work on roboethics and robo-rights being done in the West and gives a brief discussion of some of the more apocalyptic possibilities. I think the latest incarnation of Battlestar Galactica anchored its mythology in many of the “Western” fears associated with the arrival of intelligent robots. She also mentions this,

Beyond robots becoming more ubiquitous in our lives, a vanguard of Western scientists asserts that humans will merge with the machine. Brooks says “… it is clear that robotic technology will merge with biotechnology in the first half of this century,” and he therefore concludes that “the distinction between us and robots is going to disappear.”

Leading proponents of Strong AI state that humans will transcend biology and evolve to a higher level by merging with robot technology. Ray Kurzweil, a renowned inventor, transhumanist and the author of several books on “spiritual machines,” claims that immortality lies within the grasp of many of us alive today.

The concept of transhumanism does not accord well with the Japanese perspective,

Japan’s fondness for humanoid robots highlights the high regard Japanese share for the role of humans within nature. Humans are viewed as not being above nature, but a part of it.

This reminds me of the discussion taking place on the topic of synthetic biology (blog posting here) where the synthetic biologists are going to reconfigure the human genome to make it better. According to Denise Caruso (executive director of the Hybrid Vigor Institute), many of the synthetic biologists have backgrounds in IT, not biology. I highly recommend Mary’s essay. It’s a longish read (5,000 words) but well worth it for the insights it provides.

In Canada, we are experiencing robotic surveillance at the border with the US. The CBC reported in June that the US was launching a drone plane in the Great Lakes region of the border. It was the 2nd drone, the 1st having been deployed over the Manitoba border, and there is talk that a drone will be used on the BC border in the future. For details, go here. More tomorrow.

Waiting for Martha

Last April (2008), Canada’s National Institute of Nanotechnology (NINT) announced a new chairperson for their board, Martha Cook Piper. I was particularly interested in the news since she was the president of the University of British Columbia (UBC) for a number of years during which she maintained a pretty high profile locally and, I gather, nationally. She really turned things around at UBC and helped it gain more national prominence.

I contacted NINT and sent some interview questions in May or June last year. After some months (as I recall it was Sept. or Oct. 2008), I got an email address for Martha and redirected my queries to her. She was having a busy time during the fall and through Christmas into 2009 with the consequence that my questions have only recently been answered. At this point, someone at NINT is reviewing the answers and I’m hopeful that I will finally have the interview in the near future.

There is a documentary about Ray Kurzweil (‘Mr. Singularity’) making the rounds. You can see a trailer and a preview article here at Fast Company.

As you may have guessed, there’s not a lot of news today.

The poetry of Canadian Copyright Law

Techdirt had an item, Intellectual Property Laws Rewritten as Poetry. The poet, Yehuda Berlinger, has included Canada’s copyright law in the oeuvre. You can read the verse here. It’s surprisingly informative given how amusing and concise the verses are.

On a completely other note, there’s an article in Fast Company about a haptic exhibition in Japan that’s quite intriguing in light of the Nokia Morph. Part of an exhibit last year, the Morph concept is a flexible, foldable, bendable (you get the idea) phone. As far as I know, they (University of Cambridge and Nokia) have yet to produce a prototype (last year they had an animation which demonstrated the concept). Getting back to Japan, one of the exhibits was a design for speakers where you control the volume by changing their shapes. Haptic Speakers: Reach Out and Touch Some Sound is the article. Do go and read it. I found it very helpful to see the pictures (which seems ironic given that the article is about the sense of touch).

I’ve been curious about research into disabled folks using their ‘thought waves’ to control equipment or machinery. I’ve found a description of some of the research in Richard Jones’s blog, but it’s in the context of a discussion of Ray Kurzweil and some of Kurzweil’s ideas regarding the ‘singularity’. Anyway, Jones offers a good description of some of the ‘thought wave’ research. As for Kurzweil, one of these days I will try to read some of the material he’s written. The little I have seen suggests that he has absolutely no concept of human nature, in much the same way that economists don’t.