
The physics of the multiverse of madness

The Doctor Strange movie (Doctor Strange in the Multiverse of Madness, released May 6, 2022) has inspired an essay on physics. From a May 9, 2022 news item on phys.org,

If you’re a fan of science fiction films, you’ll likely be familiar with the idea of alternate universes—hypothetical planes of existence with different versions of ourselves. As far from reality as it sounds, it is a question that scientists have contemplated. So just how well does the fiction stack up with the science?

The many-worlds interpretation is one idea in physics that supports the concept of multiple universes existing. It stems from the way we comprehend quantum mechanics, which defies the rules of our regular world. While it’s impossible to test and is considered an interpretation rather than a scientific theory, many physicists think it could be possible.

“When you look at the regular world, things are measurable and predictable—if you drop a ball off a roof, it will fall to the ground. But when you look on a very small scale in quantum mechanics, the rules stop applying. Instead of being predictable, it becomes about probabilities,” says Sarah Martell, Associate Professor at the School of Physics, UNSW Science.

A May 9, 2022 University of New South Wales (UNSW; Australia) press release originated the news item,

The fundamental quantum equation – called a wave function – shows a particle inhabiting many possible positions, with different probabilities assigned to each. If you were to attempt to observe the particle to determine its position – known in physics as ‘collapsing’ the wave function – you’d find it in just one place. But the particle actually inhabits all the positions allowed by the wave function.
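As a toy illustration of that idea (the positions and amplitudes below are invented for the example, not taken from the press release): assign a complex amplitude to each possible position, square the magnitudes to get the probabilities, and ‘collapse’ by sampling a single outcome.

```python
import random

# Toy wave function: a complex amplitude for each of three possible
# positions. The numbers are illustrative only.
amplitudes = {"left": 0.6 + 0.0j, "middle": 0.0 + 0.6j, "right": 0.52915 + 0.0j}

# Born rule: the probability of finding the particle at a position is the
# squared magnitude of its amplitude (normalized so the total is 1).
probabilities = {pos: abs(a) ** 2 for pos, a in amplitudes.items()}
total = sum(probabilities.values())
probabilities = {pos: p / total for pos, p in probabilities.items()}

# "Collapsing" the wave function: one measurement yields exactly one
# position, drawn according to these probabilities.
measured = random.choices(list(probabilities),
                          weights=list(probabilities.values()))[0]
print(probabilities, "->", measured)
```

Before measurement the particle is described by all three amplitudes at once; the sampling step is the only place a single position appears.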

This interpretation of quantum mechanics is important, as it helps explain some of the quantum paradoxes that logic can’t answer, like why a particle can be in two places at once. While it might seem impossible to us, since we experience time and space as fixed, mathematically it adds up.

“When you make a measurement in quantum physics, you’re only measuring one of the possibilities. We can work with that mathematically, but it’s philosophically uncomfortable that the world stops being predictable,” A/Prof. Martell says.

“If you don’t get hung up on the philosophy, you simply move on with your physics. But what if the other possibility were true? That’s where this idea of the multiverse comes in.”

The quantum multiverse

As depicted in many science fiction films, the many-worlds interpretation suggests our reality is just one of many. The universe supposedly splits or branches into other universes any time an event occurs – whether it’s a molecule moving, what you decide to eat or your choice of career.

In physics, this is best explained through the thought experiment of Schrödinger’s cat. In the many-worlds interpretation, when the box is opened, the observer and the possibly alive cat split in two: an observer looking at a box with a deceased cat, and another looking at a box with a live cat.

“A version of you measures one result, and a version of you measures the other result. That way, you don’t have to explain why a particular probability resulted. It’s just everything that could happen, does happen, somewhere,” A/Prof. Martell says.
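Martell’s “everything that could happen, does happen, somewhere” bookkeeping can be sketched in code (a toy illustration only; the outcome labels and the 50/50 weights are my own): each binary measurement splits every existing branch in two, and no branch is ever discarded.

```python
def branch(worlds, outcomes=("dead cat", "live cat"), p=0.5):
    """Split every existing (history, weight) branch into one branch per
    possible outcome of a single binary measurement."""
    return [(history + (outcome,), weight * (p if outcome == outcomes[0] else 1 - p))
            for history, weight in worlds
            for outcome in outcomes]

# One branch, weight 1, before any measurement has been made.
worlds = [((), 1.0)]

# Open the box three times: the number of branches doubles each time.
for _ in range(3):
    worlds = branch(worlds)

print(len(worlds))                  # 8 branches, one per outcome history
print(sum(w for _, w in worlds))    # the weights still sum to 1.0
```

Nothing here is a simulation of quantum mechanics; it only shows why, under this interpretation, no single probability ever needs to be “chosen” – every history is carried forward with its weight.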

“This is the logic often depicted in science fiction, like Spider-Man: Into the Spider-Verse, where five different Spider-Men exist in different universes, based on the idea that a different event set up each one’s progress and timeline.”

This interpretation suggests that our decisions in this universe have implications for other versions of ourselves living in parallel worlds. But what about the possibility of interacting with these hypothetical alternate universes?

According to the many-worlds interpretation, humans wouldn’t be able to interact with parallel universes as they do in films – although science fiction has creative licence to do so.

“It’s a device used all the time in comic books, but it’s not something that physics would have anything to say about,” A/Prof. Martell says. “But I love science fiction for the creativity and the way that little science facts can become the motivation for a character or the essential crisis in a story with characters like Doctor Strange.”

“If for nothing else, science fiction can help make science more accessible, and the more we get people talking about science, the better,” A/Prof. Martell says.

“I think we do ourselves a lot of good by putting hooks out there that people can grab. So, if we can get people interested in science through popular culture, they’ll be more interested in the science we do.” 

The university also offers a related course, as this October 6, 2020 UNSW press release reveals,

From the morality plays in Star Trek, to the grim futures in Black Mirror, fiction can help explore our hopes – and fears – of the role science might play in our futures.

But sci-fi can be more than just a source of entertainment. When fiction gets the science right (or right enough), sci-fi can also be used to make science accessible to broader audiences. 

“Sci-fi can help relate science and technology to the lived human experience,” says Dr Maria Cunningham, a radio astronomer and senior lecturer in UNSW Science’s School of Physics. 

“Storytelling can make complex theories easier to visualise, understand and remember.”

Dr Cunningham – a sci-fi fan herself – convenes ‘Brave New World’: a course on science fact and fiction aimed at students from a non-scientific background. The course explores the relationship between literature, science, and society, using case studies like Futurama and MacGyver.

She says her own interest in sci-fi long predates her career in science.

“Fiction can help get people interested in science – sometimes without them even knowing it,” says Dr Cunningham.

“Sci-fi has the potential to increase the science literacy of the general population.”

Here, Dr Cunningham shares three tricky physics concepts best explained through science fiction (spoilers ahead).

Cunningham goes on to discuss the Universal Speed Limit, Time Dilation, and, yes, the Many Worlds Interpretation.

The course, “Brave New World: Science Fiction, Science Fact and the Future – GENS4015,” is still offered, but do check the link to make sure it takes you to the latest version (I found 2023). One more thing: it is offered wholly online.

STEM (science, technology, engineering and math) brings life to the global hit television series “The Walking Dead” and a Canadian AI initiative for women and diversity

I stumbled across this June 8, 2022 AMC Networks news release in the last place I was expecting to see a STEM (science, technology, engineering, and mathematics) announcement, i.e., a self-described global entertainment company’s website,

AMC NETWORKS CONTENT ROOM TEAMS WITH THE AD COUNCIL TO EMPOWER GIRLS IN STEM, FEATURING “THE WALKING DEAD”

AMC Networks Content Room and the Ad Council, a non-profit and leading producer of social impact campaigns for 80 years, announced today a series of new public service advertisements (PSAs) that will highlight the power of girls in STEM (science, technology, engineering and math) against the backdrop of the global hit series “The Walking Dead.”  In the spots, behind-the-scenes talent of the popular franchise, including Director Aisha Tyler, Costume Designer Vera Chow and Art Director Jasmine Garnet, showcase how STEM is used to bring the post-apocalyptic world of “The Walking Dead” to life on screen.  Created by AMC Networks Content Room, the PSAs are part of the Ad Council’s national She Can STEM campaign, which encourages girls, trans youth and non-binary youth around the country to get excited about and interested in STEM.

The new creative consists of TV spots and custom videos created specifically for TikTok and Instagram.  The spots also feature Gitanjali Rao, a 16-year-old scientist, inventor and activist, interviewing Tyler, Chow and Garnet discussing how they and their teams use STEM in the production of “The Walking Dead.”  Using before and after visuals, each piece highlights the unique and unexpected uses of STEM in the making of the series.  In addition to being part of the larger Ad Council campaign, the spots will be available on “The Walking Dead’s” social media platforms, including Facebook, Instagram, Twitter and YouTube pages, and across AMC Networks linear channels and digital platforms.

PSA:   https://youtu.be/V20HO-tUO18

Social: https://youtu.be/LnDwmZrx6lI

Said Kim Granito, EVP of AMC Networks Content Room: “We are thrilled to partner with the Ad Council to inspire young girls in STEM through the unexpected backdrop of ‘The Walking Dead.’  Over the last 11 years, this universe has been created by an array of insanely talented women that utilize STEM every day in their roles.  This campaign will broaden perceptions of STEM beyond the stereotypes of lab coats and beakers, and hopefully inspire the next generation of talented women in STEM.  Aisha Tyler, Vera Chow and Jasmine Garnet were a dream to work with and their shared enthusiasm for this mission is inspiring.”

“Careers in STEM are varied and can touch all aspects of our lives. We are proud to partner with AMC Networks Content Room on this latest work for the She Can STEM campaign. With it, we hope to inspire young girls, non-binary youth, and trans youth to recognize that their passion for STEM can impact countless industries – including the entertainment industry,” said Michelle Hillman, Chief Campaign Development Officer, Ad Council.

Women make up nearly half of the total college-educated workforce in the U.S., but they only constitute 27% of the STEM workforce, according to the U.S. Census Bureau. Research shows that many girls lose interest in STEM as early as middle school, and this path continues through high school and college, ultimately leading to an underrepresentation of women in STEM careers.  She Can STEM aims to dismantle the intimidating perceived barrier of STEM fields by showing girls, non-binary youth, and trans youth how fun, messy, diverse and accessible STEM can be, encouraging them to dive in, no matter where they are in their STEM journey.

Since the launch of She Can STEM in September 2018, the campaign has been supported by a variety of corporate, non-profit and media partners. The current funder of the campaign is IF/THEN, an initiative of Lyda Hill Philanthropies.  Non-profit partners include Black Girls Code, ChickTech, Girl Scouts of the USA, Girls Inc., Girls Who Code, National Center for Women & Information Technology, The New York Academy of Sciences and Society of Women Engineers.

About AMC Networks Inc.

AMC Networks (Nasdaq: AMCX) is a global entertainment company known for its popular and critically-acclaimed content. Its brands include targeted streaming services AMC+, Acorn TV, Shudder, Sundance Now, ALLBLK, and the newest addition to its targeted streaming portfolio, the anime-focused HIDIVE streaming service, in addition to AMC, BBC AMERICA (operated through a joint venture with BBC Studios), IFC, SundanceTV, WE tv and IFC Films. AMC Studios, the Company’s in-house studio, production and distribution operation, is behind some of the biggest titles and brands known to a global audience, including The Walking Dead, the Anne Rice catalog and the Agatha Christie library.  The Company also operates AMC Networks International, its international programming business, and 25/7 Media, its production services business.

About Content Room

Content Room is AMC Networks’ award-winning branded entertainment studio that collaborates with advertising partners to build brand stories and create bespoke experiences across an expanding range of digital, social, and linear platforms. Content Room enables brands to fully tap into the company’s premium programming, distinct IP, deep talent roster and filmmaking roots through an array of creative partnership opportunities— from premium branded content and integrations— to franchise and gaming extensions.

Content Room is also home to the award-winning digital content studio which produces dozens of original series annually, which expands popular AMC Networks scripted programming for both fans and advertising partners by leveraging the built-in massive series and talent fandoms.

The Ad Council

The Ad Council is where creativity and causes converge. The non-profit organization brings together the most creative minds in advertising, media, technology and marketing to address many of the nation’s most important causes. The Ad Council has created many of the most iconic campaigns in advertising history. Friends Don’t Let Friends Drive Drunk. Smokey Bear. Love Has No Labels.

The Ad Council’s innovative social good campaigns raise awareness, inspire action and save lives. To learn more, visit AdCouncil.org, follow the Ad Council’s communities on Facebook and Twitter, and view the creative on YouTube.

You can find the ‘She Can STEM’ Ad Council initiative here.

Canadian women and the AI4Good Lab

A June 9, 2022 posting on the Borealis AI website describes an artificial intelligence (AI) initiative designed to encourage women to enter the field,

The AI4Good Lab is one of those programs that creates exponential opportunities. As the leading Canadian AI-training initiative for women-identified STEM students, the lab helps encourage diversity in the field of AI. Participants work together to use AI to solve a social problem, delivering untold benefits to their local communities. And they work shoulder-to-shoulder with other leaders in the field of AI, building their networks and expanding the ecosystem.

At this year’s [2022] AI4Good Lab Industry Night, program partners – like Borealis AI, RBC [Royal Bank of Canada], DeepMind, Ivado and Google – had an opportunity to (virtually) meet the nearly 90  participants of this year’s program. Many of the program’s alumni were also in attendance. So, too, were representatives from CIFAR [Canadian Institute for Advanced Research], one of Canada’s leading global research organizations.

Industry participants – including Dr. Eirene Seiradaki, Director of Research Partnerships at Borealis AI, Carey Mende-Gibson, RBC’s Location Intelligence ambassador, and Lucy Liu, Director of Data Science at RBC – talked with attendees about their experiences in the AI industry, discussed career opportunities and explored various career paths that the participants could take in the industry. For the entire two hours, our three tables  and our virtually cozy couches were filled to capacity. It was only after the end of the event that we had the chance to exchange visits to the tables of our partners from CIFAR and AMII [Alberta Machine Intelligence Institute]. Eirene did not miss the opportunity to catch up with our good friend, Warren Johnston, and hear first-hand the news from AMII’s recent AI Week 2022.

Borealis AI is funded by the Royal Bank of Canada. Somebody wrote this for the homepage (presumably tongue in cheek),

All you can bank on.

The AI4Good Lab can be found here,

The AI4Good Lab is a 7-week program that equips women and people of marginalized genders with the skills to build their own machine learning projects. We emphasize mentorship and curiosity-driven learning to prepare our participants for a career in AI.

The program is designed to open doors for those who have historically been underrepresented in the AI industry. Together, we are building a more inclusive and diverse tech culture in Canada while inspiring the next generation of leaders to use AI as a tool for social good.

The most recent programme ran from May 3 to June 21, 2022 in Montréal, Toronto, and Edmonton.

There are a number of AI for Good initiatives, including this one from the International Telecommunication Union (a United Nations agency).

For the curious, I have a May 10, 2018 post “The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence” where I ‘examine’ RBC and its AI initiatives.

Wound healing without sutures

Whoever wrote this November 28, 2021 Technion-Israel Institute of Technology press release (also on EurekAlert) seems to have had a lot of fun doing it,

“Sutures? That’s practically medieval!”

It is a staple of science fiction to mock sutures as outdated. The technique has, after all, been in use for at least 5,000 years. Surely medicine should have advanced since ancient Egypt. Professor Hossam Haick from the Wolfson Department of Chemical Engineering at the Technion has finally turned science fiction into reality. His lab succeeded in creating a smart sutureless dressing that binds the wound together, wards off infection, and reports on the wound’s condition directly to the doctors’ computers. Their study was published in Advanced Materials.

Current surgical procedures entail the surgeon cutting the human body, doing what needs to be done, and sewing the wound shut – an invasive procedure that damages surrounding healthy tissue. Some sutures degrade by themselves – or should degrade – as the wound heals. Others need to be manually removed. A dressing is then applied over the wound, and medical personnel monitor the wound by removing the dressing to allow observation for signs of infection like swelling, redness, and heat. This procedure is painful to the patient and disruptive to healing, but it is unavoidable. Working with these methods also means that infection is often discovered late, since it takes time for visible signs to appear, and more time for the inspection to come round and see them. In developed countries, with good sanitation available, about 20% of patients develop infections post-surgery, necessitating additional treatment and extending the time to recovery. The figure and consequences are much worse in developing countries.

How will it work with Prof. Haick’s new dressing?

Prior to beginning a procedure, the dressing – which is very much like a smart band-aid – developed by Prof. Haick’s lab will be applied to the site of the planned incision. The incision will then be made through it. Following the surgery, the two ends of the wound will be brought together, and within three seconds the dressing will bind itself together, holding the wound closed, similarly to sutures. From then on, the dressing will continuously monitor the wound, tracking the healing process, checking for signs of infection like changes in temperature, pH, and glucose levels, and reporting to the medical personnel’s smartphones or other devices. The dressing itself will also release antibiotics onto the wound area, preventing infection.

“I was watching a movie on futuristic robotics with my kids late one night,” said Prof. Haick, “and I thought, what if we could really make self-repairing sensors?”

Most people discard their late-night cinema-inspired ideas. Not Prof. Haick, who, the very next day after his Eureka moment, was researching and making plans. The first publication about a self-healing sensor came in 2015 (read more about it on the Technion website here). At that time, the sensor needed almost 24 hours to repair itself. By 2020, sensors were healing in under a minute (read about the study by Muhammad Khatib, a student in Prof. Haick’s lab here), but while it had multiple applications, it was not yet biocompatible, that is, not usable in contact with skin and blood. Creating a polymer that would be both biocompatible and self-healing was the next step, and one that was achieved by postdoctoral fellow Dr. Ning Tang.

The new polymer is structured like a molecular zipper, made from sulfur and nitrogen: the surgeon’s scalpel opens it; then pressed together, it closes and holds fast. Integrated carbon nanotubes provide electric conductivity and the integration of the sensor array. In experiments, wounds closed with the smart dressing healed as fast as those closed with sutures and showed reduced rates of infection.

“It’s a new approach to wound treatment,” said Prof. Haick. “We introduce the advances of the fourth industrial revolution – smart interconnected devices, into the day-to-day treatment of patients.”

Prof. Haick is the head of the Laboratory for Nanomaterial-based Devices (LNBD) and the Dean of Undergraduate Studies at the Technion. Dr. Ning Tang was a postdoctoral fellow in Prof. Haick’s laboratory and conducted this study as part of his fellowship. He has now been appointed an associate professor at Shanghai Jiao Tong University.

Here’s a link to and a citation for the paper,

Highly Efficient Self-Healing Multifunctional Dressing with Antibacterial Activity for Sutureless Wound Closure and Infected Wound Monitoring by Ning Tang, Rongjun Zhang, Youbin Zheng, Jing Wang, Muhammad Khatib, Xue Jiang, Cheng Zhou, Rawan Omar, Walaa Saliba, Weiwei Wu, Miaomiao Yuan, Daxiang Cui, Hossam Haick. DOI: https://doi.org/10.1002/adma.202106842 First published: 05 November 2021

This paper is behind a paywall.

I usually like to have three links to a news/press release and, in my searches for a third source for this press release, I stumbled onto the technioncanada.org website. They seem to have scooped everyone, including Technion, as they have a November 25, 2021 posting of the press release.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: she had a chat where her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted that Akihiko’s ‘wife’ is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over which you have control. But perhaps that’s alright, since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.)

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI researchers, and musicians collaborated to finish it.

The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University of Munich) doesn’t consider the work to be a piece of music. He does have a point: Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork, a set of probabilities where an algorithm chooses which note comes next.

Another artist was also represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than as a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 for the ‘Slogans for the Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
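The Google project used a large neural language model tuned on Coupland’s million-word corpus; the details aren’t in the posting. As an illustrative stand-in only (the corpus line and every name below are my own invention, not anything from the project), the complete-my-sentence mechanic can be sketched with a toy word-level bigram model:

```python
import random

def train_bigrams(corpus: str) -> dict:
    """Build a bigram table mapping each word to the words that follow it."""
    words = corpus.split()
    table = {}
    for word, nxt in zip(words, words[1:]):
        table.setdefault(word, []).append(nxt)
    return table

def complete(prompt: str, table: dict, max_words: int = 8, seed: int = 0) -> str:
    """Extend the prompt one word at a time, sampling from the bigram table."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        options = table.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

# A stand-in "corpus"; the real project used ~1M words of Coupland's writing.
corpus = "the future is already here the future is not evenly distributed"
table = train_bigrams(corpus)
print(complete("the future", table))
```

The real system is vastly more capable, but the workflow matches the description: tune a model on a body of text, prompt it with an opening, and let it propose continuations for the human to comb through.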

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.
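The mechanics Burroughs describes translate almost directly into code: cut a passage into fragments, shuffle them, and paste the pieces back together. Here’s a minimal, hypothetical sketch (nothing Burroughs actually used, obviously — he had scissors):

```python
import random

def cut_up(text, fragment_size=3, seed=42):
    """Cut text into fragments of a few words each, shuffle, and rejoin."""
    words = text.split()
    fragments = [words[i:i + fragment_size]
                 for i in range(0, len(words), fragment_size)]
    rng = random.Random(seed)  # fixed seed so the collage is reproducible
    rng.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

passage = ("clear classical prose can be composed entirely "
           "of rearranged cut ups")
print(cut_up(passage))
```

Every word of the original survives the operation; only the juxtaposition changes, which is the whole point of the technique.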

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means of pushing back against bias other than training AI with more black faces to help it learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, Google’s Canada blog notes a $4.5M grant to the institution in a November 21, 2016 posting,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki to have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite Joseph Weizenbaum (the programme’s creator) insisting otherwise.
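ELIZA’s apparent understanding rested on simple keyword pattern matching: the DOCTOR script would find a keyword, swap the pronouns in the user’s phrase, and reflect it back as a question. A minimal sketch of the technique — the rules and responses here are hypothetical, not Weizenbaum’s actual script:

```python
import re

# Pronoun swaps applied to the matched fragment.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword patterns paired with response templates, in priority order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    """Return the first matching rule's response, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I feel lost in my work"))
# → Why do you feel lost in your work?
```

No model of meaning anywhere, just string substitution, which is why Weizenbaum found users’ faith in the program so unsettling.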

U of Ottawa & Ingenium (Canada’s museums of science and innovation) team up to make learning fun and foster innovation

This November 4, 2021 University of Ottawa news release (also on EurekAlert and the Ingenium website), seems, borrowing from the movies, to be a teaser rather than a trailer or preview of what is to come.

Today [November 4, 2021], University of Ottawa and Ingenium – Canada’s Museums of Science and Innovation – announced a partnership that brings an interactive and educational digital experience to Kanata North. Innovating beyond the walls of its museums, Ingenium has created iOS [formerly iPhone OS {operating system}] and Nintendo Switch games to make learning fun. On site at the University’s Kanata North campus at 535 Legget Drive, visitors can now experience what it is like to fly like a honeybee, go on a mission to Mars, or test their skills as a fighter pilot in WWI.

“The University’s partnership with Ingenium has been a long and productive one, anchored by a common mandate to promote science education and to create environments that foster science and technology innovation,” said Veronica Farmer, Director, Partnerships and Commercialization at uOttawa Kanata North. “The digital games installation reflects this intent and definitely brings an element of fun to our Kanata North campus.”

Opened in 2018, uOttawa’s Kanata North campus has been partnering with Kanata North companies, connecting them to exceptional young talent, valuable education programming, relevant research expertise as well as global networks – all important factors to facilitate innovation. Recently expanded to 8,000 sq ft, uOttawa Kanata North offers a large, dynamic collaborative and training space.

“As a national institution, we know that digital innovation is key to connecting with all Canadians. In partnering with uOttawa, we hope to foster creativity, discovery and innovation [emphasis mine] in the next generation,” said Darcy Ferron, Vice-President, Business Development [emphasis mine] at Ingenium.

This digital experience [emphasis mine] will benefit students, researchers, alumni and partners based in Kanata North. All are welcome to visit the uOttawa Kanata North campus and immerse themselves in an innovative, interactive and educational digital experience through this unique installation dedicated to showcasing that science and technology innovation starts with curiosity and exploration.

“Ingenium has been the place where this has happened for generations and this digital experience offers a reminder to all that visit our Kanata North campus of the deep connection between science and technology education, university training and research, and fulfilling careers in technology,” added Veronica Farmer.

###

The University of Ottawa—A crossroads of cultures and ideas

The University of Ottawa is home to over 50,000 students, faculty and staff, who live, work and study in both French and English. Our campus is a crossroads of cultures and ideas, where bold minds come together to inspire game-changing [inadvertent pun] ideas. We are one of Canada’s top 10 research universities—our professors and researchers explore new approaches to today’s challenges. One of a handful of Canadian universities ranked among the top 200 in the world, we attract exceptional thinkers and welcome diverse perspectives from across the globe.

About Ingenium – Canada’s Museums of Science and Innovation

Ingenium oversees three national museums of science and innovation in Ottawa – the Canada Agriculture and Food Museum, the Canada Aviation and Space Museum, and the Canada Science and Technology Museum— and the new lngenium Centre, which houses an exceptional collection, research institute, and digital innovation lab. lngenium takes science engagement to the next level by co-creating participatory experiences, acting as community hubs and connectors, helping Canadians contribute to solving global challenges, and creating a collective impact which extends far beyond the physical spaces of our museums. Ingenium is a vital link between science and society. Our engaging digital content, outreach programs, travelling exhibitions, and collaborative spaces help to educate, entertain, and engage audiences across Canada and around the world.

I do have a few questions. Presumably offering these digital experiences will cost money and there’s no mention of how this is being funded. As well, it’s hard to know when this digital experience will be offered since there’s no mention of any proposed start date.

The innovation (in the instance I’ve emphasized, it’s code for business) part of this endeavour is a bit puzzling. Is this University of Ottawa/Ingenium partnership going to act as a lab for Apple and Nintendo games development?

Finally, if an outsider should wish to visit this digital lab/experience at the University’s Kanata North campus at 535 Legget Drive how should they identify it? There doesn’t seem to be a name for it.

Deus Ex, a video game developer, his art, and reality

The topics of human enhancement and human augmentation have been featured here a number of times from a number of vantage points, including that of a video game series with some thoughtful story lines known under the Deus Ex banner. (My August 18, 2011 posting, August 30, 2011 posting, and Sept. 1, 2016 posting are three which mention Deus Ex in the title but there may be others where the game is noted in the posting.)

A March 19, 2021 posting by Timothy Geigner for Techdirt offers a more fulsome but still brief description of the games along with a surprising declaration (it’s too real) by the game’s creator (Note: Links have been removed),

The Deus Ex franchise has found its way onto Techdirt’s pages a couple of times in the past. If you’re not familiar with the series, it’s a cyberpunk-ish take on the near future with broad themes around human augmentation, and the weaving of broad and famous conspiracy theories. That perhaps makes it somewhat ironic that several of our posts dealing with the franchise have to do with mass media outlets getting confused into thinking its augmentation stories were real life, or the conspiracy theories that centered around leaks for the original game’s sequel were true. The conspiracy theories woven into the original Deus Ex storyline were of the grand variety: takeover of government by biomedical companies pushing a vaccine for a sickness it created, the illuminati, FEMA [US Federal Emergency Management Agency] takeovers, AI-driven surveillance of the public, etc.

And it’s the fact that such conspiracy-driven thinking today led Warren Spector, the creator of the series, to recently state that he probably wouldn’t have created the game today if given the chance. [See pull quote below]

Deus Ex was originally released in 2000 but took place in an alternate 2052 where many of the real world conspiracy theories have come true. The plot included references to vaccinations, black helicopters, FEMA, and ECHELON amongst others, some of which have connotations to real-life events. Spector said, “Interestingly, I’m not sure I’d make Deus Ex today. The conspiracy theories we wrote about are now part of the real world. I don’t want to support that.”

… I’d like to focus on how clearly this illustrates the artistic nature of video games. The desire, or not, to create certain kinds of art due to the reflection such art receives from the broader society is exactly the kind of thing artists operating in other artforms have to deal with. Art imitates life, yes, but in the case of speculative fiction like this, it appears that life can also imitate art. Spector notes that seeing what has happened in the world since Deus Ex was first released in 2000 has had a profound effect on him as an artist. [See pull quote below]

Earlier, Spector had commented on how he was “constantly amazed at how accurate our view of the world ended up being. Frankly it freaks me out a bit.” Some of the conspiracy theories that didn’t end up in the game were those surrounding Denver Airport because they were considered “too silly to include in the game.” These include theories about secret tunnels, connections to aliens and Nazi secret societies, and hidden messages within the airport’s artwork. Spector is now incredulous that they’re “something people actually believe.”

As far back as an Oct. 18, 2013 posting, Geigner was able to write about a UK newspaper that confused Deus Ex with reality,

… I bring you the British tabloid, The Sun, and their amazing story about an augmented mechanical eyeball that, if associated material is to be believed, allows you to see through walls, color-codes friends and enemies, and permits telescopic zoom. Here’s the reference from The Sun.

Oops. See, part of the reason that Sarif Industries’ cybernetic implants are still in their infancy is that the company doesn’t exist. Sarif Industries is a fictitious company from a cyberpunk video game, Deus Ex, set in a future Detroit. …

There’s more about Spector’s latest comments at a 2021 Game Developers Conference in a March 15, 2021 article by Riley MacLeod for Kotaku. There’s more about Warren Spector here. I always thought Deus Ex was developed by the Canadian company Eidos Montréal and, after reading the company’s Wikipedia entry, it seems I may have been only partially correct.

Getting back to Deus Ex being ‘too real’, it seems to me that the line between science fiction and reality is increasingly frayed.

TRIUMF (Canada’s national particle accelerator centre) welcomes Nigel Smith as its new Chief Executive Officer (CEO) on May 17, 2021 and some Hollywood news

I have two bits of news as noted in the headline. There’s news about TRIUMF, located on the University of British Columbia (UBC) endowment lands, and news about Dr. Suzanne Simard (UBC Forestry) and her memoir, Finding the Mother Tree: Discovering the Wisdom of the Forest.

Nigel Smith and TRIUMF (Canada’s national particle accelerator centre)

As soon as I saw his first name, Nigel, I bet myself he’d be from the UK (more about that later in this posting). This is TRIUMF’s third CEO since I started science blogging in May 2008. When I first started it was called TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics) but these days it’s TRIUMF (Canada’s national particle accelerator centre).

As for the organization’s latest CEO, here’s more from a TRIUMF February 12, 2021 announcement page (the text is identical to TRIUMF’s February 12, 2021 press release),

Dr. Nigel Smith, Executive Director of SNOLAB, has been selected to serve as the next Director of TRIUMF.  

Succeeding Dr. Jonathan Bagger, who departed TRIUMF in January 2021 to become CEO of the American Physical Society, Dr. Smith’s appointment comes as the result of a highly competitive, six-month international search. Dr. Smith will begin his 5-year term as TRIUMF Director on May 17, 2021. 

“I am truly honoured to have been selected as the next Director of TRIUMF”, said Dr. Smith. “I have long been engaged with TRIUMF’s vibrant community and have been really impressed with the excellence of its science, capabilities and people. TRIUMF plays a unique and vital role in Canada’s research ecosystem and I look forward to help continue the legacy of excellence upheld by Dr. Jonathan Bagger and the previous TRIUMF Directors”.  

Describing what interested him in the position, Smith spoke to the breadth and impact of TRIUMF’s diverse science programs, stating “TRIUMF has an amazing portfolio of research covering fundamental and applied science that also delivers tangible societal impact through its range of medical and commercialisation initiatives. I am extremely excited to have the opportunity to lead a laboratory with such a broad and world-leading science program.” 

“Nigel brings all the necessary skills and background to the role of Director,” said Dr. Digvir Jayas, Interim Director of TRIUMF, Chair of the TRIUMF Board of Management, and Vice-President, Research and International at the University of Manitoba. “As Executive Director of SNOLAB, Dr. Smith is both a renowned researcher and experienced laboratory leader who offers a tremendous track record of success spanning the local, national, and international spheres. The Board of Management is thrilled to bring Nigel’s expertise to TRIUMF so he may help guide the laboratory through many of the exciting developments on the horizon.”

Dr. Smith joins TRIUMF at an important period in the laboratory’s history, moving into the second year of our current Five-Year Plan (2020-2025) and preparing to usher in a new era of science and innovation that will include the completion of the Advanced Rare Isotope Laboratory (ARIEL) and the Institute for Advanced Medical Isotopes (IAMI) [not to be confused with Amii {Alberta Machine Intelligence Institute}]. This new infrastructure, alongside TRIUMF’s existing facilities and world-class research programs, will solidify Canada’s position as a global leader in both fundamental and applied research. 

Dr. Smith expressed his optimism for TRIUMF, saying “I am delighted to have this opportunity, and it will be a pleasure to lead the laboratory through this next exciting phase of our growth and evolution.” 

Smith is leaving what is probably one of the more unusual laboratories: at a depth of 2 km, SNOLAB is the deepest, cleanest laboratory in the world. (More information at either SNOLAB or its Wikipedia entry.)

Is Smith from the UK? Some clues

I found my subsequent clues on SNOLAB’s ‘bio’ page for Dr. Nigel Smith,

Nigel Smith joined SNOLAB as Director during July 2009. He currently holds a full Professorship at Laurentian University, adjunct Professor status at Queen’s University, and a visiting Professorial chair at Imperial College, London. He received his Bachelor of Science in physics from Leeds University in the U.K. in 1985 and his Ph. D. in astrophysics from Leeds in 1991. He has served as a lecturer at Leeds University, a research associate at Imperial College London, group leader (dark matter) and deputy division head at the STFC Rutherford Appleton Laboratory, before relocating to Canada to oversee the SNOLAB deep underground facility.

The answer would seem to be yes, Nigel James Telfer Smith is originally from the UK.

I don’t know if this is going to be a trend but this is the second ‘Nigel’ to lead TRIUMF. (The Nigels are now tied with the Johns and the Alans. Of course, the letter ‘j’ seems the most popular with four names: John, John, Jack, and Jonathan.) Here’s a list of TRIUMF’s previous CEOs (from the TRIUMF Wikipedia entry),

Since its inception, TRIUMF has had eight directors [now nine] overseeing its operations.

The first Nigel (Lockyer) is described as an American in his Wikipedia entry, although he was born in Scotland and raised in Canada. He has spent most of his adult life in the US, other than his five or six years at TRIUMF. So, the previous Nigel also started life in the UK.

Good luck to the new Nigel.

UBC forestry professor, Suzanne Simard’s memoir going to the movies?

Given that Simard’s memoir, Finding the Mother Tree: Discovering the Wisdom of the Forest, was published last week on May 4, 2021, this is very heady news. From a May 12, 2021 article by Cassandra Gill for the Daily Hive (Note: Links have been removed),

Jake Gyllenhaal is bringing the story of a UBC professor to the big screen.

The Oscar nominee’s production company, Nine Stories, is producing a film based on Suzanne Simard’s memoir, Finding the Mother Tree.

Amy Adams is set to play Simard, who is a forest ecology expert renowned for her research on plants and fungi.

Adams is also co-producing the film with Gyllenhaal through her own company, Bond Group Entertainment.

The BC native [Simard] developed an interest in trees and the outdoors through her close relationship with her grandfather, who was a horse logger.

Her 30 year career and early life is documented in the memoir, which was released last week on May 4 [2021]. Simard explores how trees have evolved, have memories, and are the foundation of our planet’s ecosystem — along with her own personal experiences with grief.

The scientists’ [sic] influence has had influence in popular culture, notably in James Cameron’s 2009 film Avatar. The giant willow-like “Tree of Souls” was specifically inspired by Simard’s work.

No mention of a script and no mention of financing, so, it could be a while before we see the movie on Netflix, Apple+, HBO, or maybe a movie house (if they’re open by then).

I think the script may prove to be the more challenging aspect of this project. Here’s the description of Simard’s memoir (from the Finding the Mother Tree webpage on suzannesimard.com),

From the world’s leading forest ecologist who forever changed how people view trees and their connections to one another and to other living things in the forest–a moving, deeply personal journey of discovery.

About the Book

In her first book, Simard brings us into her world, the intimate world of the trees, in which she brilliantly illuminates the fascinating and vital truths – that trees are not simply the source of timber or pulp, but are a complex, interdependent circle of life; that forests are social, cooperative creatures connected through underground networks by which trees communicate their vitality and vulnerabilities with communal lives not that different from our own.

Simard writes – in inspiring, illuminating, and accessible ways – how trees, living side by side for hundreds of years, have evolved, how they perceive one another, learn and adapt their behaviors, recognize neighbors, and remember the past; how they have agency about the future; elicit warnings and mount defenses, compete and cooperate with one another with sophistication, characteristics ascribed to human intelligence, traits that are the essence of civil societies – and at the center of it all, the Mother Trees: the mysterious, powerful forces that connect and sustain the others that surround them.

How does Simard’s process of understanding trees and conceptualizing a ‘mother tree’ get put into a script for a movie that’s not a documentary or an animation?

Movies are moving pictures, yes? How do you introduce movement and action in a script heavily focused on trees, which operate on a timescale vastly different from our own?

It’s an interesting problem and I look forward to seeing how it’s resolved. I wish them good luck.

Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event

[downloaded from https://www.berggruen.org/events/ai-narratives-in-contemporary-chinese-science-fiction/]

Exciting news: Chris Eldred of the Berggruen Institute sent this notice (from his Nov. 13, 2020 email)

Renowned science fiction novelists Hao Jingfang, Chen Qiufan, and Wang Yao (Xia Jia) will be featured in a virtual event next Tuesday, and I thought their discussion may be of interest to you and your readers. The event will explore how AI is used in contemporary Chinese science fiction, and the writers’ roundtable will address questions such as: How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West? How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

Berggruen Fellow Hao Jingfang is an economist by training and an award-winning author (Hugo Award for Best Novelette). This event will be co-hosted with the University of Cambridge Leverhulme Centre for the Future of Intelligence. 

This event will be live streamed on Zoom (agenda and registration link here) on Tuesday, November 17th, from 8:30-11:50 AM GMT / 4:30-7:50 PM CST. Simultaneous English translation will be provided. 

The Berggruen Institute is offering a conversation with authors and researchers about how Chinese science fiction grapples with artificial intelligence (from the Berggruen Institute’s AI Narratives in Contemporary Chinese Science Fiction event page),

AI Narratives in Contemporary Chinese Science Fiction

November 17, 2020

Platform & Language:

Zoom (Chinese and English, with simultaneous translation)

Click here to register.

Discussion points:

1. How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West?

2. How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

About the Speakers:

WU Yan is a professor and PhD supervisor at the Humanities Center of Southern University of Science and Technology. He is a science fiction writer, vice chairman of the China Science Writers Association, recipient of the Thomas D Clareson Award of the American Science Fiction Research Association, and co-founder of the Xingyun (Nebula) Awards for Global Chinese Science Fiction. He is the author of science fictions such as Adventure of the Soul and The Sixth Day of Life and Death, academic works such as Outline of Science Fiction Literature, and textbooks such as Science and Fantasy – Training Course for Youth Imagination and Scientific Innovation.

Sanfeng is a science fiction researcher, visiting researcher of the Humanities Center of Southern University of Science and Technology, chief researcher of Shenzhen Science & Fantasy Growth Foundation, honorary assistant professor of the University of Hong Kong, Secretary-General of the World Chinese Science Fiction Association, and editor-in-chief of Nebula Science Fiction Review. His research covers the history of Chinese science fiction, development of science fiction industry, science fiction and urban development, science fiction and technological innovation, etc.

About the Event

Keynote 1 “Chinese AI Science Fiction in the Early Period of Reform and Opening-Up (1978-1983)”

(改革开放早期(1978-1983)的中国AI科幻小说)

Abstract: Science fiction on the themes of computers and robots emerged early but in a scattered manner in China. In the stories, the protagonists are largely humanlike assistants chiefly collecting data or doing daily manual labor, and this does not fall in the category of today’s artificial intelligence. Major changes took place after the reform and opening-up in 1978 in this regard. In 1979, the number of robot-themed works ballooned. By 1980, the quality of works also saw a quantum leap, and stories on the nature of artificial intelligence began to appear. At this stage, the AI works such as Spy Case Outside the Pitch, Dulles and Alice, Professor Shalom’s Misconception, and Riot on the Ziwei Island That Shocked the World describe how intelligent robots respond to activities such as adversarial ball games (note that these are not chess games), fully integrate into the daily life of humans, and launch collective riots beyond legal norms under special circumstances. The ideas that the growth of artificial intelligence requires a suitable environment, stable family relationship, social adaptation, etc. are still of important value.

Keynote 2 “Algorithm of the Soul: Narrative of AI in Recent Chinese Science Fiction”

(灵魂的算法:近期中国科幻小说中的AI叙事)

Abstract: As artificial intelligence has been applied to the fields of technology and daily life in the past decade, the AI narrative in Chinese science fiction has also seen seismic changes. On the one hand, young authors are aware that the “soul” of AI comes, to a large extent, from machine learning algorithms. As a result, their works often highlight the existence and implementation of algorithms, bringing maneuverability and credibility to the AI. On the other hand, the authors prefer to focus on the conflicts and contradictions in emotions, ethics, and morality caused by AI that penetrate into human life. If the previous AI-themed science fiction is like a distant robot fable, the recent AI narrative assumes contemporary and practical significance. This report focuses on exploring the AI-themed science fiction by several young authors (including Hao Jingfang’s [emphasis mine] The Problem of Love and Where Are You, Chen Qiufan’s Image Maker and Algorithm for Life, and Xia Jia’s Let’s Have a Talk and Shejiang, Baoshu’s Little Girl and Shuangchimu’s The Cock Prince, etc.) to delve into the breakthroughs and achievements in AI narratives.

Hao Jingfang, one of the authors mentioned in the abstract, is currently a fellow at the Berggruen Institute and she is scheduled to be a guest according to the co-host, the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI), on its Workshop: AI Narratives in Contemporary Chinese Science Fiction programme description page (I’ll try not to include too much repetitive information),

Workshop 2 – November 17, 2020

AI Narratives in Contemporary Chinese Science Fiction

Programme

16:30-16:40 CST (8:30-8:40 GMT)  Introductions

SONG Bing, Vice President, Co-Director, Berggruen Research Center, Peking University

Kanta Dihal, Postdoctoral Researcher, Project Lead on Global Narratives, Leverhulme Centre for the Future of Intelligence, University of Cambridge  

16:40-17:10 CST (8:40-9:10 GMT)  Talk 1 [Chinese AI SciFi and the early period]

17:10-17:40 CST (9:10-9:40 GMT)  Talk 2  [Algorithm of the soul]

17:40-18:10 CST (9:40-10:10 GMT)  Q&A

18:10-18:20 CST (10:10-10:20 GMT) Break

18:20-19:50 CST (10:20-11:50 GMT)  Roundtable Discussion

Host:

HAO Jingfang(郝景芳), author, researcher & Berggruen Fellow

Guests:

Baoshu (宝树), sci-fi and fantasy writer

CHEN Qiufan(陈楸帆), sci-fi writer, screenwriter & translator

Feidao(飞氘), sci-fi writer, Associate Professor in the Department of Chinese Language and Literature at Tsinghua University

WANG Yao(王瑶,pen name “Xia Jia”), sci-fi writer, Associate Professor of Chinese Literature at Xi’an Jiaotong University

Suggested Readings

ABOUT CHINESE [Science] FICTION

“What Makes Chinese Fiction Chinese?”, by Xia Jia and Ken Liu,

“The Worst of All Possible Universes and the Best of All Possible Earths: Three Body and Chinese Science Fiction”, Cixin Liu, translated by Ken Liu

Science Fiction in China: 2016 in Review

SHORT NOVELS ABOUT ROBOTS/AI/ALGORITHM:

“The Robot Who Liked to Tell Tall Tales”, by Feidao, translated by Ken Liu

“Goodnight, Melancholy”, by Xia Jia, translated by Ken Liu

“The Reunion”, by Chen Qiufan, translated by Emily Jin and Ken Liu, MIT Technology Review, December 16, 2018

“Folding Beijing”, by Hao Jingfang, translated by Ken Liu

“Let’s have a talk”, by Xia Jia

For those of us on the West Coast of North America the event times are: Tuesday, November 17, 2020, 1430 – 1750 or 2:30 – 5:50 pm. *Added On Nov.16.20 at 11:55 am PT: For anyone who can’t attend the live event, a full recording will be posted to YouTube.*

Kudos to all involved in organizing and participating in this event. It’s important to get as many viewpoints as possible on AI and its potential impacts.

Finally and for the curious, there’s another posting about Chinese science fiction here (May 31, 2019).

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with MicroSoft and IBM on AI and ethics (from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi, first here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid alsyariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz alnafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

..

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of the TV series ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
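The bit-pair mapping Prof. Heckel describes is simple enough to sketch in a few lines of Python. This is purely illustrative: the 00→A, 01→C, 10→G, 11→T assignment follows the interview’s example, but the function names are mine and the actual pipeline is, of course, far more involved.

```python
# Toy encoder/decoder for the bit-pair-to-nucleotide mapping from the interview.
BIT_PAIRS_TO_BASES = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASES_TO_BIT_PAIRS = {base: bits for bits, base in BIT_PAIRS_TO_BASES.items()}

def bits_to_dna(bits: str) -> str:
    """Encode an even-length bit string as a DNA base sequence."""
    if len(bits) % 2 != 0:
        raise ValueError("bit string must have even length")
    return "".join(BIT_PAIRS_TO_BASES[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    """Decode a DNA base sequence back into its bit string."""
    return "".join(BASES_TO_BIT_PAIRS[base] for base in seq)

print(bits_to_dna("01011100"))  # CCTA, matching the interview's example
print(dna_to_bits("CCTA"))      # 01011100
```

Running it on the interview’s example sequence 01 01 11 00 does indeed produce CCTA.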

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
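To make the redundancy idea concrete, here is a deliberately crude sketch. Heckel’s actual channel code is far more sophisticated and tailored to DNA-specific errors; this toy example uses the simplest scheme imaginable, a three-fold repetition code with majority voting, just to show how added redundancy lets corrupted symbols be recovered.

```python
# Toy repetition code: each bit is written three times; on readback,
# a majority vote over each triple corrects any single flipped symbol.
from collections import Counter

def encode_repetition(bits: str, copies: int = 3) -> str:
    """Add redundancy by repeating every symbol `copies` times."""
    return "".join(b * copies for b in bits)

def decode_repetition(coded: str, copies: int = 3) -> str:
    """Recover the original bits by majority vote over each block."""
    out = []
    for i in range(0, len(coded), copies):
        block = coded[i:i + copies]
        out.append(Counter(block).most_common(1)[0][0])
    return "".join(out)

coded = encode_repetition("1011")   # "111000111111"
corrupted = "110000111011"          # two symbols flipped during "reading"
print(decode_repetition(corrupted)) # "1011" -- both errors corrected
```

The price of this scheme is that it triples the amount of DNA needed, which is exactly why, as Heckel notes below, an efficient code adds only the absolutely necessary amount of redundancy.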

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a billionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
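The quoted figures are easy to sanity-check with a little arithmetic: 100 megabytes stored on a picogram works out to roughly 100 exabytes per gram, comfortably under the ~200 exabytes per gram theoretical ceiling Heckel mentions.

```python
# Back-of-the-envelope check of the storage density quoted in the interview.
stored_bytes = 100e6                 # 100 megabytes stored in the experiment
mass_grams = 1e-12                   # on one picogram of DNA
density = stored_bytes / mass_grams  # bytes per gram
print(density / 1e18)                # roughly 100, i.e. ~100 exabytes per gram
```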

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published online 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Science fiction, interconnectedness (globality), and pandemics

Mayurika Chakravorty of Carleton University’s Department of English in Ottawa, Ontario, Canada argues that the COVID-19 pandemic illustrates how everything is connected (interconnectedness or globality), a sensibility science fiction has long cultivated, in her July 19, 2020 essay on The Conversation (h/t July 20, 2020 news item on phys.org), Note: Links have been removed,

In the early days of the coronavirus outbreak, a theory widely shared on social media suggested that a science fiction text, Dean Koontz’s 1981 science fiction novel, The Eyes of Darkness, had predicted the coronavirus pandemic with uncanny precision. COVID-19 has held the entire world hostage, producing a resemblance to the post-apocalyptic world depicted in many science fiction texts. Canadian author Margaret Atwood’s classic 2003 novel Oryx and Crake refers to a time when “there was a lot of dismay out there, and not enough ambulances” — a prediction of our current predicament.

However, the connection between science fiction and pandemics runs deeper. They are linked by a perception of globality, what sociologist Roland Robertson defines as “the consciousness of the world as a whole.”

Chakravorty goes on to make a compelling case (from her July 19, 2020 essay Note: Links have been removed),

In his 1992 survey of the history of telecommunications, How the World Was One, Arthur C. Clarke alludes to the famed historian Arnold Toynbee’s lecture entitled “The Unification of the World.” In the lecture, delivered at the University of London in 1947, Toynbee envisions a “single planetary society” and notes how “despite all the linguistic, religious and cultural barriers that still sunder nations and divide them into yet smaller tribes, the unification of the world has passed the point of no return.”

Science fiction writers have, indeed, always embraced globality. In interplanetary texts, humans of all nations, races and genders have to come together as one people in the face of alien invasions. Facing an interplanetary encounter, bellicose nations have to reluctantly eschew political rivalries and collaborate on a global scale, as in Denis Villeneuve’s 2016 film, Arrival.

Globality is central to science fiction. To be identified as an Earthling, one has to transcend the local and the national, and sometimes, even the global, by embracing a larger planetary consciousness.

In The Left Hand of Darkness, Ursula K. Le Guin conceptualizes the Ekumen, which comprises 83 habitable planets. The idea of the Ekumen was borrowed from Le Guin’s father, the noted cultural anthropologist Alfred L. Kroeber. Kroeber had, in a 1945 paper, introduced the concept (from Greek oikoumene) to represent a “historic culture aggregate.” Originally, Kroeber used oikoumene to refer to the “entire inhabited world,” as he traced back human culture to one single people. Le Guin then adopted this idea of a common origin of shared humanity in her novel.

…

Regarding Canada’s response to the crisis [COVID-19], researchers have noted both the immorality and futility of a nationalistic “Canada First” approach.

If you have time, I recommend reading Chakravorty’s July 19, 2020 essay in its entirety.