Category Archives: science philosophy

Bruno Latour, science, and the 2021 Kyoto Prize in Arts and Philosophy: Commemorative Lecture

The Kyoto Prize (Wikipedia entry) was first given out in 1985. These days (I checked a currency converter today, November 15, 2021), the Inamori Foundation, which administers the prize, gives out 100 million yen per prize, worth about $1,098,000 CAD or $876,800 USD.

Here’s more about the prize from the November 9, 2021 Inamori Foundation press release on EurekAlert,

The Kyoto Prize is an international award of Japanese origin, presented to individuals who have made significant contributions to the progress of science, the advancement of civilization, and the enrichment and elevation of the human spirit. The Prize is granted in the three categories of Advanced Technology, Basic Sciences, and Arts and Philosophy, each of which comprises four fields, making a total of 12 fields. Every year, one Prize is awarded in each of the three categories with prize money of 100 million yen per category.

One of the distinctive features of the Kyoto Prize is that it recognizes both “science” and “arts and philosophy” fields. This is because of its founder Kazuo Inamori’s conviction that the future of humanity can be assured only when there is a balance between scientific development and the enrichment of the human spirit.

The recipient for arts and philosophy, Bruno Latour, has been mentioned here before (in a July 15, 2020 posting titled, ‘Architecture, the practice of science, and meaning’),

The 1979 book, Laboratory Life: the Social Construction of Scientific Facts by Bruno Latour and Steve Woolgar immediately came to mind on reading about a new book (The New Architecture of Science: Learning from Graphene) linking architecture to the practice of science (research on graphene). It turns out that one of the authors studied with Latour. (For more about Laboratory Life see: Bruno Latour’s Wikipedia entry; scroll down to Main Works)

Back to Latour and his prize from the November 9, 2021 Inamori Foundation press release,

Bruno Latour, Professor Emeritus at Paris Institute of Political Studies (Sciences Po), received the 2021 Kyoto Prize in Arts and Philosophy for radically re-examining “modernity” by developing a philosophy that focuses on interactions between technoscience and social structure. Latour’s Commemorative Lecture “How to React to a Change in Cosmology” will be released on November 10, 2021, 10:00 AM JST at the 2021 Kyoto Prize Special Website.

“Viruses–we don’t even know if viruses are our enemies or our friends!” says Latour in his lecture. By using the ongoing Covid epidemic as a sort of lead, Latour discusses the shift in cosmology, a structure that distributes agencies around. He then suggests a “new project” we have to work on now, which he assumes is very different from the modernist project.

Bruno Latour has revolutionized the conventional view of science by treating nature, humans, laboratory equipment, and other entities as equal actors, and describing technoscience as the hybrid network of these actors. His philosophy re-examines “modernity” based on the dualism of nature and society. He has a large influence across disciplines, with his multifaceted activities that include proposals regarding global environmental issues.

Latour and the other two 2021 Kyoto Prize laureates are introduced on the 2021 Kyoto Prize Special Website with information about their work, profiles, and three-minute introduction videos. This year’s Kyoto Prize in Advanced Technology went to Andrew Chi-Chih Yao, Professor at the Institute for Interdisciplinary Information Sciences at Tsinghua University, and the Kyoto Prize in Basic Sciences went to Robert G. Roeder, Arnold and Mabel Beckman Professor of Biochemistry and Molecular Biology at The Rockefeller University.

The folks at the Kyoto Prize have made a three-minute video introduction to Bruno Latour available,

For more information you can check out the Inamori Foundation website. There are two Kyoto Prize websites: the 2021 Kyoto Prize Special Website and the Kyoto Prize website. These are all English-language websites and, if you have the language skills and the interest, it is possible to toggle (upper right-hand side) to the Japanese-language versions.

Finally, there’s a dedicated Bruno Latour webpage on the 2021 Kyoto Prize Special Website, and Bruno Latour has his own website where French- and English-language items are mixed together, though the majority of the content seems to be in English.

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about its own version (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is short for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing: the words are sometimes used as synonyms and sometimes as distinct terms. We all shift usage this way in conversation, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle doesn’t cite any record of accuracy for his predictions, but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
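Since NFTs keep coming up, here’s my attempt at making the “permanent receipt” idea from the excerpt above concrete. What follows is a toy, in-memory sketch in Python, not how any real blockchain or the ERC-721 standard actually works; the token ID, owner addresses, and ledger structure are all hypothetical stand-ins of my own invention,

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Token:
    token_id: int
    item: str  # e.g., "virtual shirt" bought in Metaverse Platform A

@dataclass
class ToyLedger:
    owners: dict = field(default_factory=dict)   # token_id -> owner address
    history: list = field(default_factory=list)  # append-only "receipts"

    def mint(self, token: Token, owner: str) -> None:
        """Create the token and record its first owner."""
        self.owners[token.token_id] = owner
        self.history.append(("mint", token.token_id, owner))

    def transfer(self, token_id: int, new_owner: str) -> None:
        """Hand the token to a new owner, leaving a permanent trace."""
        self.owners[token_id] = new_owner
        self.history.append(("transfer", token_id, new_owner))

    def owner_of(self, token_id: int) -> str:
        return self.owners[token_id]

# Platform A mints the shirt; Platform B only needs read access to the same
# ledger to verify ownership and "redeem" the shirt as its own 3D model.
ledger = ToyLedger()
ledger.mint(Token(token_id=1, item="virtual shirt"), owner="0xALICE")
ledger.transfer(1, "0xBOB")
print(ledger.owner_of(1), ledger.history)

The point of the sketch is the append-only history list: any platform that can read the ledger can verify who owns token 1, which is the interoperability benefit being claimed.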

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.

Facebook’s metaverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned, Facebook is once again having some credibility issues, according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement not mentioned in the other two articles about the rebranding (Note: A link has been removed),

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email from Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regard to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international law firm Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?
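To make the firm’s “hungry Metaverse participant” example a little more concrete, here’s a minimal sketch, in Python, of the kind of background inference being described. The event names, dwell-time threshold, and the “serve advert” call are all my hypothetical inventions; the point is that the participant never actively searches for anything, the system simply watches,

from collections import Counter

# passively logged gaze events: (object_category, dwell_time_in_seconds)
gaze_log = [
    ("bakery_window", 4.2),
    ("cafe_menu", 3.1),
    ("shoe_shop", 0.4),
    ("bakery_window", 5.0),
]

FOOD_OBJECTS = {"bakery_window", "cafe_menu", "restaurant_window"}

def infer_hunger(log, min_dwell=2.0, min_hits=2):
    """Count lingering glances at food-related objects; crude on purpose."""
    hits = Counter(
        obj for obj, dwell in log if obj in FOOD_OBJECTS and dwell >= min_dwell
    )
    return sum(hits.values()) >= min_hits

if infer_hunger(gaze_log):
    print("serve food advert")  # stand-in for an ad-targeting call

Even this toy version shows why the controller/processor question above matters: someone has to decide that gaze events get logged, that “hunger” gets inferred, and that an advert gets served.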

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content but can’t interact with the environment; MR is a mix of virtual reality and reality, and it creates virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video (which you can see above) and of the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
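For those curious about what “pose-aware object substitution” might look like in code, here’s a much-simplified sketch of the pipeline as I understand it from the abstract: detect objects in the camera frame, estimate each object’s 3D pose, then render a themed substitute at that same pose. It’s written in Python with placeholder types and a placeholder renderer of my own invention; the actual system runs real detectors and renderers on mobile hardware,

from dataclasses import dataclass

@dataclass
class Pose6DoF:
    position: tuple  # (x, y, z) in metres, camera coordinates
    rotation: tuple  # (roll, pitch, yaw) in radians

@dataclass
class Detection:
    label: str       # e.g., "car"
    pose: Pose6DoF

# hypothetical theme: which virtual asset replaces which real-world object
SUBSTITUTIONS = {"car": "horse_cart.glb", "truck": "dragon.glb"}

def render_asset(frame, asset, pose):
    # placeholder: a real system would mask out the original object and
    # rasterize the 3D model at the same 6-DoF pose so it sits in the scene
    print(f"render {asset} at {pose.position}")
    return frame

def compose_frame(camera_frame, detections):
    """Video see-through: start from the real image, overlay substitutes."""
    frame = camera_frame
    for det in detections:
        asset = SUBSTITUTIONS.get(det.label)
        if asset:
            frame = render_asset(frame, asset, det.pose)
    return frame

compose_frame("frame_0", [Detection("car", Pose6DoF((1.0, 0.0, 4.0), (0.0, 0.0, 0.0)))])

The “pose-aware” part is the whole trick: because the substitute is rendered at the detected object’s estimated position and orientation, the scene stays plausible even in environments the system has never seen.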

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration among Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find The Infinite exhibition here (hopefully, you’re in Montreal) and Space Explorers: The ISS Experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, in which various people and companies are attempting to create a multiplicity of metaverses, or the metaverse that effectively replaces the internet. This metaverse can include any or all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Cortical spheroids (like mini-brains) could unlock (larger) brain’s mysteries

A March 19, 2021 Northwestern University news release on EurekAlert announces the creation of a device designed to monitor brain organoids (for anyone unfamiliar with brain organoids, there’s more information after the news release),

A team of scientists, led by researchers at Northwestern University, Shirley Ryan AbilityLab and the University of Illinois at Chicago (UIC), has developed novel technology promising to increase understanding of how brains develop, and offer answers on repairing brains in the wake of neurotrauma and neurodegenerative diseases.

Their research is the first to combine the most sophisticated 3-D bioelectronic systems with highly advanced 3-D human neural cultures. The goal is to enable precise studies of how human brain circuits develop and repair themselves in vitro. The study is the cover story for the March 19 [March 17, 2021 according to the citation] issue of Science Advances.

The cortical spheroids used in the study, akin to “mini-brains,” were derived from human-induced pluripotent stem cells. Leveraging a 3-D neural interface system that the team developed, scientists were able to create a “mini laboratory in a dish” specifically tailored to study the mini-brains and collect different types of data simultaneously. Scientists incorporated electrodes to record electrical activity. They added tiny heating elements to either keep the brain cultures warm or, in some cases, intentionally overheated the cultures to stress them. They also incorporated tiny probes — such as oxygen sensors and small LED lights — to perform optogenetic experiments. For instance, they introduced genes into the cells that allowed them to control the neural activity using different-colored light pulses.

This platform then enabled scientists to perform complex studies of human tissue without directly involving humans or performing invasive testing. In theory, any person could donate a limited number of their cells (e.g., blood sample, skin biopsy). Scientists can then reprogram these cells to produce a tiny brain spheroid that shares the person’s genetic identity. The authors believe that, by combining this technology with a personalized medicine approach using human stem cell-derived brain cultures, they will be able to glean insights faster and generate better, novel interventions.

“The advances spurred by this research will offer a new frontier in the way we study and understand the brain,” said Shirley Ryan AbilityLab’s Dr. Colin Franz, co-lead author on the paper who led the testing of the cortical spheroids. “Now that the 3-D platform has been developed and validated, we will be able to perform more targeted studies on our patients recovering from neurological injury or battling a neurodegenerative disease.”

Yoonseok Park, postdoctoral fellow at Northwestern University and co-lead author, added, “This is just the beginning of an entirely new class of miniaturized, 3-D bioelectronic systems that we can construct to expand the capacity of the regenerative medicine field. For example, our next generation of device will support the formation of even more complex neural circuits from brain to muscle, and increasingly dynamic tissues like a beating heart.”

Current electrode arrays for tissue cultures are 2-D, flat and unable to match the complex structural designs found throughout nature, such as those found in the human brain. Moreover, even when a system is 3-D, it is extremely challenging to incorporate more than one type of material into a small 3-D structure. With this advance, however, an entire class of 3-D bioelectronics devices has been tailored for the field of regenerative medicine.

“Now, with our small, soft 3-D electronics, the capacity to build devices that mimic the complex biological shapes found in the human body is finally possible, providing a much more holistic understanding of a culture,” said Northwestern’s John Rogers, who led the technology development using technology similar to that found in phones and computers. “We no longer have to compromise function to achieve the optimal form for interfacing with our biology.”

As a next step, scientists will use the devices to better understand neurological disease, test drugs and therapies that have clinical potential, and compare different patient-derived cell models. This understanding will then enable a better grasp of individual differences that may account for the wide variation of outcomes seen in neurological rehabilitation.

“As scientists, our goal is to make laboratory research as clinically relevant as possible,” said Kristen Cotton, research assistant in Dr. Franz’s lab. “This 3-D platform opens the door to new experiments, discovery and scientific advances in regenerative neurorehabilitation medicine that have never been possible.”

Caption: Three-dimensional multifunctional neural interfaces for cortical spheroids and engineered assembloids. Credit: Northwestern University
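To help keep track of what “collect different types of data simultaneously” means in the release above, I’ve sketched a toy data model in Python for one recording session: a single spheroid interfaced with electrodes, heaters, oxygen sensors, and LEDs at once. All of the names, units, and values below are my illustrative assumptions, not the team’s actual software,

from dataclasses import dataclass, field

@dataclass
class Channel:
    kind: str  # "electrode", "heater", "oxygen_sensor", "led"
    samples: list = field(default_factory=list)

@dataclass
class SpheroidSession:
    spheroid_id: str
    channels: dict = field(default_factory=dict)  # kind -> Channel

    def record(self, kind: str, value: float) -> None:
        """Append one reading (or stimulus event) to the named channel."""
        self.channels.setdefault(kind, Channel(kind)).samples.append(value)

session = SpheroidSession("patient-derived-01")
session.record("electrode", -62.0)     # field potential in millivolts (assumed)
session.record("oxygen_sensor", 19.5)  # percent O2 near the culture (assumed)
session.record("led", 470.0)           # optogenetic pulse wavelength in nm
print({kind: len(ch.samples) for kind, ch in session.channels.items()})

The interesting design point, as I read the release, is exactly this multiplexing: the same tiny 3-D interface both stimulates (heat, light) and records (electrical activity, oxygen) from one culture at the same time.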

As for what brain organoids might be, Carl Zimmer in an Aug. 29, 2019 article for the New York Times provides an explanation,

Organoids Are Not Brains. How Are They Making Brain Waves?

Two hundred and fifty miles over Alysson Muotri’s head, a thousand tiny spheres of brain cells were sailing through space.

The clusters, called brain organoids, had been grown a few weeks earlier in the biologist’s lab here at the University of California, San Diego. He and his colleagues altered human skin cells into stem cells, then coaxed them to develop as brain cells do in an embryo.

The organoids grew into balls about the size of a pinhead, each containing hundreds of thousands of cells in a variety of types, each type producing the same chemicals and electrical signals as those cells do in our own brains.

In July, NASA packed the organoids aboard a rocket and sent them to the International Space Station to see how they develop in zero gravity.

Now the organoids were stowed inside a metal box, fed by bags of nutritious broth. “I think they are replicating like crazy at this stage, and so we’re going to have bigger organoids,” Dr. Muotri said in a recent interview in his office overlooking the Pacific.

What, exactly, are they growing into? That’s a question that has scientists and philosophers alike scratching their heads.

On Thursday, Dr. Muotri and his colleagues reported that they have recorded simple brain waves in these organoids. In mature human brains, such waves are produced by widespread networks of neurons firing in synchrony. Particular wave patterns are linked to particular forms of brain activity, like retrieving memories and dreaming.

As the organoids mature, the researchers also found, the waves change in ways that resemble the changes in the developing brains of premature babies.

“It’s pretty amazing,” said Giorgia Quadrato, a neurobiologist at the University of Southern California who was not involved in the new study. “No one really knew if that was possible.”

But Dr. Quadrato stressed it was important not to read too much into the parallels. What she, Dr. Muotri and other brain organoid experts build are clusters of replicating brain cells, not actual brains.

If you have the time, I recommend reading Zimmer’s article in its entirety. Perhaps not coincidentally, Zimmer has an excerpt titled “Lab-Grown Brain Organoids Aren’t Alive. But They’re Not Not Alive, Either.” published on Slate.com,

From Life’s Edge: The Search For What It Means To Be Alive by Carl Zimmer, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2021 by Carl Zimmer.

Cleber Trujillo led me to a windowless room banked with refrigerators, incubators, and microscopes. He extended his blue-gloved hands to either side and nearly touched the walls. “This is where we spend half our day,” he said.

In that room Trujillo and a team of graduate students raised a special kind of life. He opened an incubator and picked out a clear plastic box. Raising it above his head, he had me look up at it through its base. Inside the box were six circular wells, each the width of a cookie and filled with what looked like watered-down grape juice. In each well 100 pale globes floated, each the size of a housefly head.

Getting back to the research about monitoring brain organoids, here’s a link to and a citation for the paper about cortical spheroids,

Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids by Yoonseok Park, Colin K. Franz, Hanjun Ryu, Haiwen Luan, Kristen Y. Cotton, Jong Uk Kim, Ted S. Chung, Shiwei Zhao, Abraham Vazquez-Guardado, Da Som Yang, Kan Li, Raudel Avila, Jack K. Phillips, Maria J. Quezada, Hokyung Jang, Sung Soo Kwak, Sang Min Won, Kyeongha Kwon, Hyoyoung Jeong, Amay J. Bandodkar, Mengdi Han, Hangbo Zhao, Gabrielle R. Osher, Heling Wang, KunHyuck Lee, Yihui Zhang, Yonggang Huang, John D. Finan and John A. Rogers. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabf9153 DOI: 10.1126/sciadv.abf9153

This paper appears to be open access.

According to a March 22, 2021 posting on the Shirley Ryan AbilityLab website, the paper is featured on the front cover of Science Advances (vol. 7 no. 12).

A look back at 2020 on this blog and a welcome to 2021

Things past

A year later I still don’t know what came over me, but I got the idea that I could write a 10-year (2010 – 2019) review of science culture in Canada during the last few days of 2019. Somehow, two and a half months later, I managed to publish my 25,000+ word multi-part series.

Plus,

Sadly, 2020 started on a somber note with this January 13, 2020 posting, In memory of those in the science, engineering, or technology communities returning to or coming to live or study in Canada on Flight PS752.

COVID-19 was mentioned and featured here a number of times throughout the year. I’m highlighting two of those postings. The first is a June 24, 2020 posting titled, Tiny sponges lure coronavirus away from lung cells. It’s a therapeutic approach that is not a vaccine but a way of neutralizing the virus. The idea is that the nanosponge is coated in the material that the virus seeks in a human cell. Once the virus locks onto the sponge, it is unable to seek out cells. If I remember rightly, the sponges along with the virus are disposed of by the body’s usual processes.

The second COVID-19 posting I’m highlighting is my first ever accepted editorial opinion by the Canadian Science Policy Centre (CSPC). I republished the piece here in a May 15, 2020 posting, which included all of my references. However, the magazine version is more attractively displayed in the CSPC Featured Editorial Series Volume 1, Issue 2, May 2020 PDF on pp. 31-2.

Artist Joseph Nechvatal reached out to me earlier in the year regarding his viral symphOny (2006-2008), a 1 hour 40 minute collaborative electronic noise music symphony. It was featured in an April 7, 2020 posting, which seemed strangely à propos during a pandemic even though the work was focused on viral artificial life. You can access it for free at https://archive.org/details/ViralSymphony, but the Internet Archive, where it is stored, is requesting donations.

Also on a vaguely related COVID-19 note, there’s my December 7, 2020 posting titled, Digital aromas? And a potpourri of ‘scents and sensibility’. As any regular readers may know, I have a longstanding interest in scent and fragrances. The COVID-19 part of the posting (it’s not about losing your sense of smell) is in the subsection titled, Smelling like an old book. Apparently some folks are missing the smell of bookstores and Powell’s Books has responded to that need with a new fragrance.

For anyone who may have missed it, I wrote an update of the CRISPR twin affair in my July 28, 2020 posting, titled, July 2020 update on Dr. He Jiankui (the CRISPR twins) situation.

Finishing off with 2020, I wrote a commentary (mostly focused on the Canada chapter) about a book titled, Communicating Science: A Global Perspective in my December 10, 2020 posting. The book offers science communication perspectives from 39 different countries.

Things future

I have no doubt there will be delights ahead but, as they are in the realm of discovery, they are currently unknown.

My future plans include a posting about trust and governance. This has come about since writing my Dec. 29, 2020 posting titled, “Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions” and stumbling across a reference to a December 15, 2020 article by Dr. Andrew Maynard titled, Why Trustworthiness Matters in Building Global Futures. Maynard’s focus was on a newly published report titled, Trust & Tech Governance.

I will also be considering the problematic aspects of science communication and my own shortcomings. On the heels of reading more than usually forthright discussions of racism in Canada across multiple media platforms, I was horrified to discover I had featured, without any caveats, work by a man whose beliefs about race were deeply problematic. He was a eugenicist, as well as a zoologist, naturalist, philosopher, physician, professor, marine biologist, and artist who coined many terms in biology, including ecology, phylum, phylogeny, and Protista; see his Wikipedia entry.

A Dec. 23, 2020 news release on EurekAlert (Scientists at Tel Aviv University develop new gene therapy for deafness) and a December 2020 article by Sarah Zhang for The Atlantic about prenatal testing and who gets born have me wanting to further explore the field of how genetic testing and therapies will affect our concepts of ‘normality’. Fingers crossed I’ll be able to get Dr. Gregor Wolbring to answer a few questions for publication here. (Gregor is a tenured associate professor [in Alberta, Canada] at the University of Calgary’s Cumming School of Medicine and a scholar in the field of ‘ableism’. He is deeply knowledgeable about notions of ability vs disability.)

As 2021 looms, I’m hopeful that I’ll be featuring more art/sci (or sciart) postings, which is my segue to a more hopeful note about what 2021 will bring us,

The Knobbed Russet has a rough exterior, with creamy insides. Photo courtesy of William Mullan.

It’s an apple! This is one of the many images embedded in Annie Ewbank’s January 6, 2020 article about rare and beautiful apples for Atlas Obscura (featured on getpocket.com),

In early 2020, inside a bright Brooklyn gallery that is plastered in photographs of apples, William Mullan is being besieged with questions.

A writer is researching apples for his novel set in post-World War II New York. An employee of a fruit-delivery company, who covetously eyes the round table on which Mullan has artfully arranged apples, asks where to buy his artwork.

But these aren’t your Granny Smith’s apples. A handful of Knobbed Russets slumping on the table resemble rotting masses. Despite their brown, wrinkly folds, they’re ripe, with clean white interiors. Another, the small Roberts Crab, when sliced by Mullan through the middle to show its vermillion flesh, looks less like an apple than a Bing cherry. The entire lineup consists of apples assembled by Mullan, who, by publishing his fruit photographs in a book and on Instagram, is putting the glorious diversity of apples in the limelight.

Do go and enjoy! Happy 2021!

A computer simulation inside a computer simulation?

Stumbling across an entry from National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory from Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality; A journal of self-discovery, healing, growth, and growing pains,

Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.

—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]

Now, onto what provoked memory of that phrase.

The first computer simulation: “Agence”

Here’s a description of “Agence” and its creators from an August 11, 2020 National Film Board of Canada (NFB) news release,

Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.

Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.

About the experience

Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.

About the creators

Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.

Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB). 

About Transitional Forms

Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.

About the NFB

David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.

The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,

Now for the second computer simulation (the feint within the feint).

Are we living in a computer simulation?

According to some thinkers in the field, the chances are about 50/50 that we are computer simulations, which makes “Agence” a particularly piquant experience.

An October 13, 2020 article ‘Do We Live in a Simulation? Chances are about 50 – 50’ by Anil Ananthaswamy for Scientific American poses the question with an answer that’s unexpectedly uncertain (Note: Links have been removed),

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)

In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.

Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.

For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to go in thinking about the implications (the feint within the feint within the feint). Still, it seems that the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe, as a way to have a laugh at our own absurdity.

Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event

[downloaded from https://www.berggruen.org/events/ai-narratives-in-contemporary-chinese-science-fiction/]

Exciting news: Chris Eldred of the Berggruen Institute sent this notice (from his Nov. 13, 2020 email)

Renowned science fiction novelists Hao Jingfang, Chen Qiufan, and Wang Yao (Xia Jia) will be featured in a virtual event next Tuesday, and I thought their discussion may be of interest to you and your readers. The event will explore how AI is used in contemporary Chinese science fiction, and the writers’ roundtable will address questions such as: How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West? How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

Berggruen Fellow Hao Jingfang is an economist by training and an award-winning author (Hugo Award for Best Novelette). This event will be co-hosted with the University of Cambridge Leverhulme Centre for the Future of Intelligence. 

This event will be live streamed on Zoom (agenda and registration link here) on Tuesday, November 17th, from 8:30-11:50 AM GMT / 4:30-7:50 PM CST. Simultaneous English translation will be provided. 

The Berggruen Institute is offering a conversation with authors and researchers about how Chinese science fiction grapples with artificial intelligence (from the Berggruen Institute’s AI Narratives in Contemporary Chinese Science Fiction event page),

AI Narratives in Contemporary Chinese Science Fiction

November 17, 2020

Platform & Language:

Zoom (Chinese and English, with simultaneous translation)

Click here to register.

Discussion points:

1. How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West?

2. How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

About the Speakers:

WU Yan is a professor and PhD supervisor at the Humanities Center of Southern University of Science and Technology. He is a science fiction writer, vice chairman of the China Science Writers Association, recipient of the Thomas D Clareson Award of the American Science Fiction Research Association, and co-founder of the Xingyun (Nebula) Awards for Global Chinese Science Fiction. He is the author of science fiction works such as Adventure of the Soul and The Sixth Day of Life and Death, academic works such as Outline of Science Fiction Literature, and textbooks such as Science and Fantasy – Training Course for Youth Imagination and Scientific Innovation.

Sanfeng is a science fiction researcher, visiting researcher of the Humanities Center of Southern University of Science and Technology, chief researcher of Shenzhen Science & Fantasy Growth Foundation, honorary assistant professor of the University of Hong Kong, Secretary-General of the World Chinese Science Fiction Association, and editor-in-chief of Nebula Science Fiction Review. His research covers the history of Chinese science fiction, development of science fiction industry, science fiction and urban development, science fiction and technological innovation, etc.

About the Event

Keynote 1 “Chinese AI Science Fiction in the Early Period of Reform and Opening-Up (1978-1983)”

(改革开放早期(1978-1983)的中国AI科幻小说)

Abstract: Science fiction on the themes of computers and robots emerged early but in a scattered manner in China. In the stories, the protagonists are largely humanlike assistants chiefly collecting data or doing daily manual labor, and this does not fall in the category of today’s artificial intelligence. Major changes took place after the reform and opening-up in 1978 in this regard. In 1979, the number of robot-themed works ballooned. By 1980, the quality of works also saw a quantum leap, and stories on the nature of artificial intelligence began to appear. At this stage, the AI works such as Spy Case Outside the Pitch, Dulles and Alice, Professor Shalom’s Misconception, and Riot on the Ziwei Island That Shocked the World describe how intelligent robots respond to activities such as adversarial ball games (note that these are not chess games), fully integrate into the daily life of humans, and launch collective riots beyond legal norms under special circumstances. The ideas that the growth of artificial intelligence requires a suitable environment, stable family relationship, social adaptation, etc. are still of important value.

Keynote 2 “Algorithm of the Soul: Narrative of AI in Recent Chinese Science Fiction”

(灵魂的算法:近期中国科幻小说中的AI叙事)

Abstract: As artificial intelligence has been applied to the fields of technology and daily life in the past decade, the AI narrative in Chinese science fiction has also seen seismic changes. On the one hand, young authors are aware that the “soul” of AI comes, to a large extent, from machine learning algorithms. As a result, their works often highlight the existence and implementation of algorithms, bringing maneuverability and credibility to the AI. On the other hand, the authors prefer to focus on the conflicts and contradictions in emotions, ethics, and morality caused by AI that penetrate into human life. If the previous AI-themed science fiction is like a distant robot fable, the recent AI narrative assumes contemporary and practical significance. This report focuses on exploring the AI-themed science fiction by several young authors (including Hao Jingfang’s [emphasis mine] The Problem of Love and Where Are You, Chen Qiufan’s Image Maker and Algorithm for Life, and Xia Jia’s Let’s Have a Talk and Shejiang, Baoshu’s Little Girl and Shuangchimu’s The Cock Prince, etc.) to delve into the breakthroughs and achievements in AI narratives.

Hao Jingfang, one of the authors mentioned in the abstract, is currently a fellow at the Berggruen Institute and she is scheduled to be a guest according to the programme description on the co-host’s page, the University of Cambridge Leverhulme Centre for the Future of Intelligence (CFI) Workshop: AI Narratives in Contemporary Chinese Science Fiction (I’ll try not to include too much repetitive information),

Workshop 2 – November 17, 2020

AI Narratives in Contemporary Chinese Science Fiction

Programme

16:30-16:40 CST (8:30-8:40 GMT)  Introductions

SONG Bing, Vice President, Co-Director, Berggruen Research Center, Peking University

Kanta Dihal, Postdoctoral Researcher, Project Lead on Global Narratives, Leverhulme Centre for the Future of Intelligence, University of Cambridge  

16:40-17:10 CST (8:40-9:10 GMT)  Talk 1 [Chinese AI SciFi and the early period]

17:10-17:40 CST (9:10-9:40 GMT)  Talk 2  [Algorithm of the soul]

17:40-18:10 CST (9:40-10:10 GMT)  Q&A

18:10-18:20 CST (10:10-10:20 GMT) Break

18:20-19:50 CST (10:20-11:50 GMT)  Roundtable Discussion

Host:

HAO Jingfang(郝景芳), author, researcher & Berggruen Fellow

Guests:

Baoshu (宝树), sci-fi and fantasy writer

CHEN Qiufan(陈楸帆), sci-fi writer, screenwriter & translator

Feidao(飞氘), sci-fi writer, Associate Professor in the Department of Chinese Language and Literature at Tsinghua University

WANG Yao(王瑶,pen name “Xia Jia”), sci-fi writer, Associate Professor of Chinese Literature at Xi’an Jiaotong University

Suggested Readings

ABOUT CHINESE [Science] FICTION

“What Makes Chinese Fiction Chinese?”, by Xia Jia and Ken Liu,

“The Worst of All Possible Universes and the Best of All Possible Earths: Three Body and Chinese Science Fiction”, Cixin Liu, translated by Ken Liu

Science Fiction in China: 2016 in Review

SHORT NOVELS ABOUT ROBOTS/AI/ALGORITHM:

“The Robot Who Liked to Tell Tall Tales”, by Feidao, translated by Ken Liu

“Goodnight, Melancholy”, by Xia Jia, translated by Ken Liu

“The Reunion”, by Chen Qiufan, translated by Emily Jin and Ken Liu, MIT Technology Review, December 16, 2018

“Folding Beijing”, by Hao Jingfang, translated by Ken Liu

“Let’s have a talk”, by Xia Jia

For those of us on the West Coast of North America, the event times are: Tuesday, November 17, 2020, 0030 – 0350 or 12:30 – 3:50 am PST. (Note: the CST given above is China Standard Time, GMT+8, not North American Central Time.) *Added On Nov.16.20 at 11:55 am PT: For anyone who can’t attend the live event, a full recording will be posted to YouTube.*

Kudos to all involved in organizing and participating in this event. It’s important to get as many viewpoints as possible on AI and its potential impacts.

Finally and for the curious, there’s another posting about Chinese science fiction here (May 31, 2019).

Two cultures: the open science movement and the reproducibility movement

It’s C. P. Snow who comes to mind on seeing the words ‘science and two cultures’ (for anyone unfamiliar with the lecture and/or book see The Two Cultures Wikipedia entry).

This Sept. 14, 2020 news item on phys.org puts forward an entirely different concept concerning two cultures and science (Note: Links have been removed),

In the world of scientific research today, there’s a revolution going on—over the last decade or so, scientists across many disciplines have been seeking to improve the workings of science and its methods.

To do this, scientists are largely following one of two paths: the movement for reproducibility and the movement for open science. Both movements aim to create centralized archives for data, computer code and other resources, but from there, the paths diverge. The movement for reproducibility calls on scientists to reproduce the results of past experiments to verify earlier results, while open science calls on scientists to share resources so that future research can build on what has been done, ask new questions and advance science.

A Sept. 14, 2020 Indiana University (IU) news release (also on EurekAlert), which originated the news item, explains the research findings, which unexpectedly (for me) led to some conclusions about diversity with regard to gender in particular,

Now, an international research team led by IU’s Mary Murphy, Amanda Mejia, Jorge Mejia, Yan Xiaoran, Patty Mabry, Susanne Ressl, Amanda Diekman, and Franco Pestilli, finds the two movements do more than diverge. They have very distinct cultures, with two distinct literatures produced by two groups of researchers with little crossover. Their investigation also suggests that one of the movements — open science — promotes greater equity, diversity, and inclusivity. Their findings were recently reported in the Proceedings of the National Academy of Sciences [PNAS].

The team of researchers on the study, whose fields range widely – from social psychology, network science, neuroscience, structural biology, biochemistry, statistics, business, and education, among others – were taken by surprise by the results.

“The two movements have very few crossovers, shared authors or collaborations,” said Murphy. “They operate relatively independently. And this distinction between the two approaches is replicated across all scientific fields we examined.”

In other words, whether in biology, psychology or physics, scientists working in open science participate in a different scientific culture than those working within the reproducibility culture, even if they work in the same disciplinary field. And which culture a scientist works in determines a lot about access and participation, particularly for women.

IU cognitive scientist Richard Shiffrin, who has previously been involved in efforts to improve science but did not participate in the current study, says the new study by Murphy and her colleagues provides a remarkable look into the way that current science operates. “There are two quite distinct cultures, one more inclusive, that promotes transparency of reporting and open science, and another, less inclusive, that promotes reproducibility as a remedy to the current practice of science,” he said.

A Tale of Two Sciences

To investigate the fault lines between the two movements, the team, led by network scientists Xiaoran Yan and Patricia Mabry, first conducted a network analysis of papers published from 2010-2017 identified with one of the two movements. The analysis showed that even though both movements span widely across STEM fields, the authors within them occupy two largely distinct networks. Authors who publish open science research, in other words, rarely produce research within reproducibility, and very few reproducibility researchers conduct open science research.
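For readers who like to see an idea in code, here’s a minimal sketch of the kind of co-authorship overlap check the network analysis describes. To be clear, the paper list, author names, and the Jaccard measure below are my own illustrative inventions, not the study’s data or method,

```python
# Toy co-authorship network; the papers and names are invented placeholders,
# not the study's 2010-2017 corpus.
from itertools import combinations
import networkx as nx

papers = [
    ("open_science", ["Ada", "Ben", "Cam"]),
    ("open_science", ["Cam", "Dee"]),
    ("reproducibility", ["Eve", "Finn"]),
    ("reproducibility", ["Finn", "Gus", "Hana"]),
]

G = nx.Graph()
for movement, authors in papers:
    for author in authors:
        G.add_node(author)
        G.nodes[author].setdefault("movements", set()).add(movement)
    for a, b in combinations(authors, 2):  # connect every pair of co-authors
        G.add_edge(a, b)

open_authors = {n for n, d in G.nodes(data=True) if "open_science" in d["movements"]}
repro_authors = {n for n, d in G.nodes(data=True) if "reproducibility" in d["movements"]}

# Few (here, zero) crossover authors means two largely distinct networks.
crossover = open_authors & repro_authors
jaccard = len(crossover) / len(open_authors | repro_authors)
print(f"Crossover authors: {sorted(crossover)}; Jaccard overlap = {jaccard:.2f}")
```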

Next, information systems analyst Jorge Mejia and statistician Amanda Mejia applied a semantic text analysis to the abstracts of the papers to determine the values implicit in the language used to define the research. Specifically they looked at the degree to which the research was prosocial, that is, oriented toward helping others by seeking to solve large social problems.
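Again, purely as an illustration (the study used a proper semantic text analysis, not simple word counting), a toy version of scoring abstracts for prosocial language might look like this; the lexicon and the two example abstracts are invented,

```python
# Invented prosocial lexicon and example abstracts; illustrative only.
PROSOCIAL_TERMS = {"help", "community", "society", "health", "benefit", "share"}

def prosocial_score(abstract: str) -> float:
    """Fraction of an abstract's words drawn from the (toy) prosocial lexicon."""
    words = [w.strip(".,;:").lower() for w in abstract.split()]
    return sum(w in PROSOCIAL_TERMS for w in words) / max(len(words), 1)

open_abstract = "We share data openly so the community can benefit from this work"
repro_abstract = "We reproduce three earlier experiments to verify the reported effect sizes"
print(prosocial_score(open_abstract))   # higher: three prosocial words
print(prosocial_score(repro_abstract))  # lower: none
```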

“This is significant,” Murphy explained, “insofar as previous studies have shown that women often gravitate toward science that has more socially oriented goals and aims to improve the health and well-being of people and society. We found that open science has more prosocial language in its abstracts than reproducibility does.”

With respect to gender, the team found that “women publish more often in high-status authorship positions in open science, and that participation in high-status authorship positions has been increasing over time in open science, while in reproducibility women’s participation in high-status authorship positions is decreasing over time,” Murphy said.

The researchers are careful to point out that the link they found between women and open science is so far a correlation, not a causal connection.

“It could be that as more women join these movements, the science becomes more prosocial. But women could also be drawn to this prosocial model because that’s what they value in science, which in turn strengthens the prosocial quality of open science,” Murphy noted. “It’s likely to be an iterative cultural cycle, which starts one way, attracts people who are attracted to that culture, and consequently further builds and supports that culture.”

Diekman, a social psychologist and senior author on the paper, noted these patterns might help open more doors to science. “What we know from previous research is that when science conveys a more prosocial culture, it tends to attract not only more women, but also people of color and prosocially oriented men,” she said.

The distinctions traced in the study are also reflected in the scientific processes employed by the research team itself. As one of the most diverse teams to publish in the pages of PNAS, the research team used open science practices.

“The initial intuition, before the project started, was that investigators have come to this debate from very different perspectives and with different intellectual interests. These interests might attract different categories of researchers,” says Pestilli, an IU neuroscientist. “Some of us are working on improving science by providing new technology and opportunities to reduce human mistakes and promote teamwork. Yet we also like to focus on the greater good science does for society, every day. We are perhaps seeing more of this now in the time of the COVID-19 pandemic.”

With a core of eight lead scientists at IU, the team also included 20 more co-authors, mostly women and people of color who are experts on how to increase the participation of underrepresented groups in science; diversity and inclusion; and the movements to improve science.

Research team leader Mary Murphy noted that in this cultural moment of examining inequality throughout our institutions, looking at who gets to participate in science can yield great benefit.

“Trying to understand inequality in science has the potential to benefit society now more than ever. Understanding how the culture of science can compound problems of inequality or mitigate them could be a real advance in this moment when long-standing inequalities are being recognized–and when there is momentum to act and create a more equitable science.”

I think someone had a little fun writing the news release. First, there’s a possible reference to C. P. Snow’s The Two Cultures and, then, a reference to Charles Dickens’ A Tale of Two Cities (Wikipedia entry here) along with, possibly, an allusion to the French Revolution (liberté, égalité, et fraternité). Going even further afield, is there also an allusion to a science revolution? Certainly the values of liberty and equality would seem to fit in with the findings.

Here’s a link to and a citation for the paper,

Open science, communal culture, and women’s participation in the movement to improve science by Mary C. Murphy, Amanda F. Mejia, Jorge Mejia, Xiaoran Yan, Sapna Cheryan, Nilanjana Dasgupta, Mesmin Destin, Stephanie A. Fryberg, Julie A. Garcia, Elizabeth L. Haines, Judith M. Harackiewicz, Alison Ledgerwood, Corinne A. Moss-Racusin, Lora E. Park, Sylvia P. Perry, Kate A. Ratliff, Aneeta Rattan, Diana T. Sanchez, Krishna Savani, Denise Sekaquaptewa, Jessi L. Smith, Valerie Jones Taylor, Dustin B. Thoman, Daryl A. Wout, Patricia L. Mabry, Susanne Ressl, Amanda B. Diekman, and Franco Pestilli. PNAS DOI: https://doi.org/10.1073/pnas.1921320117 First published September 14, 2020

This paper appears to be open access.

Here’s an image representing the researchers’ findings,

Caption: Figure 1. From “I” science to team science. Moving from an ‘I’-focused, independent, lab-centric approach to science to a more collaborative team science that promotes communal values, sharing, education, and training. Teamwork is a strength for scientific work and discovery; the total is more than the sum of the individual part contributions. Credit: Indiana University

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we prevent “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi. First, here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics, or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a tv series, ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
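For the curious, the mapping Prof. Heckel describes is simple enough to sketch in a few lines of Python. The two-bit-to-nucleotide table and the CCTA example come straight from his explanation; the function names are mine,

```python
# Two-bit-to-nucleotide mapping from the interview: 00->A, 01->C, 10->G, 11->T.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Encode an even-length bit string as a sequence of DNA letters."""
    assert len(bits) % 2 == 0, "pad the bit stream to an even length first"
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """The 'reverse translation': decode DNA letters back to bits."""
    return "".join(BASE_TO_BITS[base] for base in dna)

# The interview's example: the sequence 01 01 11 00 encodes to CCTA.
assert bits_to_dna("01011100") == "CCTA"
assert dna_to_bits("CCTA") == "01011100"
```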

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
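Heckel’s published algorithm is a sophisticated channel code tailored to DNA-specific errors and I won’t attempt to reproduce it here. But as a toy illustration of the ‘add redundancy so the decoder can outvote errors’ principle he describes, here’s a three-fold repetition code with majority-vote decoding,

```python
# A toy repetition code, NOT the Heckel-Grass algorithm: each bit is
# written three times and decoded by majority vote.
from collections import Counter

def encode_repetition(bits: str, copies: int = 3) -> str:
    """Add redundancy by repeating every bit."""
    return "".join(b * copies for b in bits)

def decode_repetition(received: str, copies: int = 3) -> str:
    """Majority vote over each group of repeated bits corrects isolated errors."""
    groups = (received[i:i + copies] for i in range(0, len(received), copies))
    return "".join(Counter(g).most_common(1)[0][0] for g in groups)

encoded = encode_repetition("0111")            # '000111111111'
corrupted = "010111011111"                     # two bits flipped in 'storage'
assert decode_repetition(corrupted) == "0111"  # original bits recovered
```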

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a trillionth of a gram – of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
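Those figures are easy to sanity-check with a little arithmetic; a quick back-of-envelope calculation using the interview’s own numbers,

```python
# Back-of-envelope check of the quoted storage densities (decimal units).
MB = 10**6            # bytes per megabyte
EB = 10**18           # bytes per exabyte
PICOGRAM = 1e-12      # grams

achieved_bytes_per_gram = 100 * MB / PICOGRAM   # the stored episode's density
print(f"Achieved: {achieved_bytes_per_gram / EB:.0f} exabytes per gram")  # ~100
# The quoted theoretical ceiling, ~200 exabytes per gram, is about double that.
```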

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Frugal science, foldable microscopes, and curiosity: a talk on June 3, 2019 at Simon Fraser University (Burnaby, Canada) … it’s in Metro Vancouver

This is the second frugal science item* I’m publishing today (May 29, 2019), which means that I’ve gone from complete ignorance on the topic to collecting news items about it. Manu Prakash, the developer behind a usable paper microscope that can be folded and kept in your pocket, is going to be giving a talk locally according to a May 28, 2019 announcement (received via email) from Simon Fraser University’s (SFU) Faculty of Science,

On June 3rd [2019], at 7:30 pm, Manu Prakash from Stanford University will give the Herzberg Public Lecture in conjunction with this year’s Canadian Association of Physicists (CAP) conference that the department is hosting. Dr. Prakash’s lecture is entitled “Frugal Science in the Age of Curiosity”. Tickets are free and can be obtained through Eventbrite: https://t.co/WNrPh9fop5 . 

This presentation will be held at the Shrum Science Centre Chemistry C9001 Lecture Theatre, Burnaby campus (instead of the Diamond Family Auditorium).

There’s a synopsis of the talk on the Herzberg Public Lecture: Frugal Science in the Age of Curiosity webpage,

Science faces an accessibility challenge. Although information/knowledge is fast becoming available to everyone around the world, the experience of science is significantly limited. One approach to solving this challenge is to democratize access to scientific tools. Manu Prakash believes this can be achieved via “Frugal science”; a philosophy that inspires design, development, and deployment of ultra-affordable yet powerful scientific tools for the masses. Using examples from his own work (Foldscope: one-dollar origami microscope, Paperfuge: a twenty-cent high-speed centrifuge), Dr. Prakash will describe the process of identifying challenges, designing solutions, and deploying these tools globally to enable open ended scientific curiosity/inquiries in communities around the world. By connecting the dots between science education, global health and environmental monitoring, he will explore the role of “simple” tools in advancing access to better human and planetary health in a resource limited world.

If you’re curious there is a Foldscope website where you can find out more and/or get a Foldscope for yourself.

In addition to the talk, there is a day-long workshop for teachers (as part of the 2019 CAP Congress) with Dr. Donna Strickland, the University of Waterloo researcher who won the 2018 Nobel Prize for physics. If you want to learn how to make a Foldscope, there is also a one-hour session for which you can register separately from the day-long event. (I featured Strickland and her win in an October 3, 2018 posting.)

Getting back to the main event, Dr. Prakash’s evening talk: you can register here.

*ETA May 29, 2019 at 1120 hours PDT: My first posting on frugal science is Frugal science: ancient toys for state-of-the-art science. It’s about a 3D printable centrifuge based on a toy known (in English) as a whirligig.