
The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is heavily hyped despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Lab) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1993 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinct terms. We all shift between terms like this in conversation, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle does not cite any track record for the accuracy of his predictions, but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
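To make the “permanent receipt” idea a little more concrete, here’s my own minimal sketch (in Python, with made-up platform, item, and avatar names) of a shared ownership ledger that any participating platform could consult before letting an avatar wear a purchased item. Real NFTs live on a blockchain and involve wallets, smart contracts, and cryptographic signatures; none of that is modelled here,

# A toy 'ownership ledger' illustrating the cross-platform receipt idea.
# This is not how a real blockchain/NFT system works; it only models the
# concept of one shared record that many virtual worlds could consult.

from dataclasses import dataclass

@dataclass(frozen=True)
class Receipt:
    token_id: str   # unique identifier for the virtual good
    item: str       # e.g., "virtual shirt"
    owner: str      # whoever currently holds it

class OwnershipLedger:
    """One shared record of who owns which virtual good."""

    def __init__(self):
        self._records = {}

    def mint(self, token_id, item, owner):
        self._records[token_id] = Receipt(token_id, item, owner)

    def transfer(self, token_id, new_owner):
        old = self._records[token_id]
        self._records[token_id] = Receipt(old.token_id, old.item, new_owner)

    def owner_of(self, token_id):
        return self._records[token_id].owner

# 'Metaverse Platform A' sells the shirt; 'Platforms B to Z' only consult the ledger.
ledger = OwnershipLedger()
ledger.mint("shirt-001", "virtual shirt", owner="my_avatar")

def can_wear(token_id, avatar):
    # Any platform consulting the same ledger reaches the same answer.
    return ledger.owner_of(token_id) == avatar

print(can_wear("shirt-001", "my_avatar"))      # True, on Platform B or Z
print(can_wear("shirt-001", "someone_else"))   # False everywhere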

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.

Facebook’s multiverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned, Facebook is once again having credibility issues, according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement not mentioned in the other two articles about the rebranding (Note: A link has been removed),

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international law firm Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?
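The ‘hungry Metaverse participant’ example above boils down to inferring intent from passively observed behaviour. Here’s my own highly simplified sketch of that logic (in Python, with hypothetical event names and an arbitrary threshold; nothing here comes from Norton Rose Fulbright’s page), just to show how little ‘active’ input such targeting would need,

# Toy sketch of passive behavioural inference, as in the 'hungry
# participant' example: no search query, just observed glances.

# Hypothetical gaze events logged while an avatar walks a virtual street.
gaze_events = [
    {"avatar": "user_42", "object": "bakery_window", "seconds": 4.0},
    {"avatar": "user_42", "object": "shoe_shop_window", "seconds": 1.0},
    {"avatar": "user_42", "object": "cafe_menu", "seconds": 6.5},
    {"avatar": "user_42", "object": "restaurant_window", "seconds": 3.0},
]

FOOD_OBJECTS = {"bakery_window", "cafe_menu", "restaurant_window"}

def infer_interest(events, category_objects, min_seconds=10.0):
    # Guess an interest from total dwell time on objects in one category.
    dwell = sum(e["seconds"] for e in events if e["object"] in category_objects)
    return dwell >= min_seconds

if infer_interest(gaze_events, FOOD_OBJECTS):
    print("Serve a food advert")   # inferred without any active search
else:
    print("No food advert")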

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing: according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest law firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix & Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here.

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicates the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses or the metaverse that will effectively replace the internet. This metaverse can include any or all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Proximal Fields from September 8 – 12, 2021 and a peek into the international art/sci/tech scene

Toronto’s (Canada) Art/Sci Salon (also known as Art Science Salon) sent me an August 26, 2021 announcement (received via email) of an online show with a limited viewing period (BTW, nice play on words with the title echoing the name of the institution mentioned in the first sentence),

PROXIMAL FIELDS

The Fields Institute was closed to the public for a long time. Yet, it has not been empty. Peculiar sounds and intriguing silences, the flows of the few individuals and the janitors occasionally visiting the building made it surprisingly alive. Microorganisms, dust specks and other invisible guests populated the space undisturbed while the humans were away. The building is alive. We created site-specific installations reflecting this condition: Elaine Whittaker and her poet collaborators take us on a journey of the microbes living in our proximal spaces. Joel Ong and his collaborators have recorded space data in the building: the result is an emergent digital organism. Roberta Buiani and Kavi interpret the venue as an organism which can be taken outside on a mobile gallery.

PROXIMAL FIELDS will be visible September 8–12, 2021 at

https://ars.electronica.art/newdigitaldeal/en/proximal-fields/

It is part of Ars Electronica Garden LEONARDO LASER [Anti]disciplinary Topographies

https://ars.electronica.art/newdigitaldeal/en/antidisciplinary-topographies/

See a teaser here:

https://youtu.be/AYxlvLnYSdE

With: Elaine Whittaker, Joel Ong, Nina Czegledy, Roberta Buiani, Sachin Karghie, Ryan Martin, Racelar Ho, Kavi.
Poetry: Maureen Hynes, Sheila Stewart

Video: Natalie Plociennik

This event is one of many such events being held for the Ars Electronica 2021 festival.

For anyone who remembers back to my May 3, 2021 posting (scroll down to the relevant subhead; a number of events were mentioned), I featured a show from the ArtSci Salon community called ‘Proximal Spaces’, a combined poetry reading and bioart experience.

Many of the same artists and poets seem to have continued working together to develop more work based on the ‘proximal’ for a larger international audience.

International and local scene details (e.g., same show? what is Ars Electronica? etc.)

As you may have noticed from the announcement, there are a lot of different institutions involved.

Local: Fields Institute and ArtSci Salon

The Fields Institute is properly known as The Fields Institute for Research in Mathematical Sciences and is located at the University of Toronto. Here’s more from their About Us webpage,

Founded in 1992, the Fields Institute was initially located at the University of Waterloo. Since 1995, it has occupied a purpose-built building on the St. George Campus of the University of Toronto.

The Institute is internationally renowned for strengthening collaboration, innovation, and learning in mathematics and across a broad range of disciplines. …

The Fields Institute is named after the Canadian mathematician John Charles Fields (1863-1932). Fields was a pioneer and visionary who recognized the scientific, educational, and economic value of research in the mathematical sciences. Fields spent many of his early years in Berlin and, to a lesser extent, in Paris and Göttingen, the principal mathematical centres of Europe of that time. These experiences led him, after his return to Canada, to work for the public support of university research, which he did very successfully. He also organized and presided over the 1924 meeting of the International Congress of Mathematicians in Toronto. This quadrennial meeting was, and still is, the major meeting of the mathematics world.

There is no Nobel Prize in mathematics, and Fields felt strongly that there should be a comparable award to recognize the most outstanding current research in mathematics. With this in mind, he established the International Medal for Outstanding Discoveries in Mathematics, which, contrary to his personal directive, is now known as the Fields Medal. Information on Fields Medal winners can be found through the International Mathematical Union, which chooses the quadrennial recipients of the prize.

Fields’ name was given to the Institute in recognition of his seminal contributions to world mathematics and his work on behalf of high level mathematical scholarship in Canada. The Institute aims to carry on the work of Fields and to promote the wider use and understanding of mathematics in Canada.

The relationship between the Fields Institute and the ArtSci Salon is unclear to me. This can be found under Programs and Activities on the Fields Institute website,

2020-2021 ArtSci Salon

Description

ArtSci Salon consists of a series of semi-informal gatherings facilitating discussion and cross-pollination between science, technology, and the arts. ArtSci Salon started in 2010 as a spin-off of Subtle Technologies Festival to satisfy increasing demands by the audience attending the Festival to have a more frequent (monthly or bi-monthly) outlet for debate and information sharing across disciplines. In addition, it responds to the recent expansion in the GTA [Greater Toronto Area] area of a community of scientists and artists increasingly seeking collaborations across disciplines to successfully accomplish their research projects and questions.

For more details, visit our blog.

Sign up to our mailing list here.

For more information please contact:

Stephen Morris: smorris@physics.utoronto.ca

Roberta Buiani: rbuiani@gmail.com


Ars Electronica

It started life as a Festival for Art, Technology and Society in 1979 in Linz, Austria. Here’s a little more from their About webpage,

… Since September 18, 1979, our world has changed radically, and digitization has covered almost all areas of our lives. Ars Electronica’s philosophy has remained the same over the years. Our activities are always guided by the question of what new technologies mean for our lives. Together with artists, scientists, developers, designers, entrepreneurs and activists, we shed light on current developments in our digital society and speculate about their manifestations in the future. We never ask what technology can or will be able to do, but always what it should do for us. And we don’t try to adapt to technology, but we want the development of technology to be oriented towards us. Therefore, our artistic research always focuses on ourselves, our needs, our desires, our feelings.

They have a number of initiatives in addition to the festival. The next festival, A New Digital Deal, runs from September 8 – 12, 2021 (Ars Electronica 2021). Here’s a little more from the festival webpage,

Ars Electronica 2021, the festival for art, technology and society, will take place from September 8 to 12. For the second time since 1979, it will be a hybrid event that includes exhibitions, concerts, talks, conferences, workshops and guided tours in Linz, Austria, and more than 80 other locations around the globe.

Leonardo, The International Society for Arts, Sciences and Technology

Ars Electronica and Leonardo, The International Society for Arts, Sciences and Technology (ISAST), cooperate on projects but they are two different entities. Here’s more from the About LEONARDO webpage,

Fearlessly pioneering since 1968, Leonardo serves as THE community forging a transdisciplinary network to convene, research, collaborate, and disseminate best practices at the nexus of arts, science and technology worldwide. Leonardo serves a network of transdisciplinary scholars, artists, scientists, technologists and thinkers, who experiment with cutting-edge, new approaches, practices, systems and solutions to tackle the most complex challenges facing humanity today.

As a not-for-profit 501(c)3 enterprising think tank, Leonardo offers a global platform for creative exploration and collaboration reaching tens of thousands of people across 135 countries. Our flagship publication, Leonardo, the world’s leading scholarly journal on transdisciplinary art, anchors a robust publishing partnership with MIT Press; our partnership with ASU [Arizona State University] infuses educational innovation with digital art and media for lifelong learning; our creative programs span thought-provoking events, exhibits, residencies and fellowships, scholarship and social enterprise ventures.

I have a description of Leonardo’s LASER (Leonardo Art Science Evening Rendezvous), from my March 22, 2021 posting (the Garden comes up next),

Here’s a description of the LASER talks from the Leonardo/ISAST LASER Talks event page,

“… a program of international gatherings that bring artists, scientists, humanists and technologists together for informal presentations, performances and conversations with the wider public. The mission of LASER is to encourage contribution to the cultural environment of a region by fostering interdisciplinary dialogue and opportunities for community building.”

To be specific, it’s Ars Electronica Garden LEONARDO LASER and this is one of the series being held as part of the festival (A New Digital Deal). Here’s more from the [Anti]disciplinary Topographies ‘garden’ webpage,

Culturing transnational dialogue for creative hybridity

Leonardo LASER Garden gathers our global network of artists, scientists, humanists and technologists together in a series of hybrid formats addressing the world’s most pressing issues. Animated by the theme of a “new digital deal” and grounded in the UN Sustainability Goals, Leonardo LASER Garden cultivates our values of equity and inclusion by elevating underrepresented voices in a wide-ranging exploration of global challenges, digital communities and placemaking, space, networks and systems, the digital divide – and the impact of interdisciplinary art, science and technology discourse and collaboration.

Dovetailing with the launch of LASER Linz, this asynchronous multi-platform garden will highlight the best of the Leonardo Network (spanning 47 cities worldwide) and our transdisciplinary community. In “Extraordinary Times Call for Extraordinary Vision: Humanizing Digital Culture with the New Creativity Agenda & the UNSDGs [United Nations Sustainable Development Goals],” Leonardo/ISAST CEO Diana Ayton-Shenker presents our vision for shaping our global future. This will be followed by a Leonardo Community Lounge open to the general public, with the goal of encouraging contributions to the cultural environments of different regions through transnational exchange and community building.

Getting back to the beginning, you can view Proximal Fields from September 8 – 12, 2021 as part of the Ars Electronica 2021 festival, specifically, the ‘garden’ series.

ETA September 8, 2021: There’s a newly posted (on the Fields Institute webspace) and undated notice/article “ArtSci Salon’s Proximal Fields debuts at the Ars Electronica Festival,” which includes an interview with members of the Proximal Fields team.

Future of Being Human: a call for proposals

The Canadian Institute for Advanced Research (CIFAR) is investigating the ‘Future of Being Human’ and has instituted a global call for proposals, but there is one catch: your team has to include one person (with or without citizenship) who’s living and working in Canada. (Note: I am available.)

Here’s more about the call (from the CIFAR Global Call for Ideas: The Future of Being Human webpage),

New program proposals should explore the long term intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. 

We invite bold proposals from researchers at universities or research institutions that ask new questions about our complex emerging world. We are confronting challenging problems that require a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences [emphasis mine]) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity.

CIFAR is committed to creating a more diverse, equitable, and inclusive environment. We welcome proposals that include individuals from countries and institutions that are not yet represented in our research community.

Here’s a description, albeit a little repetitive, of what CIFAR is asking researchers to do (from the Program Guide [PDF]),

For CIFAR’s next Global Call for Ideas, we are soliciting proposals related to The Future of Being Human, exploring in the long term the intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the natural world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. We invite bold proposals that ask new questions about our complex emerging world, where the issues under study are entangled and dynamic. We are confronting challenging problems that necessitate a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity. [p. 2 print; p. 4 PDF]

There is an upcoming information webinar (from the CIFAR Global Call for Ideas: The Future of Being Human webpage),

Monday, June 28, 2021 – 1:00pm – 1:45pm EDT

Webinar Sign-Up

Also from the CIFAR Global Call for Ideas: The Future of Being Human webpage, here are the various deadlines and additional sources of information,

August 17, 2021

Registration deadline

January 26, 2022

LOI [Letter of Intent] deadline

Spring 2022

LOIs invited to Full Proposal

Fall 2022

Full proposals due

March 2023

New program announcement and celebration

Resources

Program Guide [PDF]

Frequently Asked Questions

Good luck!

The Internet of Bodies and Ghislaine Boddington

I stumbled across this event on my Twitter feed (h/t @katepullinger; Note: Kate Pullinger is a novelist and Professor of Creative Writing and Digital Media, Director of the Centre for Cultural and Creative Industries [CCCI] at Bath Spa University in the UK).

Anyone who visits here with any frequency will have noticed I have a number of articles on technology and the body (you can find them in the ‘human enhancement’ category and/or search for the machine/flesh tag). Boddington’s view is more expansive than the one I’ve taken and I welcome it. First, here’s the event information and, then, a link to her open access paper from February 2021.

From the CCCI’s Annual Public Lecture with Ghislaine Boddington eventbrite page,

This year’s CCCI Public Lecture will be given by Ghislaine Boddington. Ghislaine is Creative Director of body>data>space and Reader in Digital Immersion at University of Greenwich. Ghislaine has worked at the intersection of the body, the digital, and spatial research for many years. This will be her first in-person appearance since the start of the pandemic, and she will share with us the many insights she has gathered during this extraordinary pivot to online interfaces much of the world has been forced to undertake.

With a background in performing arts and body technologies, Ghislaine is recognised as a pioneer in the exploration of digital intimacy, telepresence and virtual physical blending since the early 90s. As a curator, keynote speaker and radio presenter she has shared her outlook on the future human into the cultural, academic, creative industries and corporate sectors worldwide, examining topical issues with regards to personal data usage, connected bodies and collective embodiment. Her research led practice, examining the evolution of the body as the interface, is presented under the heading ‘The Internet of Bodies’. Recent direction and curation outputs include “me and my shadow” (Royal National Theatre 2012), FutureFest 2015-18 and Collective Reality (Nesta’s FutureFest / SAT Montreal 2016/17). In 2017 Ghislaine was awarded the international IX Immersion Experience Visionary Pioneer Award. She recently co-founded University of Greenwich Strategic Research Group ‘CLEI – Co-creating Liveness in Embodied Immersion’ and is an Associate Editor for AI & Society (Springer). Ghislaine is a long term advocate for diversity and inclusion, working as a Trustee for Stemette Futures and Spokesperson for Deutsche Bank ‘We in Social Tech’ initiative. She is a team member and presenter with BBC World Service flagship radio show/podcast Digital Planet.

Date and time

Thu, 24 June 2021
08:00 – 09:00 [am] PDT

@GBoddington

@bodydataspace

@ConnectedBodies

Boddington’s paper is what ignited my interest; here’s a link to and a citation for it,

The Internet of Bodies—alive, connected and collective: the virtual physical future of our bodies and our senses by Ghislaine Boddington. AI Soc. 2021 Feb 8: 1–17. DOI: 10.1007/s00146-020-01137-1 PMCID: PMC7868903 PMID: 33584018

Some excerpts from this open access paper,

The Weave—virtual physical presence design—blending processes for the future

Coming from a performing arts background, dance led, in 1989, I became obsessed with the idea that there must be a way for us to be able to create and collaborate in our groups, across time and space, whenever we were not able to be together physically. The focus of my work, as a director, curator and presenter across the last 30 years, has been on our physical bodies and our data selves and how they have, through the extended use of our bodies into digitally created environments, started to merge and converge, shifting our relationship and understanding of our identity and our selfhood.

One of the key methodologies that I have been using since the mid-1990s is inter-authored group creation, a process we called The Weave (Boddington 2013a, b). It uses the simple and universal metaphor of braiding, plaiting or weaving three strands of action and intent, these three strands being:

1. The live body—whether that of the performer, the participant, or the public;

2. The technologies of today—our tools of virtually physical reflection;

3. The content—the theme in exploration.

As with a braid or a plait, the three strands must be weaved simultaneously. What is key to this weave is that in any co-creation between the body and technology, the technology cannot work without the body; hence, there will always be virtual/physical blending. [emphasis mine]

Cyborgs

Cyborg culture is also moving forward at a pace with most countries having four or five cyborgs who have reached out into media status. Manel Munoz is the weather man as such, fascinated and affected by cyclones and anticyclones, his back of the head implant sent vibrations to different sides of his head linked to weather changes around him.

Neil Harbisson from Northern Ireland calls himself a trans-species rather than a cyborg, because his implant is permanently fused into the crown of his head. He is the first trans-species/cyborg to have his passport photo accepted as he exists with his fixed antenna. Neil has, from birth, an eye condition called greyscale, which means he only sees the world in grey and white. He uses his antennae camera to detect colour, and it sends a vibration with a different frequency for each colour viewed. He is learning what colours are within his viewpoint at any given time through the vibrations in his head, a synaesthetic method of transference of one sense for another. Moon Ribas, a Spanish choreographer and a dancer, had two implants placed into the top of her feet, set to sense seismic activity as it occurs worldwide. When a small earthquake occurs somewhere, she received small vibrations; a bigger eruption gives her body a more intense vibration. She dances as she receives and reacts to these transferred data. She feels a need to be closer to our earth, a part of nature (Harbisson et al. 2018).

Medical, non medical and sub-dermal implants

Medical implants, embedded into the body or subdermally (nearer the surface), have rapidly advanced in the last 30 years with extensive use of cardiac pacemakers, hip implants, implantable drug pumps and cochlear implants helping partially deaf people to hear.

Deep body and subdermal implants can be personalised to your own needs. They can be set to transmit chosen aspects of your body data outwards, but they also can receive and control data in return. There are about 200 medical implants in use today. Some are complex, like deep brain stimulation for motor neurone disease, and others we are more familiar with, for example, pacemakers. Most medical implants are not digitally linked to the outside world at present, but this is in rapid evolution.

Kevin Warwick, a pioneer in this area, has interconnected himself and his partner with implants for joint use of their personal and home computer systems through their BrainGate (Warwick 2008) implant, an interface between the nervous system and the technology. They are connected bodies. He works onwards with his experiments to feel the shape of distant objects and heat through fingertip implants.

‘Smart’ implants into the brain for deep brain stimulation are in use and in rapid advancement. The ethics of these developments is under constant debate in 2020 and will be onwards, as is proved by the mass coverage of the Neuralink, Elon Musk’s innovation which connects to the brain via wires, with the initial aim to cure human diseases such as dementia, depression and insomnia and onwards plans for potential treatment of paraplegia (Musk 2016).

Given how many times I’ve featured art/sci (also known as art/science and/or sciart) and cyborgs and medical implants here, my excitement was a given.
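
As an aside, the colour-to-vibration transference Boddington describes for Harbisson’s antenna is a textbook example of sensory substitution: a camera reads a colour and the system maps its hue onto a vibration frequency the wearer learns to interpret. Here’s a minimal, purely illustrative Python sketch of that kind of mapping; the frequency range and the linear hue-to-hertz rule are my own assumptions, not Harbisson’s actual calibration.

# Toy sensory-substitution mapping: colour (RGB) -> vibration frequency.
# Purely illustrative; the numbers are hypothetical, not Harbisson's calibration.

import colorsys

def hue_to_vibration_hz(r, g, b, low_hz=40.0, high_hz=120.0):
    """Convert an RGB colour (0-255 per channel) to a hue angle, then map
    that hue linearly onto a range of vibration frequencies."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # hue in [0, 1)
    return low_hz + hue * (high_hz - low_hz)

if __name__ == "__main__":
    for name, rgb in [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]:
        print(f"{name}: {hue_to_vibration_hz(*rgb):.1f} Hz")

The point of the sketch is simply that the mapping itself is trivial; what’s remarkable is the brain’s ability to learn to read it as colour.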

For anyone who wants to pursue Boddington’s work further, her eponymous website is here, the body>data>space is here, and her University of Greenwich profile page is here.

For anyone interested in the Centre for Creative and Cultural Industries (CCCI), their site is here.

Finally, here’s one of my earliest pieces about cyborgs titled ‘My mother is a cyborg‘ from April 20, 2012 and my September 17, 2020 posting titled, ‘Turning brain-controlled wireless electronic prostheses into reality plus some ethical points‘. If you scroll down to the ‘Brain-computer interfaces, symbiosis, and ethical issues’ subhead, you’ll find some article excerpts about a fascinating qualitative study on implants and ethics.

DEBBY FRIDAY’s LINK SICK, an audio play+, opens March 29, 2021 (online)

[downloaded from https://debbyfriday.com/link-sick]

This is an artistic work, part of the DEBBY FRIDAY enterprise, and an MFA (Master of Fine Arts) project. Here’s the description from the Simon Fraser University (SFU) Link Sick event page,

LINK SICK

DEBBY FRIDAY’S MFA Project
Launching Monday, March 29, 2021 | debbyfriday.com/link-sick

Set against the backdrop of an ambiguous dystopia and eternal rave, LINK SICK is a tale about the threads that bind us together.  

LINK SICK is DEBBY FRIDAY’S graduate thesis project – an audio-play written, directed and scored by the artist herself. The project is a science-fiction exploration of the connective tissue of human experience as well as an experiment in sound art, blurring the lines between theatre, radio, music, fiction, essay, and internet art. Over 42 minutes, listeners are invited to gather round, close their eyes, and open their ears, submerging straight into a strange future peppered with blink-streams, automated protests, disembodied DJs, dancefloor orgies, and only the trendiest S/S 221 G-E two-piece club skins.

Starring 

DEBBY FRIDAY as Izzi/Narrator
Chino Amobi as Philo
Sam Rolfes as Dj GODLESS
Hanna Sam as ABC Inc. Announcer
Storm Greenwood as Diana Deviance
Alex Zhang Hungtai as Weaver
Allie Stephen as Numee
Soukayna as Katz
AI Voice Generated Protesters via Replica Studios

Presented in partial fulfillment of the requirements of the Degree of Master of Fine Arts in the School for the Contemporary Arts at Simon Fraser University.

No time is listed but I’m assuming FRIDAY is operating on PDT, so you might want to take that into account when checking.

FRIDAY seems to favour full caps for her name and everywhere on her eponymous website (from her ABOUT page),

DEBBY FRIDAY is an experimentalist.

Born in Nigeria, raised in Montreal, and now based in Vancouver, DEBBY FRIDAY’s work spans the spectrum of the audio-visual, resisting categorizations of genre and artistic discipline. She is at once sound theorist and musician, performer and poet, filmmaker and PUNK GOD. …

Should you wish to support the artist financially, she offers merchandise.

Getting back to the play, I look forward to the auditory experience. Given how much we are expected to watch and the dominance of images, creating a piece that requires listening is an interesting choice.

Inside Dogma Lab; an ArtSci Salon event on March 25, 2021

This event is taking place at 7 am PDT. Should you still be interested, here are more details from a March 17, 2021 ArtSci Salon announcement (received via email; you can also find the information on the artscisalon.com/dogmalab/ webpage), which provides descriptions of the talk and the artists after the registration and viewing information,

Benjamin Bacon & Vivian Xu – Inside Dogma Lab – exploring new media ecologies


Thursday, March 25 [2021]

10 am EDT, 4 pm GST, 10 pm CST [ 7 am PDT]

This session will stream on Zoom and YouTube

Register in advance for this meeting:

https://utoronto.zoom.us/meeting/register/tZMlfuyrpz4jG9aTl-Y8sAwn6Q75CPEpWRsM

After registering, you will receive a confirmation email containing
information about joining the meeting.

See more here:
https://artscisalon.com/dogmalab/

Or on Facebook:

https://facebook.com/artscisalon

Description

This ArtSci Salon/LASER morning event is inspired by the NewONE, Learning without Borders, a program at the University of Toronto dedicated to interdisciplinary pedagogies and ecological learning experiences. Art, technology and science are woven together and inform each other. The arts here are not simply used to illustrate or to narrate, but to transmit and make sense of complexity without falling into given disciplinary and instrumental containers. The artistic medium becomes simultaneously a catalyst for interrogating nature and a new research tool able to display and communicate its complexity.

With this event, we welcome interdisciplinary artists Benjamin Bacon and
Vivian Xu.

Their transdisciplinary design lab, the Dogma Lab (http://dogma.org/), not only combines a diverse range of mediums (including software, hardware, networked systems, online platforms, raw data, biomaterials and living organisms), but also considers “the entanglement of technological systems with other realities, including surveillance, sensory, bodily, environmental, and living systems. They are interested in complex hybrid networks that bridge the digital with the physical and biological realms, speculating on possible synthesized futures”.

Their research outcomes, both individually and collectively, have taken the form of interfaces, wearables, toolkits, machines, musical instruments, compositions and performances, public installations, architectural spectacles and educational programs.

Situated in China, they have a vested interest in understanding and participating in local design, technology and societal discourse, as well as how China as a local actor affects the dynamic of the larger global system.

Bios

Benjamin Bacon is an interdisciplinary artist, designer and musician who works at the intersection of computational design, networked systems, data, sound, installation and mechanical sculpture. He is currently Associate Professor of Media and Art and Director of Signature Work at Duke Kunshan University. He is also a lifetime fellow at V2_ Lab for the Unstable Media in Rotterdam, Netherlands.

He has exhibited or performed his work in the USA, Europe, Iran, and China, at venues such as the National Art Museum of China (Beijing), Gallery Ho (NYC), Wave Gotik Treffen (Germany), Chelsea Museum (NYC), Millennium Museum (Beijing), Plug-In Gallery (Switzerland), Beijing Design Week, Shenzhen Bay Science Technology and Arts Festival, and the Shanghai Symphony Hall. Most recently his mechanical life and AI sculpture PROBE – AVERSO SPECILLO DI DUCENDUM was collected by the UNArt Center in Shanghai, China.

https://www.benjaminbacon.studio/ [3]

Vivian Xu is a Beijing-born media artist, designer, researcher and educator. Her work explores the boundaries between bio and electronic media in creating new forms of machine logic, speculative life and sensory systems, often taking the form of objects, machines, installations and wearables. Her work has been presented at various institutions in China, the US, Europe and Australia.

She is an Assistant Professor of Media and Arts at Duke Kunshan University. She has lectured and held research positions at various institutions including Parsons New School for Design, New York University Shanghai, and the Chinese University of Hong Kong (Shenzhen).

https://www.vivianxu.studio/

This event is hosted by ArtSci Salon @ The Fields Institute for Research in Mathematical Sciences and the NewOne @ UofT, and is part of Leonardo/ISAST LASER TALKS. LASER is a program of international gatherings that bring artists, scientists, humanists and technologists together for informal presentations, performances and conversations with the wider public. The mission of the LASERs is to encourage contribution to the cultural environment of a region by fostering interdisciplinary dialogue and opportunities for community building in over 40 cities around the world. To learn more about our LASER Hosts and to visit a LASER near you, please visit our website: leonardo.info/laser-talks [5].
@lasertalks_

Interesting timing: two Michaels and Meng Wanzhou

Given the tensions between Canada and China these days, this session with China-based artists intrigues for more than the usual reasons.

For anyone unfamiliar with the situation, here’s a quick recap: Meng Wanzhou, deputy board chair and chief financial officer (CFO) of telecom giant Huawei, which was founded by her father Ren Zhengfei, has been detained, at a US government request and in accordance with a treaty, since 2018 in one of her two multimillion-dollar mansions in Vancouver, Canada. She wears an electronic bracelet for surveillance purposes, must be escorted on her shopping trips and other excursions, and must abide by an 11 pm – 7 am curfew. She is currently fighting extradition to the US with an extensive team of Canadian lawyers.

In what has been widely perceived as retaliation, shortly after Meng Wanzhou’s arrest China arrested two Canadians, Michael Kovrig and Michael Spavor, and imprisoned them, allowing only severely limited contact with Canadian consular officials. As I write this on March 22, 2021, brief trials have been held (Friday, March 19, 2021 and Monday, March 22, 2021) for both Michaels, with no outside observers allowed. It’s unclear which or how many lawyers are arguing in defence of either Michael. Sentences will be handed down at some time in the future.

Tensions are very high indeed.

Moving on to links

You can find the Dogma Lab here. As for Leonardo/ISAST, there is an interesting history,

The journal Leonardo was founded in 1968 in Paris by kinetic artist and astronautical pioneer Frank Malina. Malina saw the need for a journal that would serve as an international channel of communication among artists, with emphasis on the writings of artists who use science and developing technologies in their work. After the death of Frank Malina in 1981 and under the leadership of his son, Roger F. Malina, Leonardo moved to San Francisco, California, as the flagship journal of the newly founded nonprofit organization Leonardo/The International Society for the Arts, Sciences and Technology (Leonardo/ISAST). Leonardo/ISAST has grown along with its community and today is the leading organization for artists, scientists and others interested in the application of contemporary science and technology to the arts and music.

Frank Malina, founder of Leonardo, was an American scientist. After receiving his PhD from the California Institute of Technology in 1936, Malina directed the WAC Corporal program that put the first rocket beyond the Earth’s atmosphere. He co-founded and was the second director of the Jet Propulsion Laboratory (JPL), co-founded the Aerojet General Corporation and was an active participant in rocket-science development in the period leading up to and during World War II.

Invited to join the United Nations Education, Science and Culture Organization (UNESCO) in 1947 by Julian Huxley, Malina moved to Paris as the director of the organization’s science programs. The separation between science and the humanities was the subject of intense debate during the post-war period, particularly after the publication of C.P. Snow’s Two Cultures in 1959. The concept that there was and should be a natural relationship between science and art fascinated Malina, eventually influencing him to synthesize his scientific experience with his long-standing artistic sensibilities. As an artist, Malina moved from traditional media to mesh, string and canvas constructions and finally to experiments with light, which led to his development of systems for kinetic painting.

Here’s a description of the LASER talks from the Leonardo/ISAST LASER Talks event page,

… a program of international gatherings that bring artists, scientists, humanists and technologists together for informal presentations, performances and conversations with the wider public. The mission of LASER is to encourage contribution to the cultural environment of a region by fostering interdisciplinary dialogue and opportunities for community building.

There are two talks scheduled for tomorrow, Tuesday, March 23, 2021, and four talks for Thursday, March 25, 2021, with more scheduled for April on the Leonardo/ISAST LASER Talks event page.

You can find out more about New College at the University of Toronto here, where the New One: Learning without Borders programme is offered. BTW, New College was founded in 1962. You can get more information on their Why New College page.

COVID-19 infection as a dance of molecules

What a great bit of work, publicity-wise, from either or both the Aga Khan Museum in Toronto (Canada) and artist/scientist Radha Chaddah. IAM (ee-yam): Dance of the Molecules, a virtual performance installation featuring COVID-19 and molecular dance, has been profiled in the Toronto Star, on the Canadian Broadcasting Corporation (CBC) website, and in the Globe and Mail within the last couple of weeks. From a Canadian perspective, that’s major coverage and much of it national.

Bruce DeMara’s March 11, 2021 article for the Toronto Star introduces artist/scientist Radha Chaddah, her COVID-19 dance of molecules, and her team (Note: A link has been removed),

Visual artist Radha Chaddah has always had an abiding interest in science. She has a degree in biology and has done graduate studies in stem cell research.

[…] four-act dance performance; the first part “IAM: Dance of the Molecules” premiered as a digital exhibition on the Aga Khan Museum’s website March 5 [2021] and runs for eight weeks. Subsequent acts — human, planetary and universal, all using the COVID virus as an entry point — will be unveiled over the coming months until the final instalment in December 2022.

Among Chaddah’s team were Allie Blumas and the Open Fortress dance collective — who perform as microscopic components of the virus’s proliferation, including “spike” proteins, A2 receptors and ribosomes — costumiers Call and Response (who designed for the late Prince), director of photography Henry Sansom and composer Dan Bédard (who wrote the film’s music after observing the dance rehearsals remotely).

A March 5, 2021 article by Leah Collins for CBC online offers more details (Note: Links have been removed),

This month, the Aga Khan Museum in Toronto is debuting new work from local artist Radha Chaddah. Called IAM, this digital exhibition is actually the first act in a series of four short films that she aims to produce between now and the end of 2022. It’s a “COVID story,” says Chaddah, but one that offers a perspective beyond the anniversary of its impact on life and culture and toilet-paper consumption. “I wanted to present a piece that makes people think about the coronavirus in a different way,” she explains, “one that pulls them out of the realm of fear and puts our imaginations into the realm of curiosity.”

It’s scientific curiosity that Chaddah’s talking about, and her own extra-curricular inquiries first sparked the series. For several years, Chaddah has produced work that splices art and science, a practice she began while doing grad studies in molecular neurobiology. “If I had to describe it simply, I would say that I make art about invisible realities, often using the tools of research science,” she says, and in January of last year, she was gripped by news of the novel coronavirus’ discovery. 

“I started researching: reading research papers, looking into how it was that [the virus] actually affected the human body,” she says. “How does it get into the cells? What’s its replicative life cycle?” Chaddah wanted a closer look at the structure of the various molecules associated with the progression of COVID-19 in the body, and there is, it turns out, a trove of free material online. Using animated 3-D renderings (sourced from this digital database), Chaddah began reviewing the files: blowing them up with a video projector, and using the trees in her own backyard as “a kind of green, living stage.”

Part one of IAM (the film appearing on the Aga Khan’s website) is called “Dance of the Molecules.” Recorded on Chaddah’s property in September, it features two dancers: Allie Blumas (who choreographed the piece) and Lee Gelbloom. Their bodies, along with the leafy setting, serve as a screen for Chaddah’s projections: a swirl of firecracker colour and pattern, built from found digital models. Quite literally, the viewer is looking at an illustration of how the coronavirus infects the human body and then replicates. (The very first images, for example, are close-ups of the virus’ spiky surface, she explains.) And in tandem with this molecular drama, the dancers interpret the process. 

There is a brief preview,

To watch part 1 of IAM: Dance of the Molecules, go here to the Aga Khan Museum.

Enjoy!

Being a bit curious, I looked up Radha Chaddah’s website and found this on her Bio webpage (click on the About tab for the dropdown menu from the Home page),

Radha Chaddah is a Toronto-based visual artist and scientist. Born in Owen Sound, Ontario, she studied Film and Art History at Queen’s University (BAH) and Human Biology at the University of Toronto, where she received a Master of Science in Cell and Molecular Neurobiology.

Chaddah makes art about invisible realities like the cellular world, electromagnetism and wave form energy, using light as her primary medium.  Her work examines the interconnected themes of knowledge, illusion, desire and the unseen world. In her studio she designs projected light installations for public exhibition. In the laboratory, she uses the tools of research science to grow and photograph cells using embedded fluorescent light-emitting molecules. Her cell photographs and light installations have been exhibited across Canada and her photographs have appeared in numerous publications.  She has lectured on basic cell and stem cell biology for artists, art students and the public at OCADU [Ontario College of Art & Design University], the Ontario Science Centre, the University of Toronto and the Textile Museum of Canada.

I also found Call and Response here, the Open Fortress dance collective on the Centre de Création O Vertigo website, Henry Sansom here, and Dan Bedard here. Both Bedard and Sansom can be found on the Internet Movie Database (IMDb.com), as well.

FACTT (Festival of Art and Science) 2021: Improbable Times on Thursday, Jan.28.21 at 3:30 pm EST

Courtesy: Arte Institute

Plans for last year’s FACTT (Festival of Art and Science) 2020 had to be revised at the last minute due to COVID-19. This year, organizers were prepared, so no in-person sessions have to be cancelled or turned into virtual events. Here’s more from the Jan. 25, 2021 announcement I received (via email) from one of the festival partners, the ArtSci Salon at the University of Toronto,

Join us! Opening of FACTT 20-21 Improbable Times! 

Thursday, January 28, 2021 at 3:30 PM EST – 5:30 PM EST
Public  · Anyone on or off Facebook – link will be disseminated closer to the event.

The Arte Institute and the RHI Initiative, in partnership with Cultivamos Cultura, have the pleasure to present the FACTT 2021 – Festival Art & Science. The festival opens on January 28, at 8.30 PM (GMT), and will be exhibited online on RHI Stage.

This year we are reshaping FACTT! Come join us for the kick-off of this amazing project!

A project spearheaded and promoted by the Arte Institute, we are in production and conception partnership with Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM, Arte+Ciência and Bioscenica (Mexico), and Central Academy of Fine Arts (China).

Together we will work and bring into being our ideas and actions for this during the year of 2021!

FACTT 20/21 – Improbable Times presents a series of exceptional artworks jointly curated by Cultivamos Cultura and our partners. The challenge of translating from the physical space that artworks typically occupy into an exhibition that lives as a hybrid experience involves rethinking the materiality of the work itself. It also questions whether we can live and interact with each other remotely and in person, producing creative, effective collaborative outcomes to immerse ourselves in. Improbable Times brings together a collection of works that reflect the times we live in, the constraints we are faced with, the drive to rethink what tomorrow may bring us, navigate it and build a better future, beyond borders.

Watch online: RHI Stage platform – http://bit.ly/3bWCT64 OR on the RHI Think app OR at Arte Institute and RHI Think facebook pages. https://vimeo.com/arteinstitute and youtube @rhi_think

January 28, 2021 | 8:30 PM (GMT)
Program:
– Introduction
– Performance Toronto: void * ambience : Latency, with Joel Ong, Michael Palumbo and Kavi
– Performance Mexico “El Tercero Cuerpo Sonoro” (Third Sonorous Body), by Arte+Ciência.
– Q&A

The performance series void * ambience experiments with sound and video
content that is developed through a focus on the topographies and networks through which these flow. Initiated during the time of COVID and social distancing, this project explores processes of information sharing, real-time performance and network communication protocols that contribute to the sustenance of our digital communities, shared experiences and telematic intimacies.

“El Tercero Cuerpo Sonoro” project is a digital drift that explores different relationships with the environment, nature, humans and non-humans from the formulation of an intersubjective body. Its main search is to generate resonances with and among the others.

In these complicated times in which it seems that our existence unfolds in front of the screen, confined to the space of the black mirror, it becomes urgent to challenge the limits and scopes of digital life. We need to rethink the way in which we inhabit the others as well as our own subjectivity.

Either the RHI FACTT 2021 event page or the Arte Institute FACTT 2021 event page offers a more detailed and somewhat more accessible description,

Program:
– Introduction
– Performance Toronto: Proximal Spaces
Artistic Directors: Joel Ong, Elaine Whittaker
Graphic Designers: Natalie Plociennik, Bhavesh Kakwani
AR [augmented reality] development : Sachin Khargie, Ryan Martin
Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
– Performance Mexico Tercero Cuerpo Sonoro (Third Sonorous Body) by Arte+Ciência

FACTT team: Marta de Menezes, Suzanne Anker, Maria Antonia Gonzalez Valerio, Roberta Buiani, Jo Wei, Dalila Honorato, Joel Ong, Lena Lee and Minerva Ortiz.

For FACTT20/21 we propose to put together an exhibition where the virtual and the physical share space, a space that is hybrid from its conception, a space that desires to break the limits of access to culture, to collaboration, to the experience of art. A place where we can think deeply and creatively together about the adaptive moves we had and have to develop to the rapid and sudden changes our lives and environment are going through.

Enjoy!

Artificial Intelligence (AI), musical creativity conference, art creation, ISEA 2020 (Why Sentience?) recap, and more

I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).

29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020), being held on Jan. 7 – 15, 2021

This first excerpt features a conference that’s currently taking place,

Musical Metacreation Tutorial at IJCAI – PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence or IJCAI-PRICAI2020 is being held in 2021!]

As part of the International Joint Conference on Artificial Intelligence (IJCAI – PRICAI 2020, January 7-15), Philippe Pasquier will lead a tutorial on Musical Metacreation. This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges.

The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japanese Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:20 pm PST]), and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.

Register for IJCAI – PRICAI 2020

The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.

Two calls for papers

These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,

2nd Conference on AI Music Creativity (MuMe + CSMC)

The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.

The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.

AIMC 2021 Info & CFP

AIART  2021

The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021, with the aim of bringing forward cutting-edge technologies and the most recent advances in AI art in terms of enabling creation, analysis and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.

AIART is inviting high-quality papers presenting or addressing issues related to AI art, in a wide range of topics. The submission due date is January 31, 2021, and you can learn about the wide range of topics accepted below:

AIART 2021 Info & CFP

Toying with music

SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,

MMM : Multi-Track Music Machine

One of the latest projects at the Metacreation Lab is MMM: a generative music system based on the Transformer architecture, capable of producing multi-track music, developed by Jeff Enns and Philippe Pasquier.

Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.

To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.

Explore MMM: Multi-Track Music Machine
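
For readers wondering what “auto-regressive, track-conditioned” generation means in practice, here is a minimal sketch of the idea (my own toy code, not the Metacreation Lab’s): a stand-in “model” predicts one note at a time, and each prediction is conditioned on the user’s existing track plus everything generated so far. MMM itself uses a trained Transformer and a much richer MIDI token vocabulary; the sketch only illustrates the conditioning loop.

# Toy autoregressive, track-conditioned generation loop.
# NOT the MMM code; the token scheme and "model" are invented for illustration.

import random

def toy_next_token(context):
    """Stand-in for a trained Transformer: picks the next MIDI pitch by
    taking a small melodic step from the last pitch in the context."""
    last_pitch = context[-1] if context else 60           # default to middle C
    step = random.choice([-4, -2, -1, 0, 1, 2, 4])        # small up/down intervals
    return max(36, min(84, last_pitch + step))            # clamp to a playable range

def generate_track(conditioning_track, length=16):
    """Autoregressive loop: each new token is predicted from the user's
    track plus what has already been generated (the 'conditioning' idea)."""
    generated = []
    for _ in range(length):
        context = conditioning_track + generated
        generated.append(toy_next_token(context))
    return generated

if __name__ == "__main__":
    user_track = [60, 62, 64, 65, 67, 65, 64, 62]         # a simple MIDI melody
    print("conditioning track:", user_track)
    print("generated track:   ", generate_track(user_track))

In the real system the prediction step is a neural network trained on large MIDI corpora, but the outer loop looks much the same: feed in what exists, sample what comes next, repeat.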

Why Sentience?

Finally, for anyone who was wondering what happened at the 2020 International Symposium of Electronic Arts (ISEA 2020) held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,

ISEA2020 Recap // Why Sentience? 

As we look back at one of the most unprecedented years, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.

ISEA2020 // Why Sentience? Recap Video

The Metacreation Lab’s researchers explored some of these guiding questions at ISEA2020 with two papers presented at the symposium: Chatterbox: an interactive system of gibberish agents and Liminal Scape, An Interactive Visual Installation with Expressive AI. These papers, and the full proceedings from ISEA2020 can now be accessed below. 

ISEA2020 Proceedings

The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.

The proceedings took my system a while to download (there are approximately 700 pp.). By the way, here’s another link to the proceedings or rather to the archives for the 2020 and previous years’ ISEA proceedings.