
Venice Biennale 2024 (April 20 – November 24, 2024)

Every once in a while I get an email from a lawyer (Gale P. Elston) in New York City who specializes in the art and business communities. How I got on her list is a mystery to me but her missives are always interesting. The latest one was a little difficult to understand until I looked at the Venice Biennale website and saw the theme for this year's exhibition,

Courtesy: Venice Biennale [downloaded from https://www.labiennale.org/en/news/biennale-arte-2024-stranieri-ovunque-foreigners-everywhere]

Biennale Arte 2024: Stranieri Ovunque – Foreigners Everywhere

The 60th International Art Exhibition, curated by Adriano Pedrosa, will be open from Saturday 20 April to Sunday 24 November at the Giardini and Arsenale venues.

The 60th International Art Exhibition, titled Stranieri Ovunque – Foreigners Everywhere, will open to the public from Saturday April 20 to Sunday November 24, 2024, at the Giardini and the Arsenale; it will be curated by Adriano Pedrosa and organised by La Biennale di Venezia. The pre-opening will take place on April 17, 18 and 19; the awards ceremony and inauguration will be held on 20 April 2024.

In 2021, La Biennale di Venezia launched a plan to reconsider all of its activities in light of recognized and consolidated principles of environmental sustainability. For the year 2024, the goal is to extend the achievement of "carbon neutrality" certification, which was obtained in 2023 for La Biennale's scheduled activities: the 80th Venice International Film Festival, the Theatre, Music and Dance Festivals and, in particular, the 18th International Architecture Exhibition, which was the first major Exhibition in this discipline to test in the field a tangible process for achieving carbon neutrality – while furthermore itself reflecting upon the themes of decolonisation and decarbonisation.

The Exhibition will take place in the Central Pavilion (Giardini) and in the Arsenale, and it will present two sections: the Nucleo Contemporaneo and the Nucleo Storico.

As a guiding principle, the Biennale Arte 2024 has favored artists who have never participated in the International Exhibition—though a number of them may have been featured in a National Pavilion, a Collateral Event, or in a past edition of the International Exhibition. Special attention is being given to outdoor projects, both in the Arsenale and in the Giardini, where a performance program is being planned with events during the pre-opening and closing weekend of the 60th Exhibition.

Stranieri Ovunque – Foreigners Everywhere, the title of the 60th International Art Exhibition of La Biennale di Venezia, is drawn from a series of works started in 2004 by the Paris-born and Palermo-based Claire Fontaine collective. The works consist of neon sculptures in different colours that render in a growing number of languages the words “Foreigners Everywhere”. The phrase comes, in turn, from the name of a Turin collective who fought racism and xenophobia in Italy in the early 2000s.

«The expression Stranieri Ovunque – explains Adriano Pedrosa – has several meanings. First of all, that wherever you go and wherever you are you will always encounter foreigners— they/we are everywhere. Secondly, that no matter where you find yourself, you are always truly, and deep down inside, a foreigner.»

«The Italian straniero, the Portuguese estrangeiro, the French étranger, and the Spanish extranjero are all etymologically connected to the strano, the estranho, the étrange, the extraño, respectively, which is precisely the stranger. Sigmund Freud’s Das Unheimliche comes to mind—The Uncanny in English, which in Portuguese has indeed been translated as “o estranho” – the strange that is also familiar, within, deep down inside. According to the American Heritage and the Oxford Dictionaries, the first meaning of the word “queer” is precisely “strange”, and thus the Exhibition unfolds and focuses on the production of other related subjects: the queer artist, who has moved within different sexualities and genders, often being persecuted or outlawed; the outsider artist, who is located at the margins of the art world, much like the self-taught artist, the folk artist and the artista popular; the indigenous artist, frequently treated as a foreigner in his or her own land. The productions of these four subjects are the interest of this Biennale, constituting the Nucleo Contemporaneo.»

«Indigenous artists have an emblematic presence and their work greets the public in the Central Pavilion, where the Mahku collective from Brazil will paint a monumental mural on the building’s façade, and in the Corderie, where the Mataaho collective from Aotearoa/New Zealand will present a large-scale installation in the first room. Queer artists appear throughout the exhibition, and are also the subject of a large section in the Corderie, and one devoted to queer abstraction in the Central Pavilion.»

«The Nucleo Contemporaneo will feature a special section in the Corderie devoted to the Disobedience Archive, a project by Marco Scotini, which since 2005 has been developing a video archive focusing on the relationships between artistic practices and activism. In the Exhibition, the presentation of the Disobedience Archive is designed by Juliana Ziebell, who also worked on the exhibition architecture of the entire International Exhibition. This section is divided into two main parts especially conceived for our framework: Diaspora Activism and Gender Disobedience. The Disobedience Archive will include works by 39 artists and collectives made between 1975 and 2023.»

«The Nucleo Storico will gather works from 20th century Latin America, Africa, the Middle East and Asia. Much has been written about global modernisms and modernisms in the Global South, and a number of rooms will feature works from these territories, much like an essay, a draft, a speculative curatorial exercise that seeks to question the boundaries and definitions of modernism. We are all too familiar with the histories of modernism in Euroamerica, yet the modernisms in the Global South remain largely unknown. […] European modernism itself travelled far beyond Europe throughout the 20th century, often intertwined with colonialism, and many artists in the Global South traveled to Europe to be exposed to it […].»

In the Central Pavilion three rooms are planned for the Nucleo Storico: one room is titled Portraits, one Abstractions, and the third is devoted to the worldwide Italian artistic diaspora in the 20th century.

«The double-room named Portraits includes works from 112 artists, mostly paintings but also works on paper and sculpture, spanning the years between 1905 and 1990. […] The theme of the human figure has been explored in countless different ways by artists in the Global South, reflecting on the crisis of representation around that very figure which marked much of 20th century art. In the Global South, many artists were in touch with European modernism, through travels, studies or books, yet they bring their own highly personal and powerful reflections and contributions to their works […]. The room devoted to Abstractions includes 37 artists: most of them are being exhibited together for the first time, and we will learn from these unforeseen juxtapositions in the flesh, which will then hopefully point towards new connections, associations, and parallels much beyond the rather straightforward categories that I have proposed. […]»

Artists from Singapore and Korea have been brought into this section, given that at the time they were part of the so-called Third World. In a similar manner, Selwyn Wilson and Sandy Adsett, from Aotearoa/New Zealand, have been brought into this Nucleo Storico as they are historical Maori artists.

«[…] A third room in the Nucleo Storico is dedicated to the worldwide Italian artistic diaspora in the 20th century: Italian artists who travelled and moved abroad, developing their careers in Africa, Asia, Latin America, as well as in the rest of Europe and the United States, becoming embedded in local cultures—and who often played significant roles in the development of the narratives of modernism beyond Italy. This room will feature works by 40 artists who are first- or second-generation Italians, exhibited in Lina Bo Bardi’s glass easel display system (Bo Bardi herself an Italian who moved to Brazil, and who won the 2021 Biennale Architettura’s Special Golden Lion for Lifetime Achievement in Memoriam).»

«Two quite different but related elements have emerged – underlines Pedrosa – rather organically in the research and have been developed, appearing as leitmotivs throughout the International Exhibition. The first one is textiles, which have been explored by many artists in the show in multiple ways, from key historical figures in the Nucleo Storico, to many artists in the Nucleo Contemporaneo. […] These works reveal an interest in craft, tradition, and the handmade, and in techniques that were at times considered other or foreign, outsider or strange in the larger field of fine arts. […] A second motif is artists—artists related by blood, many of them Indigenous. […] Again tradition plays an important role here: the transmission of knowledge and practices from father or mother to son or daughter or among siblings and relatives.»

There’s a lot more about this huge art exhibition on the Venice Biennale website but this is enough to give you a sense of its size and scope, and of how the work Elston describes fits into the 2024 exhibition theme.

Gale P. Elston’s April 12, 2024 email announced an exhibition she curated, which is being held on site during the 2024 Venice Biennale (Note 1: I’ve published too late for the opening reception but there’s more to Elston’s curation than a reception; Note 2: There is an art/science aspect to the work from artist China Blue),

Hospitality in the Pluriverse, curated by Gale Elston during the 60th edition of the Venice Biennial from April 16 to May 4, 2024.

The Opening Reception will be held April 16th, 2024 from 5-7 pm

HOSPITALITY IN THE PLURIVERSE

JEREMY DENNIS

ANITA GLESTA

ANN MCCOY

WARREN NEIDICH

ILONA RICH

Corte de Ca’ Sarasina, Castello 1199, Venezia, IT, 30122

April 16 to May 4, 2024

OPENING RECEPTION: April 16, 5 to 7 PM

Gallery hours: Tuesday-Saturday, 10 AM-6 PM

Performances by CHINA BLUE curated by Elga Wimmer

April 16, 18 and 19 at 6 PM

RAINER GANAHL, Requiem, performed April 17 at 6 PM

This exhibition includes five artists who explore the political, historical, aesthetic, physical, and epistemological dimensions of hospitality and its conflicts. Based upon the analysis of Jacques Derrida in his Of Hospitality, this exhibition scrutinizes the reaction of the host to alterity or otherness.

Each artist examines various questions surrounding the encounter of a foreigner and their host sovereign using a variety of media such as painting, photography, sculpture, and animation.

In discussion with Adriano Pedrosa’s exhibition Foreigners Everywhere, the exhibition Hospitality in the Pluriverse understands the complexity of immigration and raises the question of what hospitality is and when and how it should be extended to the stranger, the foreigner, the “other”.

On the one hand, the devastating effects of global inequality, climate change (climate refugees), and the political pressures they create have led to mass migration and political chaos. On the other hand, the richness of the contributions of the other, in the form of cultural and epistemological multiplicity, is invaluable.

Jeremy Dennis, First Nation artist and Tribal Member of the Shinnecock Indian Nation in Southampton, NY, uses staging and computer-assisted techniques to create unusual color photographs that portray indigenous identity, culture, and assimilation. His photographs challenge how indigenous people have been presented in American Westerns, while also empowering them through a haunting zombie trope that establishes the power of ancestral knowledge as a means of resistance.

Ann McCoy, a New York-based sculptor, painter, and art critic, and Editor-at-Large for the Brooklyn Rail, includes a new drawing from her recent Guggenheim Fellowship exploring the fairy tale of a wolf in her father’s silver, gold and tungsten mill. The fairy tale is based on a historic site where many Irish immigrant workers died and expresses the tragedy using Jungian and alchemical references.

Warren Neidich’s work Pluriverse engages with the concept of cognitive justice. As Boaventura de Sousa Santos has said, there can be no social justice without cognitive justice. Cognitive justice includes the right of different traditions of knowledge, and the cultural practices they are engaged with, to co-exist without duress. Especially relevant for us here are those forms of knowledge that have evolved in the so-called enlightened global North, Indigenous knowledges, and those in the subaltern global South and Asia. Pluriverse is an expression that is inclusive of these diverse epistemologies. We don’t want to live in a normative, homogeneous Universe but rather in a heterogeneous and multiplicitous Pluriverse.

Anita Glesta depicts the non-human foreigner (a coronavirus moving through the body like a bug or a butterfly) set to a soundtrack by Hildegard von Bingen, the medieval abbess and composer. Glesta’s video was developed on a Fellowship with The ARC Laureate Felt Experience & Empathy Lab to research how anxiety affects our nervous system. As an extension of the pandemic series, her animations invite the viewer to experience how humans process fear and anxiety in their bodies.

Spanish artist Ilona Rich’s work continues the theme of what it is to be a foreigner on a psychological level. Her colorful sculptures describe a dystopian view of the commonplace and the everyday.

Her work shows us a person who feels like a stranger in their own skin: anxious, precarious, not normative. Her dogs have two heads and the many feet of a centipede. Her sculpture Wheel of Fortune, which will be displayed, posits that fate is contingent on chance and that our roles as host or foreigner are subject to rapid, unexpected change.

The exhibition offers a dizzying study of alterity on the biological (Glesta), social (Dennis), historical (McCoy), cognitive (Neidich), and personal (Rich) levels. The viewer will come away with an expanded and enriched view of what it means to be a foreigner, and the exhibition asks what contingencies, if any, should accompany hospitality.

— Gale Elston

China Blue, Saturn Walk: Embodying Listening during the 2024 Venice Biennial with (Re)Create [emphasis mine]

Project Space Venice, curated by Elga Wimmer.

US/Canadian artist China Blue creates art performances that give physical expression to sound, based on her interest in connecting art and science.

For her 2024 Venice exhibition, Saturn Walk: Embodying Listening by China Blue, performers and visitors walk in a labyrinth to a composition created by her and Lance Massey. The work is based on the sonics of Saturn’s rings, which China Blue and Dr. Seth Horowitz discovered as a result of a NASA grant to explore them.

In Saturn Walk: Embodying Listening for the (Re)Create Project Space Venice, the artist invites viewers to experience the sound walk following the dance performance. The dancers include Andrea Nann and Jennifer Dahl, Canada, and Laura Coloman, UK. A trace of China Blue’s performance, an artwork, Celestial Pearls, based on 16 of Saturn’s 100+ moons, will remain on view at (Re)Create Project Space Venice.

Austrian artist Rainer Ganahl performs his work Requiem in memoriam of Russian dissident Alexei Navalny.

It seems like you might need the full seven months to fully appreciate the work on display at the 2024 Venice Biennale.

Invitation to collect data during April 8, 2024 eclipse for US National Aeronautics and Space Administration (NASA)

An April 2, 2024 news item on phys.org is, in fact, an open invitation to participate in data collection for NASA during the April 8, 2024 eclipse,

On April 8, 2024, as the moon passes between the sun and Earth, thousands of amateur citizen scientists will measure air temperatures and snap pictures of clouds. The data they collect will aid researchers who are investigating how the sun influences climates in different environments.

Among those citizen scientists are the fifth- and sixth-grade students at Alpena Elementary in northwest Arkansas. In the weeks leading up to the eclipse, these students are visiting the school’s weather station 10 times a day to collect temperature readings and monitor cloud cover. They will then upload the data to a phone-based app that’s part of a NASA-led program called GLOBE, short for Global Learning and Observations to Benefit the Environment.

The goal, according to Alpena Elementary science and math teacher Roger Rose, is to “make science and math more real” for his students. “It makes them feel like they’re doing something that’s important and worthwhile.”

The GLOBE eclipse tool is a small part of the much broader GLOBE project, through which students and citizen scientists collect data on plants, soil, water, the atmosphere, and even mosquitoes. Contributors to the eclipse project will only need a thermometer and a smartphone with the GLOBE Observer app downloaded. They can access the eclipse tool in the app. [emphases mine]
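For readers who like to see what a single reading might amount to as data, here’s a minimal, purely illustrative sketch in Python. The field names, the record structure, and the serialization step are my own assumptions for illustration; this is not the actual GLOBE Observer app or its data schema (the app handles the real uploads).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EclipseObservation:
    """One hypothetical air-temperature/cloud-cover reading.

    Field names are illustrative only; they are not the real
    GLOBE Observer data schema.
    """
    latitude: float        # decimal degrees
    longitude: float       # decimal degrees
    timestamp_utc: str     # ISO 8601, e.g. "2024-04-08T18:42:00+00:00"
    air_temp_c: float      # thermometer reading in Celsius
    cloud_cover_pct: int   # visual estimate, 0-100

def record_observation(lat, lon, temp_c, cloud_pct):
    """Build one reading and serialize it locally; in practice the app uploads it."""
    obs = EclipseObservation(
        latitude=lat,
        longitude=lon,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        air_temp_c=temp_c,
        cloud_cover_pct=cloud_pct,
    )
    return json.dumps(asdict(obs))

# Example: a reading taken in northwest Arkansas shortly before totality.
print(record_observation(36.0, -93.3, temp_c=21.5, cloud_pct=10))
```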

An April 1, 2024 NASA article by James Riordon, which originated the news item, provides more information about the GLOBE program and the hopes for the April 8, 2024 eclipse initiative,

This is not the first time the GLOBE eclipse tool has been deployed in North America. During the 2017 North American eclipse, NASA researchers examined the relationship between clouds and air temperature and found that temperature swings during the eclipse were greatest in areas with less cloud cover, while temperature fluctuations in cloudier regions were more muted. It’s a finding that would have been difficult, perhaps impossible, without the assistance of numerous amateur observers along the eclipse path, said Marilé Colón Robles, a meteorologist based at NASA’s Langley Research Center in Hampton, Virginia, and the GLOBE project scientist overseeing the cloud study portion of the project.

GLOBE program volunteers across North America uploaded data coinciding with the August 21, 2017 event to this map. A high concentration of observers makes the path of totality in the western part of the U.S. stand out. Credit: NASA GLOBE program

The number of weather stations along this year’s eclipse path is limited, and while satellites give us a global view, they can’t provide the same level of detail as people on the ground, said Ashlee Autore, a NASA Langley data scientist who will be conducting a follow-up to the 2017 study. “The power of citizen science is that people make the observations, and they can move.”

It’s still unclear how temperature fluctuations during a total eclipse compare across different climate regions, Colón Robles said. “This upcoming eclipse is passing through desert regions, mountainous regions, as well as more moist regions near the oceans.” Acquiring observations across these areas, she said, “will help us dig deeper into questions about regional connections between cloud cover and ground-level temperatures.” The studies should give scientists a better handle on the flow of energy from the Sun that’s crucial for understanding climate.

In many areas, citizen scientists are expected to gather en masse. “We’re inviting basically all of El Paso to campus,” said geophysicist and GLOBE partner John Olgin of El Paso Community College in Texas. The area will experience the eclipse in near totality, with about 80% of the Sun covered at the peak. It’s enough to make for an engaging event involving citizen scientists from the U.S. and Juarez, Mexico, just across the Rio Grande. 

Just a few minutes of midday darkness will have the long-term benefits of increasing awareness of NASA citizen science programs, Olgin said: “It’s going to inspire people to say, ‘Hey look, you can actually do stuff with NASA.’”

More than 30 million people live along the path of the 2024 eclipse, and hundreds of millions more will see a partial eclipse. It will be another 20 years before so many people in North America experience a total solar eclipse again.

With this in mind, Colón Robles has a piece of advice: As the Moon actively blocks the Sun, set your phone and thermometer aside, and marvel at one of the most extraordinary astronomical events of your lifetime.

Visit NASA’s Citizen Science page to learn how you can help NASA scientists study the Earth during eclipses and all year round. The GLOBE Program page provides connections to communities of GLOBE participants in 127 countries, access to data for retrieval and analysis, a roadmap for new participants, and other resources.

For anyone who wants to experience all of the ways that NASA has made their citizen science April 2024 eclipse projects accessible, there’s NASA’s ‘general eclipse’ webpage.

Graphene goes to the moon

The people behind the European Union’s Graphene Flagship programme (if you need a brief explanation, keep scrolling down to the “What is the Graphene Flagship?” subhead) and the United Arab Emirates have got to be very excited about the announcement made in a November 29, 2022 news item on Nanowerk (Note: Canadians, too, have reason to be excited; as of April 3, 2023, Canadian astronaut Jeremy Hansen has been selected to be part of the team on NASA’s [US National Aeronautics and Space Administration] Artemis II mission to orbit the moon, per an April 3, 2023 CBC news online article by Nicole Mortillaro),

Graphene Flagship Partners University of Cambridge (UK) and Université Libre de Bruxelles (ULB, Belgium) paired up with the Mohammed bin Rashid Space Centre (MBRSC, United Arab Emirates), and the European Space Agency (ESA) to test graphene on the Moon. This joint effort sees the involvement of many international partners, such as Airbus Defense and Space, Khalifa University, Massachusetts Institute of Technology, Technische Universität Dortmund, University of Oslo, and Tohoku University.

The Rashid rover is planned to be launched on 30 November 2022 [Note: the launch appears to have occurred on December 11, 2022; keep scrolling for more about that] from Cape Canaveral in Florida and will land on a geologically rich and, as yet, only remotely explored area on the Moon’s nearside – the side that always faces the Earth. During one lunar day, equivalent to approximately 14 days on Earth, Rashid will move on the lunar surface investigating interesting geological features.

A November 29, 2022 Graphene Flagship press release (also on EurekAlert), which originated the news item, provides more details,

The Rashid rover wheels will be used for repeated exposure of different materials to the lunar surface. As part of this Material Adhesion and abrasion Detection experiment, graphene-based composites on the rover wheels will be used to understand if they can protect spacecraft against the harsh conditions on the Moon, and especially against regolith (also known as ‘lunar dust’).

Regolith is made of extremely sharp, tiny and sticky grains and, since the Apollo missions, it has been one of the biggest challenges lunar missions have had to overcome. Regolith is responsible for mechanical and electrostatic damage to equipment, and is therefore also hazardous for astronauts. It clogs spacesuits’ joints, obscures visors, erodes spacesuits and protective layers, and is a potential health hazard.  

University of Cambridge researchers from the Cambridge Graphene Centre produced graphene/polyether ether ketone (PEEK) composites. The interaction of these composites with the Moon regolith (soil) will be investigated. The samples will be monitored via an optical camera, which will record footage throughout the mission. ULB researchers will gather information during the mission and suggest adjustments to the path and orientation of the rover. Images obtained will be used to study the effects of the Moon environment and the regolith abrasive stresses on the samples.

This moon mission comes soon after the ESA announcement of the 2022 class of astronauts, including the Graphene Flagship’s own Meganne Christian, a researcher at Graphene Flagship Partner the Institute of Microelectronics and Microsystems (IMM) at the National Research Council of Italy.

“Being able to follow the Moon rover’s progress in real time will enable us to track how the lunar environment impacts various types of graphene-polymer composites, thereby allowing us to infer which of them is most resilient under such conditions. This will enhance our understanding of how graphene-based composites could be used in the construction of future lunar surface vessels,” says Sara Almaeeni, MBRSC science team lead, who designed Rashid’s communication system.

“New materials such as graphene have the potential to be game changers in space exploration. In combination with the resources available on the Moon, advanced materials will enable radiation protection, electronics shielding and mechanical resistance to the harshness of the Moon’s environment. The Rashid rover will be the first opportunity to gather data on the behavior of graphene composites within a lunar environment,” says Carlo Iorio, Graphene Flagship Space Champion, from ULB.

Leading up to the Moon mission, a variety of inks containing graphene and related materials, such as conducting graphene, insulating hexagonal boron nitride and graphene oxide, semiconducting molybdenum disulfide, prepared by the University of Cambridge and ULB were also tested on the MAterials Science Experiment Rocket 15 (MASER 15) mission, successfully launched on the 23rd of November 2022 from the Esrange Space Center in Sweden. This experiment, named ARLES-2 (Advanced Research on Liquid Evaporation in Space) and supported by European and UK space agencies (ESA, UKSA) included contributions from Graphene Flagship Partners University of Cambridge (UK), University of Pisa (Italy) and Trinity College Dublin (Ireland), with many international collaborators, including Aix-Marseille University (France), Technische Universität Darmstadt (Germany), York University (Canada), Université de Liège (Belgium), University of Edinburgh and Loughborough.

This experiment will provide new information about the printing of GRM inks in weightless conditions, contributing to the development of new additive manufacturing procedures in space, such as 3D printing. Such procedures are key for space exploration, during which replacement components are often needed and could be manufactured from functional inks.

“Our experiments on graphene and related materials deposition in microgravity pave the way for additive manufacturing in space. The study of the interaction of Moon regolith with graphene composites will address some key challenges brought about by the harsh lunar environment,” says Yarjan Abdul Samad, from the Universities of Cambridge and Khalifa, who prepared the samples and coordinated the interactions with the United Arab Emirates.

“The Graphene Flagship is spearheading the investigation of graphene and related materials (GRMs) for space applications. In November 2022, we had the first member of the Graphene Flagship appointed to the ESA astronaut class. We saw the launch of a sounding rocket to test printing of a variety of GRMs in zero gravity conditions, and the launch of a lunar rover that will test the interaction of graphene-based composites with the Moon surface. Composites, coatings and foams based on GRMs have been at the core of the Graphene Flagship investigations since its beginning. It is thus quite telling that, leading up to the Flagship’s 10th anniversary, these innovative materials are now to be tested on the lunar surface. This is timely, given the ongoing effort to bring astronauts back to the Moon, with the aim of building lunar settlements. When combined with polymers, GRMs can tailor the mechanical, thermal, and electrical properties of their host matrices. These pioneering experiments could pave the way for widespread adoption of GRM-enhanced materials for space exploration,” says Andrea Ferrari, Science and Technology Officer and Chair of the Management Panel of the Graphene Flagship.

Caption: The MASER15 launch Credit: John-Charles Dupin

A pioneering graphene work and a first for the Arab World

A December 11, 2022 news item on Alarabiya news (and on CNN) describes the ‘graphene’ launch, which also marked the Arab world’s first mission to the moon,

The United Arab Emirates’ Rashid Rover – the Arab world’s first mission to the Moon – was launched on Sunday [December 11, 2022], the Mohammed bin Rashid Space Center (MBRSC) announced on its official Twitter account.

The launch came after it was previously postponed for “pre-flight checkouts.”

A SpaceX Falcon 9 rocket carrying the UAE’s Rashid rover successfully took off from Cape Canaveral, Florida.

The Rashid rover – built by Emirati engineers from the UAE’s Mohammed bin Rashid Space Center (MBRSC) – is to be sent to regions of the Moon unexplored by humans.

What is the Graphene Flagship?

In 2013, the Graphene Flagship was chosen as one of two FET (Future and Emerging Technologies) funding projects (the other being the Human Brain Project), each receiving €1 billion to be paid out over 10 years. In effect, it’s a science funding programme specifically focused on research, development, and commercialization of graphene (a two-dimensional [it has length and width but no depth] material made of carbon atoms).

You can find out more about the flagship and about graphene here.

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing as the words are sometimes used as synonyms and sometimes as distinctions. We do it all the time in all sorts of conversations but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of locations/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.
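To make the “permanent receipt” idea from the quoted passage a little more concrete, here’s a minimal, hypothetical sketch in Python. It’s a toy illustration of the concept only; real NFTs live on a blockchain and follow standards such as ERC-721, and nothing below reflects how any actual platform implements redemption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OwnershipRecord:
    """A toy stand-in for an NFT: a record of who owns which virtual good.

    Real NFTs are entries on a blockchain following standards like ERC-721;
    this only illustrates the 'permanent receipt' idea.
    """
    token_id: str      # unique ID of the virtual good (e.g. a shirt)
    owner: str         # wallet address or user ID of the current owner
    metadata_uri: str  # where the item's description/artwork lives

# A shared "ledger" that both Platform A and Platform B can read.
ledger = {
    "shirt-0042": OwnershipRecord("shirt-0042", "wallet-alice",
                                  "https://example.com/shirt-0042.json"),
}

def can_redeem(token_id: str, user: str) -> bool:
    """Platform B checks the same ledger that Platform A wrote to."""
    record = ledger.get(token_id)
    return record is not None and record.owner == user

print(can_redeem("shirt-0042", "wallet-alice"))  # True: Alice's shirt works on Platform B
print(can_redeem("shirt-0042", "wallet-bob"))    # False: Bob never bought it
```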

Facebook’s multiverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned Facebook is once again having some credibility issues according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international legal firm, Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
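For anyone wondering what “pose-aware object substitution” might look like in outline, here’s a purely conceptual sketch in Python. It is not the TransforMR authors’ code or pipeline; the detector, the asset catalogue, and the “rendering” step are stand-ins I’ve invented to illustrate the general idea of swapping a detected real object for a virtual one placed at the same pose.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str    # e.g. "car"
    pose: Tuple   # hypothetical 6-DoF pose: (x, y, z, roll, pitch, yaw)

def detect_objects_with_pose(frame: dict) -> List[DetectedObject]:
    # Stand-in for a real 3D object detector running on a camera frame.
    return [DetectedObject(o["label"], o["pose"]) for o in frame["objects"]]

def pick_substitute(label: str, theme: str):
    # Map a real-world object class to a themed virtual asset (invented catalogue).
    catalogue = {("car", "jurassic"): "triceratops.glb",
                 ("person", "jurassic"): "velociraptor.glb"}
    return catalogue.get((label, theme))

def compose_frame(frame: dict, theme: str = "jurassic") -> list:
    # Replace each detected object with a virtual asset at the same pose;
    # a real system would composite these into the video see-through view.
    placements = []
    for obj in detect_objects_with_pose(frame):
        asset = pick_substitute(obj.label, theme)
        if asset:
            placements.append((asset, obj.pose))
    return placements

# Toy frame with one detected car; the "render" here is just a list of placements.
toy_frame = {"objects": [{"label": "car", "pose": (1.0, 0.0, 4.2, 0, 0, 90)}]}
print(compose_frame(toy_frame))
```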

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse that would effectively replace the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Space and sound (music from the Milky Way)

A May 17, 2021 posting on the Canadian Broadcasting Corporation (CBC) Radio Ideas programme blog describes space data sonifications and visualizations, and hosts embedded videos and audio clips,

After years of attempts and failures to get a microphone to Mars, NASA’s [US National Aeronautics and Space Administration] latest rover, Perseverance, succeeded. It landed in February carrying two microphones.

For Jason Achilles Mezilis, a musician and record producer who has also worked for NASA, listening to the haunting Martian wind was an emotional experience.

“I’m in this bar half drunk, and I go over to the corner and I listen to it on my cellphone and … I broke down.”

The atmosphere of Mars is far thinner than Earth’s, but it still has enough air to transmit sound.

Ben Burtt, an Oscar-winning sound designer, editor and director, made the sounds of cinematic space fantasy — from Star Wars to WALL-E to Star Trek. But he’s also deeply interested in the sound of actual space reality.

“All sound is a form of wind, really. It’s a puff of air molecules moving. And when I heard the sound, I thought: ‘Well, you know, I’ve heard this many times in my headphones on recording trips,'” Burtt said.

SYSTEM Sounds, founded by University of Toronto astrophysicist and musician Matt Russo, translates data from space into music. 

Planets or moons sometimes fall into what’s called “orbital resonance,” where two or more bodies pull each other into a regular rhythm. One example is the three inner moons of Jupiter: Ganymede, Europa, and Io. 

“The rhythm is very similar to what a drummer might play. There’s a very simple regularity,” Russo said.

“And there’s something about our ears and our auditory system that finds that pleasing, finds repeating rhythms with simple ratios between them pleasing or natural sounding. It’s predictable. So it gives you something to kind of latch on to emotionally.”

Russo created this tool to illustrate the musical rhythm of the Galilean moons. 
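For readers who like to tinker, here’s a minimal Python sketch (my own illustration, not Russo’s actual tool) of why that near 1:2:4 resonance sounds like a simple drum pattern; the orbital periods are approximate published values and the beat mapping is invented for this example,

```python
# A minimal sketch (not SYSTEM Sounds' actual code) of turning the orbital
# periods of Jupiter's inner Galilean moons into a repeating rhythm.
# Periods are approximate values in Earth days; the beat mapping is illustrative.

IO_PERIOD = 1.769        # days
EUROPA_PERIOD = 3.551    # days
GANYMEDE_PERIOD = 7.155  # days

def beat_times(period_days, total_days):
    """Return the times (in days) at which a moon completes successive orbits."""
    times = []
    t = period_days
    while t <= total_days:
        times.append(round(t, 2))
        t += period_days
    return times

if __name__ == "__main__":
    total = 2 * GANYMEDE_PERIOD  # roughly two 'bars' of the pattern
    for name, period in [("Io", IO_PERIOD),
                         ("Europa", EUROPA_PERIOD),
                         ("Ganymede", GANYMEDE_PERIOD)]:
        print(f"{name:8s} beats at days: {beat_times(period, total)}")
    # The near 1:2:4 ratio of the periods is what produces the simple,
    # drummer-like regularity described above.
```

Printing the beat times side by side makes the 1:2:4 pattern easy to see (and to tap out).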

During the pandemic, scientists at NASA, with the help of SYSTEM Sounds, tried to find new ways of connecting people with the beauty of space. The result was “sonic visualizations,” translating data captured by telescopes into sound instead of pictures.

Most images of space come from data translated into colours, such as Cassiopeia A, the remains of an exploded star. 

A given colour is usually assigned to the electromagnetic signature of each chemical in the dust cloud. But instead of assigning a colour, a musical note can be assigned, allowing us to hear Cassiopeia A instead of just seeing it.
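To make the colour-versus-note idea concrete, here’s a small, hypothetical Python sketch (not NASA’s or SYSTEM Sounds’ actual pipeline); the element names, note assignments, and intensity values are invented purely for illustration,

```python
# A hypothetical sketch of sonification: assigning a musical note to each
# chemical signature instead of a colour. Element names, note choices, and
# intensities are invented for illustration; real values would come from
# telescope data.

NOTE_FOR_ELEMENT = {"silicon": 60, "sulfur": 64, "calcium": 67, "iron": 72}  # MIDI notes

def midi_to_frequency(note):
    """Convert a MIDI note number to a frequency in hertz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def sonify_pixel(intensities):
    """Turn {element: relative intensity} into (element, frequency, loudness) events."""
    events = []
    for element, intensity in intensities.items():
        frequency = midi_to_frequency(NOTE_FOR_ELEMENT[element])
        events.append((element, round(frequency, 1), round(intensity, 2)))
    return events

if __name__ == "__main__":
    # One made-up pixel of a Cassiopeia A-style image.
    pixel = {"silicon": 0.8, "sulfur": 0.3, "calcium": 0.5, "iron": 0.9}
    for element, frequency, loudness in sonify_pixel(pixel):
        print(f"{element:8s} -> {frequency:7.1f} Hz at relative loudness {loudness}")
```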

There are several videos and the Ideas radio interview embedded in the May 17, 2021 posting. Should you be interested, you can find SYSTEM Sounds here.

You will find a number of previous postings (use the search term ‘data sonification’); the earliest concerning ‘space music’ is from February 7, 2014. You’ll also find Matt Russo, the TRAPPIST-1 planetary system, and music in a May 11, 2017 posting.

Moon dust at the nanoscale

Before getting to the moon dust, it seems the US National Institute of Standards and Technology (NIST) has undergone a communications strategy transformation. For example, there’s this whimsical video about NIST’s latest work on moon dust,

An April 28, 2021 news item on phys.org offers a little more whimsy and moon dust from the NIST,

Like a chameleon of the night sky, the moon often changes its appearance. It might look larger, brighter or redder, for example, due to its phases, its position in the solar system or smoke in Earth’s atmosphere. (It is not made of green cheese, however.)

Another factor in its appearance is the size and shape of moon dust particles, the small rock grains that cover the moon’s surface. Researchers at the National Institute of Standards and Technology (NIST) are now measuring tinier moon dust particles than ever before, a step toward more precisely explaining the moon’s apparent color and brightness. This in turn might help improve tracking of weather patterns and other phenomena by satellite cameras that use the moon as a calibration source.

An April 28, 2021 US NIST news release (also on EurekAlert), which originated the news item, provides more technical detail,

NIST researchers and collaborators have developed a complex method of measuring the exact three-dimensional shape of 25 particles of moon dust collected during the Apollo 11 mission in 1969. The team includes researchers from the Air Force Research Laboratory, the Space Science Institute and the University of Missouri-Kansas City.

These researchers have been studying moon dust for several years. But as described in a new journal paper, they now have X-ray nano computed tomography (XCT), which allowed them to examine the shape of particles as small as 400 nanometers (billionths of a meter) in length.

The research team developed a method for both measuring and computationally analyzing how the dust particle shapes scatter light. Follow-up studies will include many more particles, and more clearly link their shape to light scattering. Researchers are especially interested in a feature called “albedo,” moonspeak for how much light or radiation it reflects.

The recipe for measuring the Moon’s nano dust is complicated. First you need to mix it with something, as if making an omelet, and then turn it on a stick for hours like a rotisserie chicken. Straws and dressmakers’ pins are involved too.

“The procedure is elaborate because it is hard to get a small particle by itself, but one needs to measure many particles for good statistics, since they are randomly distributed in size and shape,” NIST Fellow Ed Garboczi said.

“Since they are so tiny and because they only come in powders, a single particle needs to be separated from all the others,” Garboczi continued. “They are too small to do that by hand, at least not in any quantity, so they must be carefully dispersed in a medium. The medium must also freeze their mechanical motion, in order to be able to get good XCT images. If there is any movement of the particles during the several hours of the XCT scan, then the images will be badly blurred and generally not usable. The final form of the sample must also be compatible with getting the X-ray source and camera close to the sample while it rotates, so a narrow, straight cylinder is best.”

The procedure involved stirring the Apollo 11 material into epoxy, which was then dripped over the outside of a tiny straw to get a thin layer. Small pieces of this layer were then removed from the straw and mounted on dressmakers’ pins, which were inserted into the XCT instrument.

The XCT machine generated X-ray images of the samples that were reconstructed by software into slices. NIST software stacked the slices into a 3D image and then converted it into a format that classified units of volume, or voxels, as either inside or outside the particles. The 3D particle shapes were identified computationally from these segmented images. The voxels making up each particle were saved in separate files that were forwarded to software for solving electromagnetic scattering problems in the visible to the infrared frequency range.

The results indicated that the color of light absorbed by a moon dust particle is highly sensitive to its shape and can be significantly different from that of spherical or ellipsoidal particles of the same size. That doesn’t mean too much to the researchers — yet.

“This is our first look at the influence of actual shapes of lunar particles on light scattering and focuses on some fundamental particle properties,” co-author Jay Goguen of the Space Science Institute said. “The models developed here form the basis of future calculations that could model observations of the spectrum, brightness and polarization of the moon’s surface and how those observed quantities change during the moon’s phases.”

The authors are now studying a wider range of moon dust shapes and sizes, including particles collected during the Apollo 14 mission in 1971. The moon dust samples were loaned to NIST by NASA’s Curation and Analysis Planning Team for Extraterrestrial Materials program.
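For the technically curious, the voxel-classification step described in the news release can be sketched in a few lines of Python; this is a minimal illustration using connected-component labelling on made-up data, not NIST’s actual software,

```python
# A minimal illustration (not NIST's actual software) of the segmentation step
# described above: classify voxels as inside/outside particles, label the
# connected components, and save each particle's voxels to its own file.

import numpy as np
from scipy import ndimage

def segment_particles(volume, threshold):
    """Binarize a reconstructed 3D volume and label each connected particle."""
    inside = volume > threshold                  # True for voxels inside a particle
    labels, n_particles = ndimage.label(inside)  # connected-component labelling
    return labels, n_particles

if __name__ == "__main__":
    # Made-up stand-in for a stacked XCT reconstruction (64 x 64 x 64 voxels).
    rng = np.random.default_rng(0)
    volume = rng.random((64, 64, 64))
    labels, n = segment_particles(volume, threshold=0.995)
    print(f"Found {n} candidate particles")
    # One file per particle, so each shape can be handed to the
    # electromagnetic-scattering software mentioned in the release.
    for i in range(1, n + 1):
        coordinates = np.argwhere(labels == i)
        np.save(f"particle_{i:04d}.npy", coordinates)
```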

Here’s a (2nd) link to and a citation for the paper,

Optical Scattering Characteristics of 3-D Lunar Regolith Particles Measured Using X-Ray Nano Computed Tomography by Somen Baidya, Mikolas Melius, Ahmed M. Hassan, Andrew Sharits, Ann N. Chiaramonti, Thomas Lafarge, Jay D. Goguen and Edward J. Garboczi. IEEE Geoscience and Remote Sensing Letters. DOI: 10.1109/LGRS.2021.3073344. Published online April 27, 2021

This paper is behind a paywall.

News from the Canadian Light Source (CLS), Canadian Science Policy Conference (CSPC) 2020, the International Symposium on Electronic Arts (ISEA) 2020, and HotPopRobot

I have some news about conserving art, early bird registration deadlines for two events, and, finally, an announcement about contest winners.

Canadian Light Source (CLS) and modern art

Rita Letendre. Victoire [Victory], 1961. Oil on canvas, Overall: 202.6 × 268 cm. Art Gallery of Ontario. Gift of Jessie and Percy Waxer, 1974, donated by the Ontario Heritage Foundation, 1988. © Rita Letendre L74.8. Photography by Ian Lefebvre

This is one of three pieces by Rita Letendre that underwent chemical mapping, according to an August 5, 2020 CLS news release by Victoria Martinez (also received via email),

Research undertaken at the Canadian Light Source (CLS) at the University of Saskatchewan was key to understanding how to conserve experimental oil paintings by Rita Letendre, one of Canada’s most respected living abstract artists.

The work done at the CLS was part of a collaborative research project between the Art Gallery of Ontario (AGO) and the Canadian Conservation Institute (CCI) that came out of a recent retrospective Rita Letendre: Fire & Light at the AGO. During close examination, Meaghan Monaghan, paintings conservator from the Michael and Sonja Koerner Centre for Conservation, observed that several of Letendre’s oil paintings from the fifties and sixties had suffered significant degradation, most prominently, uneven gloss and patchiness, snowy crystalline structures coating the surface known as efflorescence, and cracking and lifting of the paint in several areas.

Kate Helwig, Senior Conservation Scientist at the Canadian Conservation Institute, says these problems are typical of mid-20th century oil paintings. “We focused on three of Rita Letendre’s paintings in the AGO collection, which made for a really nice case study of her work and also fits into the larger question of why oil paintings from that period tend to have degradation issues.”

Growing evidence indicates that paintings from this period have experienced these problems due to the combination of the experimental techniques many artists employed and the additives paint manufacturers had begun to use.

In order to determine more precisely how these factors affected Letendre’s paintings, the research team members applied a variety of analytical techniques, using microscopic samples taken from key points in the works.

“The work done at the CLS was particularly important because it allowed us to map the distribution of materials throughout a paint layer such as an impasto stroke,” Helwig said. The team used Mid-IR chemical mapping at the facility, which provides a map of different molecules in a small sample.

For example, chemical mapping at the CLS allowed the team to understand the distribution of the paint additive aluminum stearate throughout the paint layers of the painting Méduse. This painting showed areas of soft, incompletely dried paint, likely due to the high concentration and incomplete mixing of this additive. 

The painting Victoire had a crumbling base paint layer in some areas and cracking and efflorescence at the surface in others. Infrared mapping at the CLS allowed the team to determine that excess free fatty acids in the paint were linked to both problems; where the fatty acids were found at the base they formed zinc “soaps” which led to crumbling and cracking, and where they had moved to the surface they had crystallized, causing the snowflake-like efflorescence.

AGO curators and conservators interviewed Letendre to determine what was important to her in preserving and conserving her works, and she highlighted how important an even gloss across the surface was to her artworks, and the philosophical importance of the colour black in her paintings. These priorities guided conservation efforts, while the insights gained through scientific research will help maintain the works in the long term.

In order to restore the black paint to its intended even finish for display, conservator Meaghan Monaghan removed the white crystallization from the surface of Victoire, but it is possible that it could begin to recur. Understanding the processes that lead to this degradation will be an important tool to keep Letendre’s works in good condition.

“The world of modern paint research is complicated; each painting is unique, which is why it’s important to combine theoretical work on model paint systems with this kind of case study on actual works of art,” said Helwig. The team hopes to collaborate on studying a larger cross section of Letendre’s paintings in oil and acrylic in the future to add to the body of knowledge.

Here’s a link to and a citation for the paper,

Rita Letendre’s Oil Paintings from the 1960s: The Effect of Artist’s Materials on Degradation Phenomena by Kate Helwig, Meaghan Monaghan, Jennifer Poulin, Eric J. Henderson & Maeve Moriarty. Studies in Conservation (2020): 1-15 DOI: https://doi.org/10.1080/00393630.2020.1773055 Published online: 06 Jun 2020

This paper is behind a paywall.

Canadian Science Policy Conference (CSPC) 2020

The latest news from the CSPC 2020 (November 16 – 20, with preconference events from Nov. 1 – 14) organizers is that registration is open and early birds have a deadline of September 27, 2020 (from an August 6, 2020 CSPC 2020 announcement received via email),

It’s time! Registration for the 12th Canadian Science Policy Conference (CSPC 2020) is open now. Early Bird registration is valid until Sept. 27th [2020].

CSPC 2020 is coming to your offices and homes:

Register for full access to 3 weeks of programming of the biggest science and innovation policy forum of 2020 under the overarching theme: New Decade, New Realities: Hindsight, Insight, Foresight.

2500+ Participants

300+ Speakers from five continents

65+ Panel sessions, 15 pre conference sessions and symposiums

50+ On demand videos and interviews with the most prominent figures of science and innovation policy 

20+ Partner-hosted functions

15+ Networking sessions

15 Open mic sessions to discuss specific topics

The virtual conference features an exclusive array of offerings:

3D Lounge and Exhibit area

Advance access to the Science Policy Magazine, featuring insightful reflections from the frontier of science and policy innovation

Many more

Don’t miss this unique opportunity to engage in the most important discussions of science and innovation policy with insights from around the globe, just from your office, home desk, or your mobile phone.

Benefit from significantly reduced registration fees for an online conference with an option for discount for multiple ticket purchases

Register now to benefit from the Early Bird rate!

The preliminary programme can be found here. This year there will be some discussion of a Canadian synthetic biology roadmap, presentations on various Indigenous concerns (mostly health), a climate challenge presentation focusing on Mexico and social vulnerability, and another on parallels between climate challenges and COVID-19. There are many presentations focused on COVID-19 and/or health.

There doesn’t seem to be much focus on cyber security and, given that we just lost two ice caps (see Brandon Spektor’s August 1, 2020 article [Two Canadian ice caps have completely vanished from the Arctic, NASA imagery shows] on the Live Science website), it’s surprising that there are no presentations concerning the Arctic.

International Symposium on Electronic Arts (ISEA) 2020

According to my latest information, the early bird rate for ISEA 2020 (Oct. 13 -18) ends on August 13, 2020. (My June 22, 2020 posting describes their plans for the online event.)

You can find registration information here.

Margaux Davoine has written up a teaser for the 2020 edition of ISEA in the form of an August 6, 2020 interview with Yan Breuleux. I’ve excerpted one bit,

Finally, thinking about this year’s theme [Why Sentience?], there might be something a bit ironic about exploring the notion of sentience (historically reserved for biological life, and quite a small subsection of it) through digital media and electronic arts. There’s been much work done in the past 25 years to loosen the boundaries between such distinctions: how do you imagine ISEA2020 helping in that?

The similarities shared between humans, animals, and machines are fundamental in cybernetic sciences. According to the founder of cybernetics Norbert Wiener, the main tenets of the information paradigm – the notion of feedback – can be applied to humans, animals as well as the material world. Famously, the AA predictor (as analysed by Peter Galison in 1994) can be read as a first attempt at human-machine fusion (otherwise known as a cyborg).

The infamous Turing test also tends to blur the lines between humans and machines, between language and informational systems. Second-order cybernetics are often associated with biologists Francisco Varela and Humberto Maturana. The very notion of autopoiesis (a system capable of maintaining a certain level of stability in an unstable environment) relates back to the concept of homeostasis formulated by Willam Ross [William Ross Ashby] in 1952. Moreover, the concept of “ecosystems” emanates directly from the field of second-order cybernetics, providing researchers with a clearer picture of the interdependencies between living and non-living organisms. In light of these theories, the absence of boundaries between animals, humans, and machines constitutes the foundation of the technosciences paradigm. New media, technological arts, virtual arts, etc., partake in the dialogue between humans and machines, and thus contribute to the prolongation of this paradigm. Frank Popper nearly called his book “Techno Art” instead of “Virtual Art”, in reference to technosciences (his editor suggested the name change). For artists in the technological arts community, Jakob von Uexkull’s notion of “human-animal milieu” is an essential reference. Also present in Simondon’s reflections on human environments (both natural and artificial), the notion of “milieu” is quite important in the discourses about art and the environment. Concordia University’s artistic community chose the concept of “milieu” as the rallying point of its research laboratories.

ISEA2020’s theme resonates particularly well with the recent eruption of processing and artificial intelligence technologies. For me, Sentience is a purely human and animal idea: machines can only simulate our ways of thinking and feeling. Partly in an effort to explore the illusion of sentience in computers, Louis-Philippe Rondeau, Benoît Melançon and I have established the Mimesis laboratory at NAD University. Processing and AI technologies are especially useful in the creation of “digital doubles”, “Vactors”, real-time avatar generation, Deep Fakes and new forms of personalised interactions.

I adhere to the epistemological position that the living world is immeasurable. Through their ability to simulate, machines can merely reduce complex logics to a point of understandability. The utopian notion of empathetic computers is an idea mostly explored by popular science-fiction movies. Nonetheless, research into computer sentience allows us to devise possible applications, explore notions of embodiment and agency, and thereby develop new forms of interaction. Beyond my own point of view, the idea that machines can somehow feel emotions gives artists and researchers the opportunity to experiment with certain findings from the fields of the cognitive sciences, computer sciences and interactive design. For example, in 2002 I was particularly marked by an immersive installation at Universal Exhibition in Neuchatel, Switzerland titled Ada: Intelligence Space. The installation comprised an artificial environment controlled by a computer, which interacted with the audience on the basis of artificial emotion. The system encouraged visitors to participate by intelligently analysing their movements and sounds. Another example, Louis-Philippe Demers’ Blind Robot (2012),  demonstrates how artists can be both critical of, and amazed by, these new forms of knowledge. Additionally, the 2016 BIAN (Biennale internationale d’art numérique), organized by ELEKTRA (Alain Thibault) explored the various ways these concepts were appropriated in installation and interactive art. The way I see it, current works of digital art operate as boundary objects. The varied usages and interpretations of a particular work of art allow it to be analyzed from nearly every angle or field of study. Thus, philosophers can ask themselves: how does a computer come to understand what being human really is?

I have yet to attend conferences or exchange with researchers on that subject. Given the sheer number of presentation propositions sent to ISEA2020, I have no doubt that the symposium will be the ideal context to reflect on the concept of Sentience and the many issues raised therein.

For the last bit of news.

HotPopRobot, one of six global winners of 2020 NASA SpaceApps COVID-19 challenge

I last wrote about HotPopRobot’s (Artash and Arushi with a little support from their parents) response to the 2020 NASA (US National Aeronautics and Space Administration) SpaceApps challenge in my July 1, 2020 post, Toronto COVID-19 Lockdown Musical: a data sonification project from HotPopRobot. (You’ll find a video of the project embedded in the post.)

Here’s more news from HotPopRobot’s August 4, 2020 posting (Note: Links have been removed),

Artash (14 years) and Arushi (10 years). Toronto.

We are excited to become the global winners of the 2020 NASA SpaceApps COVID-19 Challenge from among 2,000 teams from 150 countries. The six Global Winners will be invited to visit a NASA Rocket Launch site to view a spacecraft launch along with the SpaceApps Organizing team once travel is deemed safe. They will also receive an invitation to present their projects to NASA, ESA [European Space Agency], JAXA [Japan Aerospace Exploration Agency], CNES [Centre National D’Etudes Spatiales; France], and CSA [Canadian Space Agency] personnel. https://covid19.spaceappschallenge.org/awards

15,000 participants joined together to submit over 1400 projects for the COVID-19 Global Challenge that was held on 30-31 May 2020. 40 teams made it to the Global Finalists. Amongst them, 6 teams became the global winners!

The 2020 SpaceApps was an international collaboration between NASA, Canadian Space Agency, ESA, JAXA, CSA,[sic] and CNES focused on solving global challenges. During a period of 48 hours, participants from around the world were required to create virtual teams and solve any of the 12 challenges related to the COVID-19 pandemic posted on the SpaceApps website. More details about the 2020 SpaceApps COVID-19 Challenge:  https://sa-2019.s3.amazonaws.com/media/documents/Space_Apps_FAQ_COVID_.pdf

We have been participating in NASA Space Challenge for the last seven years since 2014. We were only 8 years and 5 years respectively when we participated in our very first SpaceApps 2014.

We have grown up learning more about space, tackling global challenges, making hardware and software projects, participating in meetings, networking with mentors and teams across the globe, and giving presentations through the annual NASA Space Apps Challenges. This is one challenge we look forward to every year.

It has been a fun and exciting journey meeting so many people and astronauts and visiting several fascinating places on the way! We hope more kids, youths, and families are inspired by our space journey. Space is for all and is yours to discover!

If you have the time, I recommend reading HotPopRobot’s August 4, 2020 posting in its entirety.