It’s like the flood gates have opened and I am being inundated with event notices. The latest is from Toronto’s (Canada) ArtSci Salon (again). From a September 21, 2022 notice (received via email),
Basic Necessities Connectivity and cultural creativity in Cuba
A public lecture by Nestor Siré With online participation by Steffen Köhn
Join me in welcoming Nestor Siré. Nestor Siré is a multimedia artist based in Cuba. His projects and collaborations explore unofficial methods for circulating information and goods, such as alternative forms of economic production, phenomena resulting from social creativity and recycling, piracy, and a-legal activities that benefit from loopholes. Siré will discuss some of his recent creative works in the Cuban context. His “Paquete Semanal” is an offline digital media circulation system based on in-person file sharing, created in response to connectivity and infrastructure failures in Cuba. “Basic Necessities”, a recent collaboration with Steffen Köhn, portrays the dynamics of the informal economy in Cuba as it unfolds in Telegram groups and analyses the eclectic and creative uses of product photography within this digital context. Köhn will join him in conversation via Zoom.
October 3, 2022 4:30-6:00 pm [ET] Room YH 245 Glendon Campus [York University] 2275 Bayview Ave North York, ON M4N 3M6 Directions
Nestor Siré (*1988), lives and works in Havana, Cuba. www.nestorsire.com Nestor Siré’s artistic practice intervenes directly in social contexts in order to analyze specific cultural phenomena, often engaging with the particular idiosyncrasies of digital culture in the Cuban context. His works have been shown in the Museo Nacional de Bellas Artes (Havana), Queens Museum (New York), Rhizome (New York), New Museum (New York), Hong-Gah Museum (Taipei), Museo de Arte Contemporáneo (Mexico City), Museo de Arte Contemporáneo, Santa Fe (Argentina), The Photographers’ Gallery (London), among other places. He has participated in events such as the Manifesta 13 Biennial (France), Gwangju Biennale (South Korea), Curitiba Biennial (Brazil), the Havana Biennial (Cuba) and the Asunción International Biennale (Paraguay), the Festival of New Latin American Cinema in Cuba and the Oberhausen International Festival of Short Film (Germany).
Steffen Köhn is a filmmaker, anthropologist and video artist who uses ethnography to understand contemporary sociotechnical landscapes. For his video and installation works he engages in local collaborations with gig workers, software developers, or science fiction writers to explore viable alternatives to current distributions of technological access and arrangements of power. His works have been shown at the Academy of the Arts Berlin, Kunsthaus Graz, Vienna Art Week, Hong Gah Museum Taipei, Lulea Biennial, The Photographers’ Gallery and the ethnographic museums of Copenhagen and Dresden. His films have been screened (among others) at the Berlinale, Rotterdam International Film Festival, and the World Film Festival Montreal.
I tried to find out if this event will be webcast or streamed but was unsuccessful. You can check the ArtSci Salon website, perhaps they’ll post something closer to the event date.
IHEX has nothing to do with high tech witches (sigh … mildly disappointing); it is the abbreviation for “Intelligent interfaces and Human factors in EXtended environments.” I got a June 29, 2022 announcement or call for papers via email,
International Workshop on Intelligent interfaces and Human factors in EXtended environments (IHEX) – SITIS 2022 16th international conference on Signal Image Technology & Internet based Systems, Dijon, France, October 19-21, 2022
Dear Colleagues, It is with great pleasure that we would like to invite you to send a contribution to the International Workshop on Intelligent interfaces and Human factors in EXtended environments (IHEX) at SITIS 2022 16th international conference on Signal Image Technology & Internet based Systems (Conference website: https://www.sitis-conference.org).
The workshop is about new approaches for designing and implementing intelligent eXtended Reality systems. Please find the call for papers below and forward it to colleagues who might be interested in contributing to the workshop. For any questions and information, please do not hesitate to get in touch.
Best Regards, Giuseppe Caggianese
CFP [Call for papers] ———- eXtended Reality is becoming more and more widespread; going beyond entertainment and cultural heritage fruition purposes, these technologies offer new challenges and opportunities also in educational, industrial and healthcare domains. The research community in this field deals with technological and human factors issues, presenting theoretical and methodological proposals for perception, tracking, interaction and visualization. Increasing attention is observed towards the use of machine learning and AI methodologies to perform data analysis and reasoning, manage a multimodal interaction, and ensure an adaptation to users’ needs and preferences. The workshop is aimed at investigating new approaches for the design and implementation of intelligent eXtended Reality systems. It intends to provide a forum to share and discuss not only technological and design advances but also ethical concerns about the implications of these technologies on changing social interactions, information access and experiences.
Topics for the workshop include, but are not limited to:
– Intelligent User Interfaces in eXtended environments – Computational Interaction for XR – Quality and User Experience in XR – Cognitive Models for XR – Semantic Computing in environments – XR-based serious games – Virtual Agents in eXtended environments – Adaptive Interfaces – Visual Reasoning – Content Modelling – Responsible Design of eXtended Environments – XR systems for Human Augmentation – AI methodologies applied to XR – ML approaches in XR – Ethical concerns in XR
VENUE ———- University of Burgundy main campus, Dijon, France, October 19-21, 2022
WORKSHOP CO-CHAIRS ———————————– Agnese Augello, Institute for high performance computing and networking, National Research Council, Italy Giuseppe Caggianese, Institute for high performance computing and networking, National Research Council, Italy Boriana Koleva, University of Nottingham, United Kingdom
PROGRAM COMMITTEE ———————————- Agnese Augello, Institute for high performance computing and networking, National Research Council, Italy Giuseppe Caggianese, Institute for high performance computing and networking, National Research Council, Italy Giuseppe Chiazzese, Institute for Educational Technology, National Research Council, Italy Dimitri Darzentas, Edinburgh Napier University, Scotland Martin Flintham, University of Nottingham, United Kingdom Ignazio Infantino, Institute for high performance computing and networking, National Research Council, Italy Boriana Koleva, University of Nottingham, United Kingdom Emel Küpçü, Xtinge Technology Inc., Turkey Effie Lai-Chong Law, Durham University, United Kingdom Pietro Neroni, Institute for high performance computing and networking, National Research Council, Italy
SUBMISSION AND DECISIONS ——————————————- Each submission should be at most 8 pages in total including bibliography and well-marked appendices and must follow the IEEE [Institute of Electrical and Electronics Engineers] double columns publication format.
Submissions will be peer-reviewed by at least two reviewers. Papers will be evaluated based on relevance, significance, impact, originality, technical soundness, and quality of presentation. At least one author should attend the conference to present an accepted paper.
IMPORTANT DATES —————————- Paper Submission: July 15, 2022; Acceptance/Reject Notification: September 9, 2022; Camera-ready: September 16, 2022; Author Registration: September 16, 2022
CONFERENCE PROCEEDINGS ——————————————– All papers accepted for presentation at the main tracks and workshops will be included in the conference proceedings, which will be published by IEEE Computer Society and referenced in IEEE Xplore Digital Library, Scopus, DBLP and major indexes.
REGISTRATION ———————– At least one author of each accepted paper must register for the conference and present the work. A single registration allows attending both track and workshop sessions.
CONTACTS —————- For any questions, please contact us via email.
Artists’ Talk & Webcast The Canadian Music Centre, 20 St. Joseph Street Toronto Thursday, July 7 7:30 – 9 p.m. [ET] (doors open 7 pm)
These are a Few of Our Favourite Bees investigates wild, native bees and their ecology through playful dioramas, video, audio, relief print and poetry. Inspired by lambe lambe – South American miniature puppet stages for a single viewer – four distinct dioramas convey surreal yet enlightening worlds where bees lounge in cozy environs, animals watch educational films [emphasis mine] and ethereal sounds animate bowls of berries (having been pollinated by their diverse bee visitors). Displays reminiscent of natural history museums invite close inspection, revealing minutiae of these tiny, diverse animals, our native bees. From thumb-sized to extremely tiny, fuzzy to hairless, black, yellow, red or emerald green, each native bee tells a story while her actions create the fruits of pollination, reflecting the perpetual dance of animals, plants and planet. With a special appearance by Toronto’s official bee, the jewelled green sweat bee, Agapostemon virescens!
These are a Few of Our Favourite Bees Collective are: Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey
These are a Few of Our Favourite Bees
Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey
paper, relief print, video projection, audio, audio cable, mixed media
Bee specimens & bee barcodes generously provided by Laurence Packer – Packer Lab, York University; Scott MacIvor – BUGS Lab, U-T [University of Toronto] Scarborough; Sam Droege – USGS [US Geological Survey]; Barcode of Life Data Systems; Antonia Guidotti, Department of Natural History, Royal Ontario Museum
In addition to watching television, animals have been known to interact with touchscreen computers as mentioned in my June 24, 2016 posting, “Animal technology: a touchscreen for your dog, sonar lunch orders for dolphins, and more.”
In May, my crabapple tree blooms. In August, I pick the ripe crabapples. In September, I make jelly. Then I have breakfast. This would not be without a bee.
It could not be without a bee. The fruit and vegetables I enjoy eating, as well as the roses I admire as centrepieces, all depend on pollination.
Our native pollinators and their habitat are threatened. Insect populations are declining due to habitat loss, pesticide use, disease and climate change. 75% of flowering plants rely on pollinators to set seed and we humans get one-third of our food from flowering plants.
I invite you to enter this beautiful dining room and consider the importance of pollinators to the enjoyment of your next meal.
Tracey Lawko employs contemporary textile techniques to showcase changes in our environment. Building on a base of traditional hand-embroidery, free-motion longarm stitching and a love of drawing, her representational work is detailed and “drawn with thread”. Her nature studies draw attention to our native pollinators as she observes them around her studio in the Niagara Escarpment. Many are stitched using a centuries-old, three-dimensional technique called “Stumpwork”.
Tracey’s extensive exhibition history includes solo exhibitions at leading commercial galleries and public museums. Her work has been selected for major North American and International exhibitions, including the Concours International des Mini-Textiles, Musée Jean Lurçat, France, and is held in the permanent collection of the US National Quilt Museum and in private collections in North America and Europe.
With the launch of the James Webb Space Telescope (JWST; Webb Telescope) on December 25, 2021, the US National Aeronautics and Space Administration (NASA) has been all over the news for over a month as the telescope has unfolded itself to take its position in space.
In celebration of the Webb Telescope’s successful launch and unfolding process, NASA has issued an extension to an art/science (also known as art/sci or sciart) challenge (I don’t know when it was first announced),
NASA’s biggest and most powerful space telescope ever launched on Dec. 25, 2021! The James Webb Space Telescope, or Webb, will be orbiting a million miles away to reveal the universe as never seen before. It will look at the first stars and galaxies, study distant planets around other stars, solve mysteries in our solar system and discover what we can’t even imagine. Its revolutionary technology will be able to look back in time at 13.5 billion years of our cosmic history.
Show us what you believe the Webb telescope will reveal by creating art. You can draw, paint, sing, write, dance — the universe is the limit! Share a picture or video of you and your creation with the hashtag #UnfoldTheUniverse for a chance to be featured on NASA’s website and social media channels.
How to Participate
1. Use any art supplies you’d like to create art. The art could be a drawing, song, poem, dance or something else! Check out the resources linked below for inspiration.
2. Take a picture of you holding your art, or film a less than one-minute video of you describing or performing your art.
3. Share your photo or video on Facebook, Twitter, or Instagram using #UnfoldTheUniverse for a chance to be featured on NASA’s website and social media accounts!
4. If your submission catches our eye, we’ll be in touch to obtain permission for it to be considered for NASA digital products.
Deadline for Submissions EXTENDED: Good news! We will now keep the #UnfoldTheUniverse art challenge open through the return of our first science images, expected to be about six months after launch. Keep your submissions coming – we love seeing your creativity!
The James Webb Space Telescope (JWST) is a space telescope and an international collaboration among NASA, the European Space Agency (ESA), and the Canadian Space Agency (CSA). [emphasis mine] The telescope is named after James E. Webb, who was the administrator of NASA from 1961 to 1968 and played an integral role in the Apollo program. It is intended to succeed the Hubble Space Telescope as NASA’s flagship mission in astrophysics. JWST was launched on 25 December 2021 on Ariane flight VA256. It is designed to provide improved infrared resolution and sensitivity over Hubble, viewing objects up to 100 times fainter than the faintest objects detectable by Hubble. This will enable a broad range of investigations across the fields of astronomy and cosmology, such as observations up to redshift z≈20 of some of the oldest and most distant objects and events in the Universe (including the first stars and the formation of the first galaxies), and detailed atmospheric characterization of potentially habitable exoplanets.
The James Webb Space Telescope has a mass about half of Hubble Space Telescope’s, but a 6.5 m (21 ft)-diameter gold-coated beryllium primary mirror made of 18 hexagonal mirrors, giving it a total size over six times as large as Hubble’s 2.4 m (7.9 ft). Of this, 0.9 m2 (9.7 sq ft) is obscured by the secondary support struts, making its actual light collecting area about 5.6 times larger than Hubble’s 4.525 m2 (48.71 sq ft) collecting area. Beryllium is a very stiff, hard, lightweight metal often used in aerospace that is non-magnetic and keeps its shape accurately in an ultra-cold environment – it has a specific stiffness (rigidity) six times that of steel or titanium, while being 30% lighter in weight than aluminium. The gold coating provides infrared reflectivity and durability.
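The figures in the Wikipedia excerpt above are easy to sanity-check. A quick back-of-envelope calculation in Python, using only the numbers quoted in the excerpt, confirms that the "about 5.6 times larger" collecting-area claim follows from the stated mirror sizes:

```python
import math

# Hubble's primary mirror is 2.4 m in diameter; its geometric area is pi * r^2.
hubble_area = math.pi * (2.4 / 2) ** 2   # ~4.524 m^2, matching the quoted 4.525 m^2

# JWST's segmented mirror, after subtracting the 0.9 m^2 obscured by the
# secondary support struts, has a net collecting area of about 25.4 m^2.
jwst_net_area = 25.4

print(round(hubble_area, 3))                  # 4.524
print(round(jwst_net_area / hubble_area, 1))  # 5.6 -- the "about 5.6 times larger" in the text
```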
Former Canadian Prime Minister Stephen Harper was very interested in space and the aeronautics industry and, accordingly, his government invested in the JWST.
Hidden Life Radio livestreams music generated from trees (their biodata, that is). Kristin Toussaint in her August 3, 2021 article for Fast Company describes the ‘radio station’, Note: Links have been removed,
Outside of a library in Cambridge, Massachusetts, an over-80-year-old copper beech tree is making music.
As the tree photosynthesizes and absorbs and evaporates water, a solar-powered sensor attached to a leaf measures the micro voltage of all that invisible activity. Sound designer and musician Skooby Laposky assigned a key and note range to those changes in this electric activity, turning the tree’s everyday biological processes into an ethereal song.
That music is available on Hidden Life Radio, an art project by Laposky, with assistance from the Cambridge Department of Public Works Urban Forestry, and funded in part by a grant from the Cambridge Arts Council. Hidden Life Radio also features the musical sounds of two other Cambridge trees: a honey locust and a red oak, both located outside of other Cambridge library branches. The sensors on these trees are solar-powered biodata sonification kits, a technology that has allowed people to turn all sorts of plant activity into music.
… Laposky has created a musical voice for these disappearing trees, and he hopes people tune into Hidden Life Radio and spend time listening to them over time. The music they produce occurs in real time, affected by the weather and whatever the tree is currently doing. Some days they might be silent, especially when there’s been several days without rain, and they’re dehydrated; Laposky is working on adding an archive that includes weather information, so people can go back and hear what the trees sound like on different days, under different conditions. The radio will play 24 hours a day until November, when the leaves will drop—a “natural cycle for the project to end,” Laposky says, “when there aren’t any leaves to connect to anymore.”
The 2021 season is over but you can find an archive of Hidden Life Radio livestreams here. Or, if you happen to be reading this page sometime after January 2022, you can try your luck and click here at Hidden Life Radio livestreams but remember, even if the project has started up again, the tree may not be making music when you check in. So, if you don’t hear anything the first time, try again.
Want to create your own biodata sonification project?
Toussaint’s article sent me on a search for more and I found a website where you can get biodata sonification kits. Sam Cusumano’s electricity for progress website offers lessons, as well as kits and more.
Sophie Haigney’s February 21, 2020 article for NPR ([US] National Public Radio) highlights other plant music and more ways to tune in to and create it. (h/t Kristin Toussaint)
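The core idea Laposky describes, assigning a key and note range to changes in the sensor's micro-voltage readings, can be sketched in a few lines of Python. To be clear, this is my own hypothetical illustration, not the mapping Hidden Life Radio actually uses; the scale, note range, and voltage bounds here are all invented for the example:

```python
# Hypothetical sketch: map biodata voltage samples onto MIDI notes in a
# C major pentatonic scale, in the spirit of Laposky's sonification approach.

C_MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within an octave

def voltage_to_midi(voltage: float, v_min: float = 0.0, v_max: float = 1.0,
                    base_note: int = 48, octaves: int = 3) -> int:
    """Scale a voltage reading onto a pentatonic note range starting at base_note (C3)."""
    # Clamp the reading into [v_min, v_max) and normalise to [0, 1).
    norm = min(max((voltage - v_min) / (v_max - v_min), 0.0), 0.999)
    # Pick a scale degree across the configured number of octaves.
    steps = int(norm * len(C_MAJOR_PENTATONIC) * octaves)
    octave, degree = divmod(steps, len(C_MAJOR_PENTATONIC))
    return base_note + 12 * octave + C_MAJOR_PENTATONIC[degree]

print(voltage_to_midi(0.0))   # 48 -- the lowest note in the range
print(voltage_to_midi(0.5))   # 64 -- a note midway up the range
```

A real kit would feed a stream of such readings into a synthesizer; here, a dry day's flat voltage trace would simply hold the same note, which echoes Laposky's point that dehydrated trees can go quiet.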
Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.
Max Planck Centre for Humans and Machines Seminars
Max Planck Institute Seminar – The rise of Creative AI & its ethics January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST
Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].
The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.
The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.
The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,
Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications on workflows reminiscent of the industrial revolution:
– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.
– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.
Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?
Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions range across theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.
Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,
Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state of the art algorithms for sound retrieval, segmentation, background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.
We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.
Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.
We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.
As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.
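To give a feel for the text-to-sound-retrieval idea, here is a naive sketch of turning a natural-language prompt into a Freesound text-search request. Freesound does publish a public search API at `freesound.org/apiv2/search/text/`; everything else here, the stop-word list and the keyword extraction, is my own simplification and bears no resemblance to Audio Metaphor's actual retrieval, segmentation, and classification pipeline:

```python
import re
import urllib.parse

# A tiny illustrative stop-word list; a real system would do far better.
STOP_WORDS = {"a", "an", "the", "of", "in", "on", "at", "and", "or", "with", "is", "are"}

def prompt_to_freesound_url(prompt: str, api_token: str = "YOUR_API_TOKEN") -> str:
    """Turn a free-text prompt into a Freesound text-search URL (naive keyword match)."""
    words = re.findall(r"[a-z]+", prompt.lower())
    keywords = [w for w in words if w not in STOP_WORDS]
    query = urllib.parse.urlencode({"query": " ".join(keywords), "token": api_token})
    return f"https://freesound.org/apiv2/search/text/?{query}"

url = prompt_to_freesound_url("rain falling on a tin roof")
print(url)
```

Fetching that URL (with a real API token from freesound.org) returns JSON results whose preview files could then be layered into a soundscape, the step where Audio Metaphor's segmentation and background/foreground classification would actually do the heavy lifting.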
Adam Dhalla in a January 5, 2022 posting on the Nature Conservancy Canada blog announced a new location for a ‘Find the Birds’ game,
Since its launch six months ago …, with an initial Arizona simulated birding location, Find the Birds (a free educational mobile game about birds and conservation) now has over 7,000 players in 46 countries on six continents. In the game, players explore realistic habitats, find and take virtual photos of accurately animated local bird species and complete conservation quests. Thanks in a large part to the creative team at Thought Generation Society (the non-profit game production organization I’m working with), Find the Birds is a Canadian-made success story.
Going back nine months to an April 9, 2021 posting and the first ‘Find the Birds’ announcement by Adam Dhalla for the Nature Conservancy Canada blog,
It is not a stretch to say that our planet is in dire need of more conservationists, and environmentally minded people in general. Birds and birdwatching are gateways to introducing conservation and science to a new generation.
… it seems as though younger generations are often unaware of the amazing world in their backyard. They don’t hear the birdsong emanating from the trees during the morning chorus. …
This problem inspired my dad and me to come up with the original concept for Find the Birds, a free educational mobile game about birds and conservation. I was 10 at the time, and I discovered that I was usually the only kid out birdwatching. So we thought, why not bring the birds to them via the digital technology they are already immersed in?
Find the Birds reflects on the birding and conservation experience. Players travel the globe as an animated character on their smartphone or tablet and explore real-life, picturesque environments, finding different bird species. The unique element of this game is its attention to detail; everything in the game is based on science. …
Here’s a trailer for the game featuring its first location, Arizona,
Now back to Dhalla’s January 5, 2022 posting for more about the latest iteration of the game and other doings (Note: Links have been removed),
Recently, the British Columbia location was added, which features Sawmill Lake in the Okanagan Valley, Tofino on the coast and a journey in the Pacific Ocean. Some of the local bird species included are Steller’s jays (BC’s provincial bird), black oystercatchers and western meadowlarks. Conservation quests include placing nest boxes for northern saw-whet owls and cleaning up beach litter.
I’ve always loved Steller’s jays! We get a lot of them in our backyard. It’s a far lesser-known bird than the blue jay, so I wanted to give them some attention. That’s the terrific thing about being the co-creator of the game: I get to help choose the species, the quests — everything! So all the birds in the BC locations are some of my favourites.
The black oystercatcher is another underappreciated species. I’ve seen them along the coasts of BC, where they are relatively common. …
To gauge the game’s impact on conservation education, I recently conducted an online player survey. Of the 101 players who completed the survey, 71 per cent were in the 8–15 age group, which means I am reaching my peers. But 21 per cent were late teens and adults, so the game’s appeal is not limited to children. Fifty-one per cent were male and 49 per cent female: this equality is encouraging, as most games in general have a much smaller percentage of female players.
And the game is helping people connect with nature! Ninety-eight per cent of players said the game increased their appreciation of birds. …
As a result of the game’s reputation and the above data, I was invited to present my findings at the 2022 International Ornithological Congress. So, I will be traveling to Durban, South Africa, next August to spread the word on reaching and teaching a new generation of birders, ornithologists and conservationists. …
Before getting to the announcement, this talk and Q&A (question and answer) session is being co-hosted by ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
For anyone curious about OCAD, it was the Ontario College of Art and Design and then in a very odd government/marketing (?) move, they added the word university. As for DMG, in their own words and from their About page, “DMG is a not-for-profit videogame arts organization that creates space for marginalized creators to make, play and critique videogames within a cultural context.” They are located in Toronto, Ontario. Finally, the Art/Sci Salon and the Fields Institute are located at the University of Toronto.
As for the talk, here’s more from the November 28, 2021 Art/Sci Salon announcement (received via email),
Inspired by her own experience with the health care system to treat a post-reproductive disease, interdisciplinary artist [Camille] Baker created the project INTER/her, an immersive installation and VR [virtual reality] experience exploring the inner world of women’s bodies and the reproductive diseases they suffer. The project was created to open up the conversation about phenomena experienced by women in their late 30’s (sometimes earlier), their 40’s, and sometimes after menopause. Working in consultation with a gynecologist, the project features interviews with several women telling their stories. The themes in the work include issues of female identity, sexuality, body image, loss of body parts, pain, disease, and cancer. INTER/her has a focus on female reproductive diseases explored through a feminist lens; as personal exploration, as a conversation starter, to raise greater public awareness and encourage community building. The work also represents the lived experience of women’s pain and anger, conflicting thoughts through self-care and the growth of disease. Feelings of mortality are explored through a medical process in male-dominated medical institutions and a dearth of reliable information. https://inter-her.art/ 
In 2021, the installation was shortlisted for the Lumen Prize.
Join us for a talk and Q&A with the artist to discuss her work and its future development.
After registering, you will receive a confirmation email containing information about joining the meeting.
This talk is Co-Hosted by the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
This event will be recorded and archived on the ArtSci Salon Youtube channel
Camille Baker is a Professor in Interactive and Immersive Arts, University for the Creative Arts [UCA], Farnham Surrey (UK). She is an artist-performer/researcher/curator within various art forms: immersive experiences, participatory performance and interactive art, mobile media art, tech fashion/soft circuits/DIY electronics, responsive interfaces and environments, and emerging media curating. Maker of participatory performance and immersive artwork, Baker develops methods to explore expressive non-verbal modes of communication, extended embodiment and presence in real and mixed reality and interactive art contexts, using XR, haptics/ e-textiles, wearable devices and mobile media. She has an ongoing fascination with all things emotional, embodied, felt, sensed, the visceral, physical, and relational.
Her 2018 book _New Directions in Mobile Media and Performance_ showcases exciting approaches and artists in this space, as well as her own work. She has been running a regular meetup group with smart/e-textile artists and designers since 2014, called e-stitches, where participants share their practice and facilitate workshops of new techniques and innovations. Baker also has been Principal Investigator for UCA for the EU funded STARTS Ecosystem (starts.eu ) Apr 2019-Nov 2021 and founder initiator for the EU WEAR Sustain project Jan 2017-April 2019 (wearsustain.eu ).
The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about its own version (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1993 [sic; published 1992] sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …
I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.
Who is Nick Pringle and how accurate are his predictions?
I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinct terms. We shift usage like this all the time in all sorts of conversations, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
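For readers who think in code, Koster’s three-tier distinction can be sketched (very loosely, with hypothetical structures of my own devising, not anything from Koster’s own writing) as toy data types:

```python
# Toy sketch of Raph Koster's taxonomy: an online world has one main theme,
# a multiverse networks worlds without a shared ruleset, and a metaverse is
# a multiverse that also interoperates with the real world.
from dataclasses import dataclass, field

@dataclass
class OnlineWorld:
    name: str
    theme: str  # one main theme; could be a rich 3D environment or text-based

@dataclass
class Multiverse:
    # Connected worlds that do not share a theme or ruleset.
    worlds: list = field(default_factory=list)

@dataclass
class Metaverse(Multiverse):
    # What upgrades a multiverse to a metaverse, in Koster's terms,
    # is interoperation with the real world.
    real_world_hooks: list = field(default_factory=list)

# Ready Player One's OASIS fits the multiverse tier: many worlds, no shared theme.
oasis = Multiverse(worlds=[OnlineWorld("Planet Doom", "combat"),
                           OnlineWorld("Incipio", "social")])

# Add AR overlays, VR dressing rooms, map apps, and you have a metaverse.
meta = Metaverse(worlds=oasis.worlds,
                 real_world_hooks=["AR overlay", "VR dressing room", "Google Maps"])

print(isinstance(meta, Multiverse))  # True: every metaverse is a multiverse, plus interop
```

The point of the sketch is only that the categories nest: by these definitions the OASIS stays a multiverse until something ties it back to the physical world.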
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
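The ‘permanent receipt’ idea is easier to see in a toy sketch. To be clear, this is not real blockchain code (actual NFTs typically follow on-chain standards such as ERC-721 on Ethereum); the class and token names below are hypothetical illustrations only:

```python
# Toy illustration of an NFT as a portable ownership receipt: two platforms
# that trust the same shared ledger will honour the same token.

class SharedLedger:
    """Stand-in for a public, shared record of who owns which token."""
    def __init__(self):
        self.tokens = {}  # token_id -> {"owner": ..., "item": ...}

    def mint(self, token_id, owner, item):
        assert token_id not in self.tokens, "token IDs must be unique"
        self.tokens[token_id] = {"owner": owner, "item": item}

    def owner_of(self, token_id):
        return self.tokens[token_id]["owner"]

class MetaversePlatform:
    """A virtual world that reads the shared ledger instead of its own database."""
    def __init__(self, name, ledger):
        self.name, self.ledger = name, ledger

    def redeem(self, token_id, user):
        # Any platform consulting the same ledger can honour the same receipt.
        if self.ledger.owner_of(token_id) == user:
            item = self.ledger.tokens[token_id]["item"]
            return f"{user} wears {item} in {self.name}"
        return f"{user} does not own token {token_id}"

ledger = SharedLedger()
ledger.mint("shirt-001", "alice", "virtual shirt")
platform_a = MetaversePlatform("Platform A", ledger)
platform_b = MetaversePlatform("Platform B", ledger)
print(platform_a.redeem("shirt-001", "alice"))  # alice wears virtual shirt in Platform A
print(platform_b.redeem("shirt-001", "alice"))  # same receipt honoured in Platform B
```

The sketch skips everything that makes real NFTs hard (decentralization, cryptographic signatures, and persuading Platforms B to Z to actually render your shirt), but it captures why the receipt, not the shirt, is the asset.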
Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th, but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
If you have time, do read Heath’s article in its entirety.
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both.
What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly?
How and when should users’ consent be collected?
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse?
What data sharing arrangements need to be put in place and how will these be implemented?
There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.
I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content, but can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment; XR brings all three realities (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7, has attracted 40,000 visitors since it opened in July [2021?].
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages.
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University for his analysis, quoted in Anil Ananthaswamy’s article, of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
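Kipping’s argument is Bayesian at heart, and a toy calculation shows how Occam’s razor enters: the more elaborate hypothesis gets the lower prior. (The numbers below are hypothetical illustrations of mine, not Kipping’s published values.)

```python
# Toy Bayesian illustration of the base-reality argument (hypothetical numbers).
# Occam's razor is expressed as a lower prior for the more elaborate hypothesis.

def posterior_odds(prior_sim, prior_base, likelihood_sim, likelihood_base):
    """Posterior odds of 'simulation' vs 'base reality' via Bayes' rule."""
    return (prior_sim * likelihood_sim) / (prior_base * likelihood_base)

# Give the simulation hypothesis a lower prior because it presumes more:
# nested realities, and simulated entities that can never detect the simulation.
odds = posterior_odds(prior_sim=0.25, prior_base=0.75,
                      likelihood_sim=1.0, likelihood_base=1.0)

# Convert odds back to a probability.
prob_sim = odds / (1 + odds)
print(f"P(simulation) = {prob_sim:.2f}")
```

With the evidence unable to distinguish the hypotheses (equal likelihoods), the posterior simply inherits the prior, which is the razor doing all the work: disfavour the elaborate model unless the evidence earns it back.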
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, in which various people and companies are attempting to create a multiplicity of metaverses, or the single metaverse that effectively replaces the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.
Toronto’s (Canada) ArtSci Salon (also known as the Art Science Salon) sent me an August 26, 2021 announcement (received via email) of an online show with a limited viewing period (BTW, nice play on words with the title echoing the name of the institution mentioned in the first sentence),
The Fields Institute was closed to the public for a long time. Yet, it has not been empty. Peculiar sounds and intriguing silences, the flows of the few individuals and the janitors’ occasional visits made the building surprisingly alive. Microorganisms, dust specks and other invisible guests populated the space undisturbed while the humans were away. The building is alive. We created site specific installations reflecting this condition: Elaine Whittaker and her poet collaborators take us on a journey of the microbes living in our proximal spaces. Joel Ong and his collaborators have recorded space data in the building: the result is an emergent digital organism. Roberta Buiani and Kavi interpret the venue as an organism which can be taken outside on a mobile gallery.
PROXIMAL FIELDS will be visible September 8-12, 2021 at
With: Elaine Whittaker, Joel Ong, Nina Czegledy, Roberta Buiani, Sachin Karghie, Ryan Martin, Racelar Ho, Kavi. Poetry: Maureen Hynes, Sheila Stewart
Video: Natalie Plociennik
This event is one of many such events being held for the Ars Electronica 2021 festival.
For anyone who remembers back to my May 3, 2021 posting (scroll down to the relevant subhead; a number of events were mentioned), I featured a show from the ArtSci Salon community called ‘Proximal Spaces’, a combined poetry reading and bioart experience.
Many of the same artists and poets seem to have continued working together to develop more work based on the ‘proximal’ for a larger international audience.
International and local scene details (e.g., same show? what is Ars Electronica? etc.)
As you may have noticed from the announcement, there are a lot of different institutions involved.
Local: Fields Institute and ArtSci Salon
The Fields Institute is properly known as The Fields Institute for Research in Mathematical Sciences and is located at the University of Toronto. Here’s more from their About Us webpage,
Founded in 1992, the Fields Institute was initially located at the University of Waterloo. Since 1995, it has occupied a purpose-built building on the St. George Campus of the University of Toronto.
The Institute is internationally renowned for strengthening collaboration, innovation, and learning in mathematics and across a broad range of disciplines. …
The Fields Institute is named after the Canadian mathematician John Charles Fields (1863-1932). Fields was a pioneer and visionary who recognized the scientific, educational, and economic value of research in the mathematical sciences. Fields spent many of his early years in Berlin and, to a lesser extent, in Paris and Göttingen, the principal mathematical centres of Europe of that time. These experiences led him, after his return to Canada, to work for the public support of university research, which he did very successfully. He also organized and presided over the 1924 meeting of the International Congress of Mathematicians in Toronto. This quadrennial meeting was, and still is, the major meeting of the mathematics world.
There is no Nobel Prize in mathematics, and Fields felt strongly that there should be a comparable award to recognize the most outstanding current research in mathematics. With this in mind, he established the International Medal for Outstanding Discoveries in Mathematics, which, contrary to his personal directive, is now known as the Fields Medal. Information on Fields Medal winners can be found through the International Mathematical Union, which chooses the quadrennial recipients of the prize.
Fields’ name was given to the Institute in recognition of his seminal contributions to world mathematics and his work on behalf of high level mathematical scholarship in Canada. The Institute aims to carry on the work of Fields and to promote the wider use and understanding of mathematics in Canada.
ArtSci Salon consists of a series of semi-informal gatherings facilitating discussion and cross-pollination between science, technology, and the arts. ArtSci Salon started in 2010 as a spin-off of the Subtle Technologies Festival to satisfy increasing demands by the audience attending the Festival to have a more frequent (monthly or bi-monthly) outlet for debate and information sharing across disciplines. In addition, it responds to the recent expansion in the GTA [Greater Toronto Area] of a community of scientists and artists increasingly seeking collaborations across disciplines to successfully accomplish their research projects and questions.
International: Ars Electronica
Ars Electronica started life as a Festival for Art, Technology and Society in 1979 in Linz, Austria. Here’s a little more from their About webpage,
… Since September 18, 1979, our world has changed radically, and digitization has covered almost all areas of our lives. Ars Electronica’s philosophy has remained the same over the years. Our activities are always guided by the question of what new technologies mean for our lives. Together with artists, scientists, developers, designers, entrepreneurs and activists, we shed light on current developments in our digital society and speculate about their manifestations in the future. We never ask what technology can or will be able to do, but always what it should do for us. And we don’t try to adapt to technology, but we want the development of technology to be oriented towards us. Therefore, our artistic research always focuses on ourselves, our needs, our desires, our feelings.
They have a number of initiatives in addition to the festival. The next festival, A New Digital Deal, runs from September 8 – 12, 2021 (Ars Electronica 2021). Here’s a little more from the festival webpage,
Ars Electronica 2021, the festival for art, technology and society, will take place from September 8 to 12. For the second time since 1979, it will be a hybrid event that includes exhibitions, concerts, talks, conferences, workshops and guided tours in Linz, Austria, and more than 80 other locations around the globe.
Leonardo, The International Society for Arts, Sciences and Technology
Ars Electronica and Leonardo, The International Society for Arts, Sciences and Technology (ISAST), cooperate on projects but they are two different entities. Here’s more from the About LEONARDO webpage,
Fearlessly pioneering since 1968, Leonardo serves as THE community forging a transdisciplinary network to convene, research, collaborate, and disseminate best practices at the nexus of arts, science and technology worldwide. Leonardo serves a network of transdisciplinary scholars, artists, scientists, technologists and thinkers, who experiment with cutting-edge, new approaches, practices, systems and solutions to tackle the most complex challenges facing humanity today.
As a not-for-profit 501(c)3 enterprising think tank, Leonardo offers a global platform for creative exploration and collaboration reaching tens of thousands of people across 135 countries. Our flagship publication, Leonardo, the world’s leading scholarly journal on transdisciplinary art, anchors a robust publishing partnership with MIT Press; our partnership with ASU [Arizona State University] infuses educational innovation with digital art and media for lifelong learning; our creative programs span thought-provoking events, exhibits, residencies and fellowships, scholarship and social enterprise ventures.
I have a description of Leonardo’s LASER (Leonardo Art Science Evening Rendezvous), from my March 22, 2021 posting (the Garden comes up next),
“… a program of international gatherings that bring artists, scientists, humanists and technologists together for informal presentations, performances and conversations with the wider public. The mission of LASER is to encourage contribution to the cultural environment of a region by fostering interdisciplinary dialogue and opportunities for community building.”
Culturing transnational dialogue for creative hybridity
Leonardo LASER Garden gathers our global network of artists, scientists, humanists and technologists together in a series of hybrid formats addressing the world’s most pressing issues. Animated by the theme of a “new digital deal” and grounded in the UN Sustainability Goals, Leonardo LASER Garden cultivates our values of equity and inclusion by elevating underrepresented voices in a wide-ranging exploration of global challenges, digital communities and placemaking, space, networks and systems, the digital divide – and the impact of interdisciplinary art, science and technology discourse and collaboration.
Dovetailing with the launch of LASER Linz, this asynchronous multi-platform garden will highlight the best of the Leonardo Network (spanning 47 cities worldwide) and our transdisciplinary community. In “Extraordinary Times Call for Extraordinary Vision: Humanizing Digital Culture with the New Creativity Agenda & the UNSDGs [United Nations Sustainable Development Goals],” Leonardo/ISAST CEO Diana Ayton-Shenker presents our vision for shaping our global future. This will be followed by a Leonardo Community Lounge open to the general public, with the goal of encouraging contributions to the cultural environments of different regions through transnational exchange and community building.
Getting back to the beginning, you can view Proximal Fields from September 8 – 12, 2021 as part of the Ars Electronica 2021 festival, specifically, the ‘garden’ series.