With the launch of the James Webb Space Telescope (JWST; also known as the Webb Telescope) on December 25, 2021, the US National Aeronautics and Space Administration (NASA) has been all over the news for over a month as the telescope has unfolded itself and travelled to its position in space.
In celebration of the Webb Telescope’s successful launch and unfolding process, NASA has issued an extension to an art/science (also known as art/sci or sciart) challenge (I don’t know when it was first announced),
NASA’s biggest and most powerful space telescope ever launched on Dec. 25, 2021! The James Webb Space Telescope, or Webb, will be orbiting a million miles away to reveal the universe as never seen before. It will look at the first stars and galaxies, study distant planets around other stars, solve mysteries in our solar system and discover what we can’t even imagine. Its revolutionary technology will be able to look back in time at 13.5 billion years of our cosmic history.
Show us what you believe the Webb telescope will reveal by creating art. You can draw, paint, sing, write, dance — the universe is the limit! Share a picture or video of you and your creation with the hashtag #UnfoldTheUniverse for a chance to be featured on NASA’s website and social media channels.
How to Participate
1. Use any art supplies you’d like to create art. The art could be a drawing, song, poem, dance or something else! Check out the resources linked below for inspiration.
2. Take a picture of you holding your art, or film a less than one-minute video of you describing or performing your art.
3. Share your photo or video on Facebook, Twitter, or Instagram using #UnfoldTheUniverse for a chance to be featured on NASA’s website and social media accounts!
4. If your submission catches our eye, we’ll be in touch to obtain permission for it to be considered for NASA digital products.
Deadline for Submissions EXTENDED: Good news! We will now keep the #UnfoldTheUniverse art challenge open through the return of our first science images, expected to be about six months after launch. Keep your submissions coming – we love seeing your creativity!
The James Webb Space Telescope (JWST) is a space telescope and an international collaboration among NASA, the European Space Agency (ESA), and the Canadian Space Agency (CSA). [emphasis mine] The telescope is named after James E. Webb, who was the administrator of NASA from 1961 to 1968 and played an integral role in the Apollo program. It is intended to succeed the Hubble Space Telescope as NASA’s flagship mission in astrophysics. JWST was launched on 25 December 2021 on Ariane flight VA256. It is designed to provide improved infrared resolution and sensitivity over Hubble, viewing objects up to 100 times fainter than the faintest objects detectable by Hubble. This will enable a broad range of investigations across the fields of astronomy and cosmology, such as observations up to redshift z≈20 of some of the oldest and most distant objects and events in the Universe (including the first stars and the formation of the first galaxies), and detailed atmospheric characterization of potentially habitable exoplanets.
The James Webb Space Telescope has a mass about half of Hubble Space Telescope’s, but a 6.5 m (21 ft)-diameter gold-coated beryllium primary mirror made of 18 hexagonal mirrors, giving it a total size over six times as large as Hubble’s 2.4 m (7.9 ft). Of this, 0.9 m2 (9.7 sq ft) is obscured by the secondary support struts, making its actual light collecting area about 5.6 times larger than Hubble’s 4.525 m2 (48.71 sq ft) collecting area. Beryllium is a very stiff, hard, lightweight metal often used in aerospace that is non-magnetic and keeps its shape accurately in an ultra-cold environment – it has a specific stiffness (rigidity) six times that of steel or titanium, while being 30% lighter in weight than aluminium. The gold coating provides infrared reflectivity and durability.
Former Canadian Prime Minister Stephen Harper was very interested in space and the aeronautics industry and, accordingly, his government invested in the JWST.
Hidden Life Radio livestreams music generated from trees (their biodata, that is). Kristin Toussaint in her August 3, 2021 article for Fast Company describes the ‘radio station’ (Note: Links have been removed),
Outside of a library in Cambridge, Massachusetts, an over-80-year-old copper beech tree is making music.
As the tree photosynthesizes and absorbs and evaporates water, a solar-powered sensor attached to a leaf measures the micro voltage of all that invisible activity. Sound designer and musician Skooby Laposky assigned a key and note range to those changes in this electric activity, turning the tree’s everyday biological processes into an ethereal song.
That music is available on Hidden Life Radio, an art project by Laposky, with assistance from the Cambridge Department of Public Works Urban Forestry, and funded in part by a grant from the Cambridge Arts Council. Hidden Life Radio also features the musical sounds of two other Cambridge trees: a honey locust and a red oak, both located outside of other Cambridge library branches. The sensors on these trees are solar-powered biodata sonification kits, a technology that has allowed people to turn all sorts of plant activity into music.
… Laposky has created a musical voice for these disappearing trees, and he hopes people tune into Hidden Life Radio and spend time listening to them over time. The music they produce occurs in real time, affected by the weather and whatever the tree is currently doing. Some days they might be silent, especially when there’s been several days without rain, and they’re dehydrated; Laposky is working on adding an archive that includes weather information, so people can go back and hear what the trees sound like on different days, under different conditions. The radio will play 24 hours a day until November, when the leaves will drop—a “natural cycle for the project to end,” Laposky says, “when there aren’t any leaves to connect to anymore.”
The 2021 season is over but you can find an archive of Hidden Life Radio livestreams here. Or, if you happen to be reading this page sometime after January 2022, you can try your luck and click here at Hidden Life Radio livestreams but remember, even if the project has started up again, the tree may not be making music when you check in. So, if you don’t hear anything the first time, try again.
Want to create your own biodata sonification project?
Toussaint’s article sent me on a search for more and I found a website where you can get biodata sonification kits. Sam Cusumano’s electricity for progress website offers lessons, as well as kits and more.
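To make the idea in Toussaint’s article a little more concrete, here is a minimal sketch (my own illustration, not Laposky’s actual code or the kit’s firmware) of the core sonification step: assign a key and note range, then map changes in a plant’s micro-voltage readings onto notes in that scale, with silence when the signal barely changes (as with a dehydrated tree). The voltage samples and threshold are made up for illustration.

```python
# A toy version of biodata sonification: successive changes in sensor
# voltage are mapped onto notes of a fixed scale. All numbers here are
# hypothetical; a real kit reads voltages from an electrode on a leaf.

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI note numbers

def sonify(voltages, scale=C_MAJOR_PENTATONIC, threshold=0.005):
    """Turn successive voltage changes into scale notes.

    Changes smaller than `threshold` produce silence (None);
    larger changes pick a note whose pitch rises with the size
    of the change.
    """
    notes = []
    deltas = [abs(b - a) for a, b in zip(voltages, voltages[1:])]
    if not deltas:
        return notes
    biggest = max(max(deltas), threshold)
    for d in deltas:
        if d < threshold:
            notes.append(None)  # quiet tree, no note
        else:
            idx = min(int(d / biggest * (len(scale) - 1)), len(scale) - 1)
            notes.append(scale[idx])
    return notes

# Hypothetical sensor readings (volts) over a few minutes
readings = [0.101, 0.103, 0.120, 0.121, 0.150, 0.150]
print(sonify(readings))  # mostly silence, punctuated by two notes
```

A real project would stream these note numbers to a synthesizer (e.g., over MIDI) rather than print them, but the mapping step is the heart of it.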
Sophie Haigney’s February 21, 2020 article for NPR ([US] National Public Radio) highlights other plant music and more ways to tune in to and create it. (h/t Kristin Toussaint)
Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.
Max Planck Centre for Humans and Machines Seminars
Max Planck Institute Seminar – The rise of Creative AI & its ethics January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST
Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].
The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.
The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.
The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,
Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications on workflows reminiscent of the industrial revolution:
– Augmentation (a.k.a. computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.
– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.
Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?
Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.
Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,
Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state of the art algorithms for sound retrieval, segmentation, background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.
We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.
Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.
We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.
As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.
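The pipeline the newsletter describes — a natural-language prompt turned into query terms, matched against a tagged sound library, with background and foreground layers classified separately — can be caricatured in a few lines. This is my own toy illustration of that idea, not the AuMe codebase; the mini “library” below stands in for search results from freesound.org, and the tags and layer labels are invented.

```python
# A toy text-to-soundscape retrieval step: extract query terms from a
# prompt, then pick the best-matching background and foreground sounds
# from a tagged library by counting term overlaps. The library entries
# here are hypothetical stand-ins for freesound.org results.

STOPWORDS = {"a", "an", "the", "on", "in", "with", "of", "and"}

LIBRARY = [
    {"name": "rain_on_tent.wav", "tags": {"rain", "tent", "storm"}, "layer": "background"},
    {"name": "crow_call.wav", "tags": {"crow", "bird", "call"}, "layer": "foreground"},
    {"name": "city_traffic.wav", "tags": {"city", "traffic", "cars"}, "layer": "background"},
    {"name": "thunder_clap.wav", "tags": {"thunder", "storm"}, "layer": "foreground"},
]

def query_terms(prompt):
    """Reduce a natural-language prompt to a set of content words."""
    return {w for w in prompt.lower().split() if w not in STOPWORDS}

def soundscape(prompt):
    """Pick the best-matching background and foreground sound for a prompt."""
    terms = query_terms(prompt)
    picks = {}
    for layer in ("background", "foreground"):
        scored = [(len(terms & s["tags"]), s["name"])
                  for s in LIBRARY if s["layer"] == layer]
        score, name = max(scored)
        picks[layer] = name if score > 0 else None
    return picks

print(soundscape("a storm with rain and thunder"))
```

The real system layers retrieved and segmented recordings into a composed soundscape; this sketch only shows the retrieval-and-classification skeleton.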
Adam Dhalla in a January 5, 2022 posting on the Nature Conservancy Canada blog announced a new location for a ‘Find the Birds’ game,
Since its launch six months ago …, with an initial Arizona simulated birding location, Find the Birds (a free educational mobile game about birds and conservation) now has over 7,000 players in 46 countries on six continents. In the game, players explore realistic habitats, find and take virtual photos of accurately animated local bird species and complete conservation quests. Thanks in a large part to the creative team at Thought Generation Society (the non-profit game production organization I’m working with), Find the Birds is a Canadian-made success story.
Going back nine months to an April 9, 2021 posting and the first ‘Find the Birds’ announcement by Adam Dhalla for the Nature Conservancy Canada blog,
It is not a stretch to say that our planet is in dire need of more conservationists, and environmentally minded people in general. Birds and birdwatching are gateways to introducing conservation and science to a new generation.
… it seems as though younger generations are often unaware of the amazing world in their backyard. They don’t hear the birdsong emanating from the trees during the morning chorus. …
This problem inspired my dad and me to come up with the original concept for Find the Birds, a free educational mobile game about birds and conservation. I was 10 at the time, and I discovered that I was usually the only kid out birdwatching. So we thought, why not bring the birds to them via the digital technology they are already immersed in?
Find the Birds reflects on the birding and conservation experience. Players travel the globe as an animated character on their smartphone or tablet and explore real-life, picturesque environments, finding different bird species. The unique element of this game is its attention to detail; everything in the game is based on science. …
Here’s a trailer for the game featuring its first location, Arizona,
Now back to Dhalla’s January 5, 2022 posting for more about the latest iteration of the game and other doings (Note: Links have been removed),
Recently, the British Columbia location was added, which features Sawmill Lake in the Okanagan Valley, Tofino on the coast and a journey in the Pacific Ocean. Some of the local bird species included are Steller’s jays (BC’s provincial bird), black oystercatchers and western meadowlarks. Conservation quests include placing nest boxes for northern saw-whet owls and cleaning up beach litter.
I’ve always loved Steller’s jays! We get a lot of them in our backyard. It’s a far lesser-known bird than the blue jay, so I wanted to give them some attention. That’s the terrific thing about being the co-creator of the game: I get to help choose the species, the quests — everything! So all the birds in the BC locations are some of my favourites.
The black oystercatcher is another underappreciated species. I’ve seen them along the coasts of BC, where they are relatively common. …
To gauge the game’s impact on conservation education, I recently conducted an online player survey. Of the 101 players who completed the survey, 71 per cent were in the 8–15 age group, which means I am reaching my peers. But 21 per cent were late teens and adults, so the game’s appeal is not limited to children. Fifty-one per cent were male and 49 per cent female: this equality is encouraging, as most games in general have a much smaller percentage of female players.
And the game is helping people connect with nature! Ninety-eight per cent of players said the game increased their appreciation of birds. …
As a result of the game’s reputation and the above data, I was invited to present my findings at the 2022 International Ornithological Congress. So, I will be traveling to Durban, South Africa, next August to spread the word on reaching and teaching a new generation of birders, ornithologists and conservationists. …
Before getting to the announcement, note that this talk and Q&A (question and answer) session is being co-hosted by the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
For anyone curious about OCAD, it was the Ontario College of Art and Design and then in a very odd government/marketing (?) move, they added the word university. As for DMG, in their own words and from their About page, “DMG is a not-for-profit videogame arts organization that creates space for marginalized creators to make, play and critique videogames within a cultural context.” They are located in Toronto, Ontario. Finally, the Art/Sci Salon and the Fields Institute are located at the University of Toronto.
As for the talk, here’s more from the November 28, 2021 Art/Sci Salon announcement (received via email),
Inspired by her own experience with the health care system to treat a post-reproductive disease, interdisciplinary artist [Camille] Baker created the project INTER/her, an immersive installation and VR [virtual reality] experience exploring the inner world of women’s bodies and the reproductive diseases they suffer. The project was created to open up the conversation about phenomena experienced by women in their late 30’s (sometimes earlier), their 40’s, and sometimes after menopause. Working in consultation with a gynecologist, the project features interviews with several women telling their stories. The themes in the work include issues of female identity, sexuality, body image, loss of body parts, pain, disease, and cancer. INTER/her has a focus on female reproductive diseases explored through a feminist lens; as personal exploration, as a conversation starter, to raise greater public awareness and encourage community building. The work also represents the lived experience of women’s pain and anger, and their conflicting thoughts through self-care and the growth of disease. Feelings of mortality are explored through a medical process in male-dominated medical institutions and a dearth of reliable information. https://inter-her.art/
In 2021, the installation was shortlisted for the Lumen Prize.
Join us for a talk and Q&A with the artist to discuss her work and its future development.
After registering, you will receive a confirmation email containing information about joining the meeting.
This talk is Co-Hosted by the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
This event will be recorded and archived on the ArtSci Salon YouTube channel.
Camille Baker is a Professor in Interactive and Immersive Arts, University for the Creative Arts [UCA], Farnham Surrey (UK). She is an artist-performer/researcher/curator within various art forms: immersive experiences, participatory performance and interactive art, mobile media art, tech fashion/soft circuits/DIY electronics, responsive interfaces and environments, and emerging media curating. Maker of participatory performance and immersive artwork, Baker develops methods to explore expressive non-verbal modes of communication, extended embodiment and presence in real and mixed reality and interactive art contexts, using XR, haptics/ e-textiles, wearable devices and mobile media. She has an ongoing fascination with all things emotional, embodied, felt, sensed, the visceral, physical, and relational.
Her 2018 book _New Directions in Mobile Media and Performance_ showcases exciting approaches and artists in this space, as well as her own work. She has been running a regular meetup group with smart/e-textile artists and designers since 2014, called e-stitches, where participants share their practice and facilitate workshops of new techniques and innovations. Baker also has been Principal Investigator for UCA for the EU-funded STARTS Ecosystem (starts.eu), Apr 2019-Nov 2021, and founder/initiator of the EU WEAR Sustain project (wearsustain.eu), Jan 2017-April 2019.
The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1993 [sic] sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
Who is Nick Pringle and how accurate are his predictions?
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …
I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.
I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing as the words are sometimes used as synonyms and sometimes as distinctions. We do it all the time in all sorts of conversations but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of locations/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
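The Verge’s virtual-shirt example — an NFT as a permanent receipt that any platform can honour — comes down to a shared ownership ledger. Here is a deliberately oversimplified sketch of that idea, my own illustration rather than any real NFT standard or blockchain: a plain dictionary stands in for the shared on-chain record, and the token and user names are invented.

```python
# A toy "shared ledger": every platform consults the same ownership
# record, so a virtual good bought on Platform A can be recognized
# ("redeemed") on Platforms B to Z. Real NFTs put this record on a
# blockchain; a dictionary stands in for it here.

LEDGER = {}  # token_id -> {"item": ..., "owner": ...}

def mint(token_id, item, owner):
    """Record that `owner` owns the virtual good `item`."""
    LEDGER[token_id] = {"item": item, "owner": owner}

def transfer(token_id, new_owner):
    """Resale: the receipt moves to a new owner, visible everywhere."""
    LEDGER[token_id]["owner"] = new_owner

def redeem(token_id, user):
    """Any platform reading the ledger checks ownership the same way.

    Returns the item name if `user` owns the token, else False.
    """
    record = LEDGER.get(token_id)
    return record is not None and record["owner"] == user and record["item"]

# Buy a virtual shirt on "Platform A"...
mint("shirt-001", "virtual shirt", "alice")
# ...and any other platform consulting the same ledger can honour it.
print(redeem("shirt-001", "alice"))  # alice owns it
print(redeem("shirt-001", "bob"))    # bob does not
```

What makes real NFTs interesting (and contentious) is that no single company controls the ledger; in this sketch, whoever holds the dictionary does.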
Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th, but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
If you have time, do read Heath’s article in its entirety.
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email from Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both.
What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly? How and when should users’ consent be collected? Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? What data sharing arrangements need to be put in place and how will these be implemented?
There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.
I’m starting to think we should be talking about RR (real reality) as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities, by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7, has attracted 40,000 visitors since it opened in July [2021?].
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages.
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse, effectively replacing the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.
Toronto’s (Canada) Art/Sci Salon (also known as, Art Science Salon) sent me an August 26, 2021 announcement (received via email) of an online show with a limited viewing period (BTW, nice play on words with the title echoing the name of the institution mentioned in the first sentence),
The Fields Institute was closed to the public for a long time. Yet, it has not been empty. Peculiar sounds and intriguing silences, the flows of the few individuals and the janitors occasionally visiting the building made it surprisingly alive. Microorganisms, dust specks and other invisible guests populated the space undisturbed while the humans were away. The building is alive. We created site specific installations reflecting this condition: Elaine Whittaker and her poet collaborators take us on a journey of the microbes living in our proximal spaces. Joel Ong and his collaborators have recorded space data in the building: the result is an emergent digital organism. Roberta Buiani and Kavi interpret the venue as an organism which can be taken outside on a mobile gallery.
PROXIMAL FIELDS will be visible September 8-12 2021 at
With: Elaine Whittaker, Joel Ong, Nina Czegledy, Roberta Buiani, Sachin Karghie, Ryan Martin, Racelar Ho, Kavi. Poetry: Maureen Hynes, Sheila Stewart
Video: Natalie Plociennik
This event is one of many such events being held for the Ars Electronica 2021 festival.
For anyone who remembers back to my May 3, 2021 posting (scroll down to the relevant subhead; a number of events were mentioned), I featured a show from the ArtSci Salon community called ‘Proximal Spaces’, a combined poetry reading and bioart experience.
Many of the same artists and poets seem to have continued working together to develop more work based on the ‘proximal’ for a larger international audience.
International and local scene details (e.g., same show? what is Ars Electronica? etc.)
As you may have noticed from the announcement, there are a lot of different institutions involved.
Local: Fields Institute and ArtSci Salon
The Fields Institute is properly known as The Fields Institute for Research in Mathematical Sciences and is located at the University of Toronto. Here’s more from their About Us webpage,
Founded in 1992, the Fields Institute was initially located at the University of Waterloo. Since 1995, it has occupied a purpose-built building on the St. George Campus of the University of Toronto.
The Institute is internationally renowned for strengthening collaboration, innovation, and learning in mathematics and across a broad range of disciplines. …
The Fields Institute is named after the Canadian mathematician John Charles Fields (1863-1932). Fields was a pioneer and visionary who recognized the scientific, educational, and economic value of research in the mathematical sciences. Fields spent many of his early years in Berlin and, to a lesser extent, in Paris and Göttingen, the principal mathematical centres of Europe of that time. These experiences led him, after his return to Canada, to work for the public support of university research, which he did very successfully. He also organized and presided over the 1924 meeting of the International Congress of Mathematicians in Toronto. This quadrennial meeting was, and still is, the major meeting of the mathematics world.
There is no Nobel Prize in mathematics, and Fields felt strongly that there should be a comparable award to recognize the most outstanding current research in mathematics. With this in mind, he established the International Medal for Outstanding Discoveries in Mathematics, which, contrary to his personal directive, is now known as the Fields Medal. Information on Fields Medal winners can be found through the International Mathematical Union, which chooses the quadrennial recipients of the prize.
Fields’ name was given to the Institute in recognition of his seminal contributions to world mathematics and his work on behalf of high level mathematical scholarship in Canada. The Institute aims to carry on the work of Fields and to promote the wider use and understanding of mathematics in Canada.
ArtSci Salon consists of a series of semi-informal gatherings facilitating discussion and cross-pollination between science, technology, and the arts. ArtSci Salon started in 2010 as a spin-off of Subtle Technologies Festival to satisfy increasing demands by the audience attending the Festival to have a more frequent (monthly or bi-monthly) outlet for debate and information sharing across disciplines. In addition, it responds to the recent expansion in the GTA [Greater Toronto Area] area of a community of scientists and artists increasingly seeking collaborations across disciplines to successfully accomplish their research projects and questions.
Ars Electronica started life as a Festival for Art, Technology and Society in 1979 in Linz, Austria. Here’s a little more from their About webpage,
… Since September 18, 1979, our world has changed radically, and digitization has covered almost all areas of our lives. Ars Electronica’s philosophy has remained the same over the years. Our activities are always guided by the question of what new technologies mean for our lives. Together with artists, scientists, developers, designers, entrepreneurs and activists, we shed light on current developments in our digital society and speculate about their manifestations in the future. We never ask what technology can or will be able to do, but always what it should do for us. And we don’t try to adapt to technology, but we want the development of technology to be oriented towards us. Therefore, our artistic research always focuses on ourselves, our needs, our desires, our feelings.
They have a number of initiatives in addition to the festival. The next festival, A New Digital Deal, runs from September 8 – 12, 2021 (Ars Electronica 2021). Here’s a little more from the festival webpage,
Ars Electronica 2021, the festival for art, technology and society, will take place from September 8 to 12. For the second time since 1979, it will be a hybrid event that includes exhibitions, concerts, talks, conferences, workshops and guided tours in Linz, Austria, and more than 80 other locations around the globe.
Leonardo; The International Society for Arts, Sciences and Technology
Ars Electronica and Leonardo; The International Society for Arts, Sciences and Technology (ISAST) cooperate on projects but they are two different entities. Here’s more from the About LEONARDO webpage,
Fearlessly pioneering since 1968, Leonardo serves as THE community forging a transdisciplinary network to convene, research, collaborate, and disseminate best practices at the nexus of arts, science and technology worldwide. Leonardo serves a network of transdisciplinary scholars, artists, scientists, technologists and thinkers, who experiment with cutting-edge, new approaches, practices, systems and solutions to tackle the most complex challenges facing humanity today.
As a not-for-profit 501(c)3 enterprising think tank, Leonardo offers a global platform for creative exploration and collaboration reaching tens of thousands of people across 135 countries. Our flagship publication, Leonardo, the world’s leading scholarly journal on transdisciplinary art, anchors a robust publishing partnership with MIT Press; our partnership with ASU [Arizona State University] infuses educational innovation with digital art and media for lifelong learning; our creative programs span thought-provoking events, exhibits, residencies and fellowships, scholarship and social enterprise ventures.
I have a description of Leonardo’s LASER (Leonardo Art Science Evening Rendezvous), from my March 22, 2021 posting (the Garden comes up next),
“… a program of international gatherings that bring artists, scientists, humanists and technologists together for informal presentations, performances and conversations with the wider public. The mission of LASER is to encourage contribution to the cultural environment of a region by fostering interdisciplinary dialogue and opportunities for community building.”
Culturing transnational dialogue for creative hybridity
Leonardo LASER Garden gathers our global network of artists, scientists, humanists and technologists together in a series of hybrid formats addressing the world’s most pressing issues. Animated by the theme of a “new digital deal” and grounded in the UN Sustainability Goals, Leonardo LASER Garden cultivates our values of equity and inclusion by elevating underrepresented voices in a wide-ranging exploration of global challenges, digital communities and placemaking, space, networks and systems, the digital divide – and the impact of interdisciplinary art, science and technology discourse and collaboration.
Dovetailing with the launch of LASER Linz, this asynchronous multi-platform garden will highlight the best of the Leonardo Network (spanning 47 cities worldwide) and our transdisciplinary community. In “Extraordinary Times Call for Extraordinary Vision: Humanizing Digital Culture with the New Creativity Agenda & the UNSDGs [United Nations Sustainable Development Goals],” Leonardo/ISAST CEO Diana Ayton-Shenker presents our vision for shaping our global future. This will be followed by a Leonardo Community Lounge open to the general public, with the goal of encouraging contributions to the cultural environments of different regions through transnational exchange and community building.
Getting back to the beginning, you can view Proximal Fields from September 8 – 12, 2021 as part of the Ars Electronica 2021 festival, specifically, the ‘garden’ series.
The Canadian Institute for Advanced Research (CIFAR) is investigating the ‘Future of Being Human’ and has instituted a global call for proposals, but there is one catch: your team has to include at least one person (with or without Canadian citizenship) who is living and working in Canada. (Note: I am available.)
New program proposals should explore the long term intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet.
We invite bold proposals from researchers at universities or research institutions that ask new questions about our complex emerging world. We are confronting challenging problems that require a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences [emphasis mine]) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity.
CIFAR is committed to creating a more diverse, equitable, and inclusive environment. We welcome proposals that include individuals from countries and institutions that are not yet represented in our research community.
Here’s a description, albeit a little repetitive, of what CIFAR is asking researchers to do (from the Program Guide [PDF]),
For CIFAR’s next Global Call for Ideas, we are soliciting proposals related to The Future of Being Human, exploring in the long term the intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the natural world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. We invite bold proposals that ask new questions about our complex emerging world, where the issues under study are entangled and dynamic. We are confronting challenging problems that necessitate a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity. [p. 2 print; p. 4 PDF]
I stumbled across this event on my Twitter feed (h/t @katepullinger; Note: Kate Pullinger is a novelist and Professor of Creative Writing and Digital Media, Director of the Centre for Cultural and Creative Industries [CCCI] at Bath Spa University in the UK).
Anyone who visits here with any frequency will have noticed I have a number of articles on technology and the body (you can find them in the ‘human enhancement’ category and/or search for the machine/flesh tag). Boddington’s view is more expansive than the one I’ve taken and I welcome it. First, here’s the event information and, then, a link to her open access paper from February 2021.
This year’s CCCI Public Lecture will be given by Ghislaine Boddington. Ghislaine is Creative Director of body>data>space and Reader in Digital Immersion at University of Greenwich. Ghislaine has worked at the intersection of the body, the digital, and spatial research for many years. This will be her first in-person appearance since the start of the pandemic, and she will share with us the many insights she has gathered during this extraordinary pivot to online interfaces much of the world has been forced to undertake.
With a background in performing arts and body technologies, Ghislaine is recognised as a pioneer in the exploration of digital intimacy, telepresence and virtual physical blending since the early 90s. As a curator, keynote speaker and radio presenter she has shared her outlook on the future human into the cultural, academic, creative industries and corporate sectors worldwide, examining topical issues with regards to personal data usage, connected bodies and collective embodiment. Her research led practice, examining the evolution of the body as the interface, is presented under the heading ‘The Internet of Bodies’. Recent direction and curation outputs include “me and my shadow” (Royal National Theatre 2012), FutureFest 2015-18 and Collective Reality (Nesta’s FutureFest / SAT Montreal 2016/17). In 2017 Ghislaine was awarded the international IX Immersion Experience Visionary Pioneer Award. She recently co-founded University of Greenwich Strategic Research Group ‘CLEI – Co-creating Liveness in Embodied Immersion’ and is an Associate Editor for AI & Society (Springer). Ghislaine is a long term advocate for diversity and inclusion, working as a Trustee for Stemette Futures and Spokesperson for Deutsche Bank ‘We in Social Tech’ initiative. She is a team member and presenter with BBC World Service flagship radio show/podcast Digital Planet.
Date and time
Thu, 24 June 2021 08:00 – 09:00 [am] PDT
Boddington’s paper is what ignited my interest; here’s a link to and a citation for it,
The Weave—virtual physical presence design—blending processes for the future
Coming from a performing arts background, dance led, in 1989, I became obsessed with the idea that there must be a way for us to be able to create and collaborate in our groups, across time and space, whenever we were not able to be together physically. The focus of my work, as a director, curator and presenter across the last 30 years, has been on our physical bodies and our data selves and how they have, through the extended use of our bodies into digitally created environments, started to merge and converge, shifting our relationship and understanding of our identity and our selfhood.
One of the key methodologies that I have been using since the mid-1990s is inter-authored group creation, a process we called The Weave (Boddington 2013a, b). It uses the simple and universal metaphor of braiding, plaiting or weaving three strands of action and intent, these three strands being:
1. The live body—whether that of the performer, the participant, or the public;
2. The technologies of today—our tools of virtually physical reflection;
3. The content—the theme in exploration.
As with a braid or a plait, the three strands must be weaved simultaneously. What is key to this weave is that in any co-creation between the body and technology, the technology cannot work without the body; hence, there will always be virtual/physical blending. [emphasis mine]
Cyborg culture is also moving forward at a pace, with most countries having four or five cyborgs who have attained media status. Manel Munoz is, in a sense, the weather man: fascinated and affected by cyclones and anticyclones, he has a back-of-the-head implant that sends vibrations to different sides of his head linked to weather changes around him.
Neil Harbisson from Northern Ireland calls himself a trans-species rather than a cyborg, because his implant is permanently fused into the crown of his head. He is the first trans-species/cyborg to have his passport photo accepted as he exists with his fixed antenna. Neil has, from birth, an eye condition called greyscale, which means he only sees the world in grey and white. He uses his antennae camera to detect colour, and it sends a vibration with a different frequency for each colour viewed. He is learning what colours are within his viewpoint at any given time through the vibrations in his head, a synaesthetic method of transference of one sense for another. Moon Ribas, a Spanish choreographer and a dancer, had two implants placed into the top of her feet, set to sense seismic activity as it occurs worldwide. When a small earthquake occurs somewhere, she receives small vibrations; a bigger earthquake gives her body a more intense vibration. She dances as she receives and reacts to these transferred data. She feels a need to be closer to our earth, a part of nature (Harbisson et al. 2018).
Medical, non-medical and sub-dermal implants
Medical implants, embedded into the body or subdermally (nearer the surface), have rapidly advanced in the last 30 years with extensive use of cardiac pacemakers, hip implants, implantable drug pumps and cochlear implants helping partially deaf people to hear.
Deep body and subdermal implants can be personalised to your own needs. They can be set to transmit chosen aspects of your body data outwards, but they also can receive and control data in return. There are about 200 medical implants in use today. Some are complex, like deep brain stimulation for motor neurone disease, and others we are more familiar with, for example, pacemakers. Most medical implants are not digitally linked to the outside world at present, but this is in rapid evolution.
Kevin Warwick, a pioneer in this area, has interconnected himself and his partner with implants for joint use of their personal and home computer systems through their BrainGate (Warwick 2008) implant, an interface between the nervous system and the technology. They are connected bodies. He works onwards with his experiments to feel the shape of distant objects and heat through fingertip implants.
‘Smart’ implants into the brain for deep brain stimulation are in use and in rapid advancement. The ethics of these developments is under constant debate in 2020 and will be onwards, as is proved by the mass coverage of the Neuralink, Elon Musk’s innovation which connects to the brain via wires, with the initial aim to cure human diseases such as dementia, depression and insomnia and onwards plans for potential treatment of paraplegia (Musk 2016).
Given how many times I’ve featured art/sci (also known as, art/science and/or sciart) and cyborgs and medical implants here, my excitement was a given.
Since posting about Science Odyssey, I have received a number of emails announcing events, and not all of them are part of the Odyssey experience.
From the looks of things, May 2021 is going to be a very busy month. Given how early it is in the month I expect to receive another batch of notices and most likely will post another May 2021 events roundup.
At this point, there’s a heavy emphasis on architecture (human and other) and design.
Proximal Spaces on May 3, 2021
This is one of those event-within-an-event notices. There’s a festival, FACTT 20/21 – Improbable Times: Trans-disciplinary & Trans-national Festival of Art & Science, in Portugal, and within the festival there is Proximal Spaces in Toronto, Canada. Here’s more from the ArtScience Salon (ArtSci Salon) May 1, 2021 announcement (received via email),
May 3, 2021 – 3.00 PM (EST) [12 pm PST]
Join us at this poetry reading by six Canadian artists responding to the work of eight bioartists. Event will be streamed on Facebook Live.
Please note that you don’t need to sign up in order to access the streaming as it is public.
‘Proximal Spaces’ is a multi-modal exhibition that explores the environment at multiple scales in concentric circles of proximity to the body. Inspired by Edward Hall’s [Edward Twitchell Hall or E. T. Hall] 1961 notation of intimate (1.5ft), personal (4ft), social (12ft) and public (25ft) spaces in his “Proxemics” diagrams, the installation portion presents similar diagrams of his concentric circles affixed to the wall of the gallery space, as well as developed in Augmented Reality around the venue. Each of these diagrams is a montage of microscopic and sub-microscopic images of the everyday environment as experienced by a collaborative team of international bioartists, and arrayed in a fractal form. In addition, an AR-enabled application explores the invisible environments of computer generated bioaerosols suspended in the air of virtual space.
This work visualizes the variegated response of the biological environment to unprecedented levels of physical distancing and self-isolation and recent developments in vaccine design that impact our understanding of interpersonal and interspecies ‘messaging’. What continues to thrive in the 6ft ‘dead spaces’ between us? What invisible particles linger on and create a biological archive through our movements through space? The artwork presents an interesting mode of interspecies engagement through hybrid virtual and physical interaction.
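As an aside, Hall’s four proxemic zones are simple enough to express in a few lines of code. This is my own toy illustration of the categories quoted above (the function and names are invented here, not part of the exhibition):

```python
# Edward Hall's proxemic zones, with outer radii in feet as given in
# the exhibition description: intimate (1.5), personal (4), social (12),
# public (25). Anything farther is outside his diagram.
PROXEMIC_ZONES = [
    ("intimate", 1.5),
    ("personal", 4.0),
    ("social", 12.0),
    ("public", 25.0),
]

def zone_for(distance_ft):
    """Return the innermost zone whose outer radius contains the distance."""
    for name, outer in PROXEMIC_ZONES:
        if distance_ft <= outer:
            return name
    return "beyond public"

print(zone_for(3.0))  # a conversation at 3 ft sits in the "personal" zone
print(zone_for(6.0))  # the 6 ft 'dead space' of physical distancing is "social"
```

Notably, pandemic-era physical distancing (6 ft) lands squarely in what Hall called social space, which is part of what the exhibition plays on.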
In the spring of 2021, six Canadian poets – Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart – came together to pursue a lyric response to Proximal Spaces. They were challenged and inspired by the virtual exhibition with its combination of art, science, and proxemics. The focus of the artworks – what inhabits and thrives in the spaces and environments where we live, work, and breathe – generated six distinctive poems.
Poets: Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart
Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
This project is part of FACTT-Improbable Times (http://factt.arteinstitute.org/), a project spearheaded and promoted by the Arte Institute, in production and conception partnership with Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM [National Autonomous University of Mexico], Arte+Ciência and Bioscénica (Mexico), and Central Academy of Fine Arts (China). Together we will work and bring into being our ideas and actions for this during the year of 2021!
Morphogenesis: Geometry, Physics, and Biology on May 5, 2021
I love this image; he seems so delighted to show off the bug (?),
Here’s more from the Perimeter Institute for Theoretical Physics (PI) April 30, 2021 announcement (received via email),
Earth is home to millions of different species – from simple plants and unicellular organisms to trees and whales and humans. The incredible diversity of life on Earth led Charles Darwin to lament that it is “enough to drive the sanest man mad.”
How can we make sense of this diversity of form, which arises from the process of morphogenesis that links molecular- and cellular-level processes to conspire and lead to the emergence of “endless forms most beautiful,” as Darwin said?
In his May 5 lecture webcast, Harvard professor L. Mahadevan [Lakshminarayanan Mahadevan] will take viewers on a journey into the mathematical, physical, and biological workings of morphogenesis to demonstrate how scientists are beginning to unlock many of the secrets that have vexed scientists since Darwin.
Possible Worlds: “How Will We Live Together?” on May 6, 2021
For those who are interested in human architecture, there’s this from a May 3, 2021 Berggruen Institute announcement (received via email) about a talk by Chilean architect and 2016 Pritzker Prize winner, Alejandro Gastón Aravena Mori (Alejandro Aravena),
Possible Worlds: How Will We Live Together
May 6, 2021
11am — Virtual
Possible Worlds: The UCLA [University of California at Los Angeles] – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute.
Please click here to submit a question to Alejandro Aravena
About Alejandro Aravena
Alejandro Aravena is an architect, founder and executive director of the firm Elemental. His works include the “Siamese Towers” at the Catholic University of Chile and the Novartis office campus in Shanghai. In 2016, the New York Times named Aravena one of the world’s “creative geniuses” who had helped define culture. He and Elemental have received numerous honors, including the 2016 Pritzker Architecture Prize, the 2015 London Design Museum’s Design of the Year award and the 2011 Index Award. Aravena currently serves as the president of the Pritzker Prize jury. Aravena’s lecture title, “How Will We Live Together?” echoes the theme of the upcoming international architecture exhibition, Biennale Architettura, in which Elemental will be participating.
Featuring a discussion with moderator Dana Cuff
Dana Cuff is Professor of Architecture and Urban Design at UCLA, where she is also Director of cityLAB, an award-winning think tank that advances goals of spatial justice through experimental urbanism and architecture (www.cityLAB.aud.ucla.edu). Since receiving her Ph.D. in Architecture from Berkeley, Cuff has published and lectured widely about affordable housing, the architectural profession, and Los Angeles’ urban history. She is author of several books, including The Provisional City about postwar housing in L.A., and a co-authored book called Urban Humanities: New Practices for Reimagining the City, documenting her collaborative, crossdisciplinary research and teaching at UCLA funded by the Mellon Foundation. Based on cityLAB’s design research, Cuff co-authored landmark legislation that permits “backyard homes” on some 8.1 million single-family properties, doubling the density of suburbs across California (AB 2299, Bloom-2016). In 2019, cityLAB opened a satellite center in the MacArthur Park/Westlake neighborhood where a deep, multi-year exchange with community organizations is already demonstrating ways that humanistic design of the public realm can create more compassionate cities. Cuff recently received three awards that describe her career: Women in Architecture Activist of the Year (2019, Architectural Record); Distinguished Leadership in Architectural Research (2020, ARCC); and Educator of the Year (2021, American Institute of Architects Los Angeles).
About the Series
Possible Worlds: The UCLA – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute. This semiannual series will bring some of today’s most imaginative intellectual leaders and creators to deliver public talks on the future of humanity. Through the lens of their singular achievements and experiences, these trailblazers in creativity, innovation, philosophy and politics will lecture on provocative topics that explore current challenges and transformations in human progress.
UCLA faculty and students have long been at the forefront of interpreting the world’s legacy of language, literature, art and science. UCLA Humanities serves a vital role in readying future leaders to articulate their thoughts with clarity and imagination, to interpret the world of ideas, and to live as informed citizens in an increasingly complex world. We are proud to be partnering in this lecture series with the Berggruen Institute, whose work addresses the “Great Transformations” taking place in technology and culture, politics and economics, global power arrangements, and even how we perceive ourselves as humans. The Institute seeks to connect deep thought in the human sciences — philosophy and culture — to the pursuit of practical improvements in governance.
A selection committee comprising representatives of UCLA and the Berggruen Institute has been formed to make recommendations for lecturers. The committee includes:
• Ursula Heise, Professor and Chair, Department of English; Professor, UCLA Institute of the Environment and Sustainability; Marcia H. Howard Term Chair in Literary Studies
• Pamela Hieronymi, Professor of Philosophy
• Anastasia Loukaitou-Sideris, Professor of Urban Planning; Associate Provost for Academic Planning
• Todd Presner, Associate Dean, Digital Initiatives; Chair of the Digital Humanities Program; Michael and Irene Ross Endowed Chair of Yiddish Studies; Professor of Germanic Languages and Comparative Literature
• Lynn Vavreck, Professor, Department of Political Science; Marvin Hoffenberg Professor of American Politics and Public Policy
• David Schaberg, Senior Dean of the UCLA College; Dean of Humanities; Professor, Asian Languages & Cultures
• Nils Gilman, Vice President of Programs, the Berggruen Institute
Generative Art and Computational Creativity starts May 7, 2021
A Spring 2021 MetaCreation Lab (Simon Fraser University; SFU) newsletter (received via email on April 23, 2021) highlights a number of festival submissions and papers along with some news about a free introductory course. First, the video introduction to the course,
This first course in the two-part program, Generative Art and Computational Creativity [there’s a fee for part two], proposes an introduction and overview of the history and practice of generative arts and computational creativity with an emphasis on the formal paradigms and algorithms used for generation. The full program will be taught by Philippe Pasquier, a multi-disciplinary researcher and Associate Professor in the School of Interactive Arts and Technology at Simon Fraser University.
On the technical side, we will study core techniques from mathematics, artificial intelligence, and artificial life that are used by artists, designers and musicians across the creative industry. We will start with processes involving chance operations, chaos theory and fractals and move on to see how stochastic processes, and rule-based approaches can be used to explore creative spaces. We will study agents and multi-agent systems and delve into cellular automata, and virtual ecosystems to explore their potential to create novel and valuable artifacts and aesthetic experiences.
The presentation is illustrated by numerous examples from past and current productions across creative practices such as visual art, new media, music, poetry, literature, performing arts, design, architecture, games, robot-art, bio-art and net-art. Students get to practice these algorithms first hand and develop new generative pieces through assignments and projects in MAX. Finally, the course addresses relevant philosophical, and societal debates associated with the automation of creative tasks.
Music for this course was composed with the StyleMachineLite Max for Live engine of Metacreative Inc.
Artistic direction: Philippe Pasquier, Programming: Arne Eigenfeldt, Sound Production: Philippe Bertrand
This course is in adaptive mode and is open for enrollment. Learn more about adaptive courses here.
Session 1: Introduction and Typology of Generative Art (May 7, 2021) To start off this course, we define generative art and computational creativity and discuss how these relate through the study of prominent examples. We establish a typology of generative systems based on levels of autonomy and agency.
Session 2: History Of Generative Art, Chance Operations, and Chaos Theory (May 14, 2021) Generative art is nothing new, and this session goes through the history of the field from pre-history to the popularization of computers. We study chance, noise, fractals, chaos theory, and their applications in visual art and music.
Session 3: Rule-Based Systems, Grammars and Markov Chains (May 21, 2021) This session introduces and illustrates the generative potential of rule-based and expert systems. We study generative grammars through the Chomsky hierarchy, and introduce L-systems, shape grammars, and Markov chains. We discuss how these have been applied in visual art, music, design, architecture, and electronic literature.
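For readers wondering what an L-system actually looks like, here is a tiny sketch of my own (not course material) using the classic textbook algae example attributed to Lindenmayer, where every symbol in the string is rewritten in parallel each generation:

```python
# Lindenmayer's algae L-system: A -> AB, B -> A.
# Starting from the axiom "A", each generation rewrites every symbol
# in parallel; successive string lengths follow the Fibonacci sequence.
RULES = {"A": "AB", "B": "A"}

def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        # Parallel rewriting: every symbol is replaced in the same pass.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for g in range(6):
    print(g, lsystem("A", RULES, g))
```

Interpreting such strings as drawing instructions (turtle graphics) is how L-systems end up generating plant-like forms in visual art.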
Session 4: Cognitive Agents And Multiagent Systems (May 28, 2021) This session introduces the concepts underlying the notion of artificial agents. We study the belief, desire, and intention (BDI) cognitive architecture, and message-based agent communication resting on speech act theory. We discuss musical agents, conversational agents, chat bots and Twitter bots and their artistic potential.
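The speech-act idea, that messages between agents are classified by intent ("inform", "request", and so on) rather than just carrying data, can be sketched very simply. This toy example is mine, with invented agent and performative names loosely following common convention, not anything from the course:

```python
# A toy speech-act messaging sketch: agents hold beliefs and react to
# messages according to the message's performative (its communicative intent).
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}

    def receive(self, performative, content, sender):
        if performative == "inform":
            # Adopt the informed fact into our beliefs.
            self.beliefs.update(content)
        elif performative == "request":
            # Reply with an inform only if we actually believe the answer.
            if content in self.beliefs:
                sender.receive("inform", {content: self.beliefs[content]}, self)

alice, bob = Agent("alice"), Agent("bob")
bob.receive("inform", {"tempo": 120}, alice)  # alice tells bob the tempo
bob.receive("request", "tempo", alice)        # alice asks; bob informs her back
```

A musical agent built this way might request a tempo or key from its peers before improvising, which is roughly the flavour of the musical agents the session mentions.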
Session 5: Reactive Agents And Multiagent Systems (June 4, 2021) In this session, we introduce reactive agents and the subsumption architecture. We study boids, and detail how complex behaviors can emerge from a distributed population of simple artificial agents. We look at a myriad of applications from ant painting to swarm music and we discuss artistic approaches to virtual ecosystems.
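The boids model mentioned in this session is surprisingly compact: each bird-like agent applies just three local rules (cohesion, separation, alignment) to its neighbours. Here is a bare-bones sketch of my own; the weights and neighbourhood radius are arbitrary illustrative values, not anything from the course:

```python
import math

# Minimal boids: each boid steers by three local rules applied to
# neighbours within a radius: cohesion (toward the local centre),
# separation (away from crowding), alignment (match local velocity).
class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(boids, radius=5.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
    updates = []
    for b in boids:
        near = [o for o in boids
                if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        ax = ay = 0.0
        if near:
            n = len(near)
            cx = sum(o.x for o in near) / n        # cohesion target
            cy = sum(o.y for o in near) / n
            ax += w_coh * (cx - b.x); ay += w_coh * (cy - b.y)
            for o in near:                          # separation
                ax += w_sep * (b.x - o.x); ay += w_sep * (b.y - o.y)
            avx = sum(o.vx for o in near) / n       # alignment
            avy = sum(o.vy for o in near) / n
            ax += w_ali * (avx - b.vx); ay += w_ali * (avy - b.vy)
        updates.append((b.vx + ax, b.vy + ay))
    # Apply all updates at once so every boid sees the same previous state.
    for b, (vx, vy) in zip(boids, updates):
        b.vx, b.vy = vx, vy
        b.x += b.vx; b.y += b.vy

flock = [Boid(0.0, 0.0, 1.0, 0.0), Boid(1.0, 0.0, 0.0, 1.0)]
step(flock)  # after one step the two boids' headings pull toward each other
```

Run over many boids and many steps, exactly this kind of purely local rule set produces the flocking behaviour that swarm art and swarm music build on.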
Session 6: A-Life And Cellular Automaton (June 11, 2021) In this concluding session, we introduce artificial life (A-life). We study cellular automaton, multi-agent ecosystems for music, visual art, non-photorealistic rendering, and gaming. The session also concludes the class by reflecting on the state of the art in the field and its consequences on creative practices.
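To give a concrete taste of the cellular automata this final session covers, here is a one-line elementary automaton, Wolfram's Rule 90, where each cell's next state is simply the XOR of its two neighbours. Again, this is my own minimal sketch, not course code:

```python
# Elementary cellular automaton, Rule 90: each new cell is the XOR of
# its left and right neighbours (edges wrap around).
def rule90_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 7 + [1] + [0] * 7  # a single live cell in the middle
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = rule90_step(row)
```

Starting from a single live cell, the printed rows trace out the beginnings of a Sierpinski triangle, a classic demonstration of complex structure emerging from a trivial local rule.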
The human being – so fragile, so ethereal, speaking a sweet language. A piece of architecture – so physically imminent, so solid, speaking a language of hardness.
Photo by Oliviero Godi – Frantoio Ipogeo nel Salento
Join photographer & architect Oliviero Godi as he explores the relationship between the body & the material, the transient & the permanent, in search of the correct balance where neither element prevails.
To make your donation, please send an e-transfer to email@example.com. Thank you!
Learn More [about this other upcoming Cultural Events]
Respiration and the Brain on May 25, 2021
Before getting to the April 29, 2021 BrainTalks announcement, here’s a little bit about BrainTalks from their webspace on the University of British Columbia (UBC) website,
BrainTalks is a series of talks inviting you to contemplate emerging research about the brain. Researchers studying the brain, from various disciplines including psychiatry, neuroscience, neuroimaging, and neurology, gather to discuss current leading edge topics on the mind.
As an audience member, you join the discussion at the end of the talk, both in the presence of the entire audience, and with an opportunity afterwards to talk with the speaker more informally in a catered networking session. The talks also serve as a connecting place for those interested in similar topics, potentially launching new endeavours or simply connecting people in discussions on how to approach their research, their knowledge, or their clinical practice.
For the general public, these talks serve as a channel whereby knowledge usually sequestered in inaccessible journals or university classrooms is now available, potentially allowing people to better understand their brains and minds, how they work, and how to optimize brain health.
[UBC School of Medicine Department of Psychiatry]
Onto the April 29, 2021 BrainTalks announcement (received via email),
BrainTalks: Respiration and the Brain
Tuesday, May 25th, 2021 from 6:00 PM – 7:30 PM [PT]
Join us for a series of online talks exploring questions of respiration and the brain. Emerging empirical research will be presented on ventilation-associated brain injury and breathing-based interventions for the treatment of stress and anxiety disorders. The presenters will include Dr. Thiago Bassi, Dr. Lloyd Lalande and Taylor Willi, MSc.
Dr. Thiago Bassi will address the biological connection between the brain and lungs, exploring the potential adverse effects of mechanical ventilation on the brain. Dr. Bassi is a neurosurgeon and neuroscientist, who worked clinically for more than ten years in Brazil. He joined the Lungpacer Medical team and C2B2 lab in 2017, and is currently completing his doctorate in Biomedicine Physiology at Simon Fraser University.
Dr. Lloyd Lalande will describe Guided Respiration Mindfulness Therapy (GRMT), an emerging clinical breathwork intervention, and its effectiveness in reducing depression, anxiety and stress, and in increasing mindfulness and sense of wellbeing. Dr. Lalande is an Assistant Professor teaching psychology at the Buddhist TzuChi University of Science and Technology, and the developer of GRMT. His current research is based out of the TzuChi Buddhist General Hospital, investigating GRMT as an evidence-based treatment for a variety of outcomes.
Mr. Taylor Willi will present the findings of his dissertation research comparing the effect of performing daily brief relaxation techniques on measures of stress and anxiety. Mr. Willi completed a Masters Degree of Neuroscience at the University of British Columbia, and is currently completing his doctorate in Clinical Psychology at Simon Fraser University.
Each of the speakers will present an overview of their research findings investigating respiration in three unique ways. Following their presentations, the speakers will be available for an audience-driven panel discussion.