I received an April 5, 2023 announcement for the 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) via email. Understandably, given that it’s an Institute of Electrical and Electronics Engineers (IEEE) conference, they’re looking for submissions focused on developing the technology,
Last days to submit your contribution to our Special Session on “eXtended Reality as a gateway to the Metaverse: Practices, Theories, Technologies and Applications” – IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) – October 25-27, 2023 – Milan – https://metroxraine.org/special-session-17.
I want to remind you that the deadline of April 7 [extended to April 14, 2023, per an April 11, 2023 notice received via email] is for the submission of a 1-2 page Abstract or a Graphical Abstract to show the idea you are proposing. You will have time to finalise your work by the deadline of May 15.
Please see the CfP below for details and forward it to colleagues who might be interested in contributing to this special session.
I’m looking forward to meeting you, virtually or in your presence, at IEEE MetroXRAINE 2023.
Best regards, Giuseppe Caggianese
Research Scientist National Research Council (CNR) [Italy] Institute for High-Performance Computing and Networking (ICAR) Via Pietro Castellino 111, 80131, Naples, Italy
Here are the specifics for the Special Session’s Call for Papers (from the April 5, 2023 email announcement),
2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) https://metroxraine.org/
October 25-27, 2023 – Milan, Italy.
SPECIAL SESSION DESCRIPTION ————————-

The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The term eXtended Reality (XR) encompasses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the metaverse. According to tech companies and venture capitalists, the metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social activity, and trading. Thus, the metaverse will change how users interact with brands, intellectual properties, health services, cultural heritage, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service, and then attend a university course, precisely as in the real world.

The metaverse’s development will require new software architectures that enable decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design an efficient data management system and to prevent privacy violations. Finally, the convergence of physical reality, virtually enhanced, with an always-on virtual space highlights the need to rethink the current paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions.
This special session aims to explore how the realization of the metaverse can transform certain application domains such as: (i) healthcare, in which metaverse solutions can, for instance, improve communication between patients and physicians; (ii) cultural heritage, with potentially more effective solutions for tourism guidance, site maintenance, and heritage object conservation; and (iii) industry, where they can enable data-driven decision making, smart maintenance, and overall asset optimisation.
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for metaverse
Decentralized and Collaborative Architectures for metaverse
Interoperability for metaverse
Tools to help creators to build the metaverse
Operations and Maintenance in metaverse
Data security and privacy mechanisms for metaverse
Cryptocurrency, token, NFT Solutions for metaverse
Fraud-Detection in metaverse
Cyber Security for metaverse
Data Analytics to Identify Malicious Behaviors in metaverse
Blockchain/AI technologies in metaverse
Emerging Technologies and Applications for metaverse
New models to evaluate the impact of the metaverse
Interactive Data Exploration and Presentation in metaverse
Human-Computer Interaction for metaverse
Human factors issues related to metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
Abstract Submission Deadline: April 7, 2023 (extended) NOTE: a 1-2 page abstract or a graphical abstract
Full Paper Submission Deadline: May 15, 2023 (extended)
Full Paper Acceptance Notification: June 15, 2023
Final Paper Submission Deadline: July 31, 2023
SUBMISSION AND DECISIONS ———————— Authors should prepare an Abstract (1 – 2 pages) that clearly indicates the originality of the contribution and the relevance of the work. The Abstract should include the title of the paper, names and affiliations of the authors, an abstract, keywords, an introduction describing the nature of the problem, a description of the contribution, the results achieved and their applicability.
When the first review process has been completed, authors will receive a notification of either acceptance or rejection of the submission. If the abstract has been accepted, the authors can prepare a full paper. The format for the full paper is identical to the format for the abstract except for the number of pages: the full paper has a required minimum length of five (5) pages and a maximum of six (6) pages. Full papers will be reviewed by the Technical Program Committee. Authors of accepted full papers must submit the final version of the paper according to the deadline, register for the workshop, and attend to present their papers. The maximum length for final papers is 6 pages. All contributions will be peer-reviewed, and acceptance will be based on quality, originality, and relevance. Accepted papers will be submitted for inclusion in the IEEE Xplore Digital Library.
The papers must be submitted in PDF format electronically via the EDAS online submission and review system: https://edas.info/newPaper.php?c=30746. To submit abstracts or draft papers to the special session, please follow the submission instructions for regular sessions, but remember to specify the special session to which the paper is directed.
The special session organizers and other external reviewers will review all submissions.
CONFERENCE PROCEEDINGS ———————————– All contributions will be peer-reviewed, and acceptance will be based on quality, originality, and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post-publication; more information will be provided soon.
I’ve started to think that paper books will be on an ‘endangered species’ list in the not too distant future. Now, it seems researchers at the University of Surrey (UK) may have staved off that scenario according to an August 3, 2022 news item on ScienceDaily,
Augmented reality might allow printed books to make a comeback against the e-book trend, according to researchers from the University of Surrey.
Surrey has introduced the third generation (3G) version of its Next Generation Paper (NGP) project, allowing the reader to consume information on the printed paper and screen side by side.
Dr Radu Sporea, Senior lecturer at the Advanced Technology Institute (ATI), comments:
“The way we consume literature has changed over time with so many more options than just paper books. Multiple electronic solutions currently exist, including e-readers and smart devices, but no hybrid solution which is sustainable on a commercial scale.
“Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”
Power efficiency and pre-printed conductive paper are some of the new features which allow Surrey’s augmented books to now be manufactured on a semi-industrial scale. With no wiring visible to the reader, Surrey’s augmented reality books allow users to trigger digital content with a simple gesture (such as a swipe of a finger or turn of a page), which will then be displayed on a nearby device.
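The announcement doesn’t detail the software side of that trigger-and-display pipeline; as a rough sketch of the idea (all names and filenames below are hypothetical, not taken from the Next Generation Paper project’s actual code), a sensed gesture on a given page could be mapped to a piece of content for the nearby device like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PageEvent:
    page: int     # page currently detected as open
    gesture: str  # e.g. "swipe" or "page_turn"

# Each (page, gesture) pair maps to multimedia content that a nearby
# device would play. Both filenames are invented for this example.
CONTENT_MAP = {
    (12, "swipe"): "video/castle-flyover.mp4",
    (12, "page_turn"): "audio/chapter-intro.mp3",
}

def handle_event(event: PageEvent) -> Optional[str]:
    """Return the content the nearby device should display, if any."""
    return CONTENT_MAP.get((event.page, event.gesture))

print(handle_event(PageEvent(page=12, gesture="swipe")))  # video/castle-flyover.mp4
```

The point of the sketch is that the book itself only needs to emit lightweight (page, gesture) events; all the heavy multimedia stays on the companion device.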
George Bairaktaris, Postgraduate researcher at the University of Surrey and part of the Next Generation Paper project team, said:
“The original research was carried out to enrich travel experiences by creating augmented travel guides. This upgraded 3G model allows for the possibility of using augmented books for different areas such as education. In addition, the new model disturbs the reader less by automatically recognising the open page and triggering the multimedia content.”
“What started as an augmented book project, evolved further into scalable user interfaces. The techniques and knowledge from the project led us into exploring organic materials and printing techniques to fabricate scalable sensors for interfaces beyond the a-book”.
Here’s a link to and a citation for the paper,
Augmented Books: Hybrid Electronics Bring Paper to Life by Georgios Bairaktaris, Brice Le Borgne, Vikram Turkani, Emily Corrigan-Kavanagh, David M. Frohlich, and Radu A. Sporea. IEEE Pervasive Computing (early access), pp. 1-8. DOI: 10.1109/MPRV.2022.3181440 Published online: July 12, 2022
In this Buddhist sci-fi mystery set in near-future Phnom Penh, a young Cambodian detective untangles a link between her friend’s past-life dreams of a lost gold artifact and a neuroscientist’s determination to attain digital enlightenment.
Cambodian Sci-Fi Movie Karmalink Explores Enlightenment, Reincarnation, and Nanotechnology
The Cambodian science fiction movie Karmalink, which won awards on its film festival debut last year for its intriguing mix of high-tech mystery and Buddhist philosophy, has released a new trailer ahead of its North American release next month.
“In near-future Phnom Penh, a teenage boy teams up with a street-smart girl from his neighborhood to untangle the mystery of his past-life dreams,” a synopsis on the website of executive producer Valerie Steinberg explains. “What begins as a hunt for a Buddhist treasure soon leads to greater discoveries that will either end in digital enlightenment or a total loss of identity.” (Valerie Steinberg)
Directed and co-written by Jake Wachtel, Karmalink’s story is set in the Cambodian capital Phnom Penh, and sets out to explore the intersection of the Buddhist themes of karma, reincarnation, and enlightenment with the consciousness-altering implications of augmented reality and artificial intelligence, as well as the growing disparity between rich and poor.
The main plot follows a 13-year-old boy, Leng Heng (Leng Heng Prak), and his friend, Srey Leak (Srey Leak Chhith), who live in a crowded, dilapidated community on the outskirts of Phnom Penh of the near future.
Heng has been having a recurring dream about a golden Buddha statue owned by various people who he believes to be his past incarnations. Heng enlists the help of Leak to untangle the links between his dreams and the aspirations of a prominent neuroscientist to attain digital enlightenment via nanotechnology [emphasis mine] in order to find the truth and discover their own destiny.
Unfortunately, there are no more details as to how nanotechnology helps with attaining ‘digital enlightenment’. As to what digital enlightenment might be, that too is a mystery.
The trailer is made up of the many awards and snippets that the film received during its film festival run, which started in September 2021 at that year’s Venice International Film Critics’ Week. It was also announced a few days ago that the film will be released theatrically in major US cities, and on Video On Demand in both the US and Canada, on July 15, 2022. The film is in Khmer with English subtitles and runs 102 minutes. As its description puts it, “Interrogating processes of neo-colonialism, and highlighting the alienating effects of technological progress, Jake Wachtel’s Karmalink is a mind-bending tale of reincarnation, artificial consciousness, and the search for enlightenment.”
Sadly, the lead actor, Leng Heng Prak, has died since production of the film.
I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on the sense of touch in virtual environments, originally announced in October 2022,
A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.
Here’s what you’ll need to wear for this virtual tactile experience,
“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch that can record and playback tactile sensations over space and time.”
In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.
Limitations in existing techniques
Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration to the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations at the location of the electrode by passing a local electric current through the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high-voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, and stimulate the receptors and nerves underneath, which poses a safety concern. The tactile rendering resolution also needed improvement.
The latest electro-tactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness, in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage to below 30 V, ensuring the tactile rendering is safe and comfortable.
They also proposed a novel super-resolution strategy that can render tactile sensation at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), so the user can feel more realistic tactile perception.
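The announcement describes the super-resolution strategy only at a high level. A common way to render a “phantom” tactile point between two physical stimulation sites is to split the drive amplitude between the neighbouring electrodes in proportion to the target’s distance from each; the sketch below is my illustration of that general weighting idea, not code or parameters from the paper itself:

```python
# Illustrative sketch: render a tactile point *between* two physical
# electrodes by linearly splitting the stimulation amplitude between
# them ("phantom sensation" interpolation). This is an interpretation
# of the strategy described above, not the authors' implementation.

def split_amplitude(target: float, left_pos: float, right_pos: float,
                    total_amp: float) -> tuple:
    """Weight the drive amplitudes of two neighbouring electrodes so the
    perceived stimulus sits at `target` (same units as the positions)."""
    if not left_pos <= target <= right_pos:
        raise ValueError("target must lie between the two electrodes")
    span = right_pos - left_pos
    w_right = (target - left_pos) / span  # closer to right -> more right drive
    w_left = 1.0 - w_right
    return (total_amp * w_left, total_amp * w_right)

# A point exactly midway gets equal drive on both electrodes.
print(split_amplitude(target=0.5, left_pos=0.0, right_pos=1.0, total_amp=1.0))
```

Under this kind of scheme, a row of N electrodes can address many more perceived points than physical sites, which is consistent with the reported jump from 25 to 105 points.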
Tactile stimuli with high spatial resolution
“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm2), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.
The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.
The proposed strategy breaks down letters and numerical digits into individual strokes, ordered in the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise the characters presented by feeling the direction and the sequence of the strokes. “This would be particularly useful for people who lose their eyesight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.
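The release doesn’t give the actual stroke encoding, but the idea lends itself to a simple representation: each character is stored as an ordered list of directional strokes that the stimulator plays back one after another. The stroke sequences below are invented for illustration and are not the scheme used in the paper:

```python
# Toy illustration of the stroke-based alphabet idea: each character is
# an ordered sequence of directional strokes, rendered in writing order
# on the fingertip stimulator. The sequences here are made up for the
# example, not taken from the paper.

STROKES = {
    "L": ["down", "right"],   # vertical stroke, then the base
    "T": ["right", "down"],   # top bar, then the stem
    "7": ["right", "down-left"],
}

def render(text: str) -> list:
    """Flatten a string into the stroke sequence a stimulator would play."""
    sequence = []
    for ch in text.upper():
        sequence.extend(STROKES.get(ch, []))  # unknown characters are skipped
    return sequence

print(render("LT"))  # ['down', 'right', 'right', 'down']
```

Because the strokes follow handwriting order, a reader only has to map felt directions onto letter shapes they already know, rather than memorising a new dot code.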
Enabling touch in the metaverse
Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.
The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer just 1 mm in radius and 0.44 mm thick based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.
“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.
Here’s a link to and a citation for the paper,
Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances, Vol. 8, Issue 36, 9 September 2022. DOI: 10.1126/sciadv.abp8738
As noted in the headline for this post, I have two items. For anyone unfamiliar with XR and the other (AR, MR, and VR) realities, I found a good description which I placed in my October 22, 2021 posting (scroll down to the “How many realities are there?” subhead about 70% of the way down).
eXtended Reality in Rome
I got an invitation (via a February 24, 2022 email) to participate in a special session at one of the 2022 IEEE (Institute of Electrical and Electronics Engineers) conferences (more about the conference later).
The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The term eXtended Reality (XR) encompasses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the Metaverse. According to tech companies and venture capitalists, the Metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the Metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social activity, and trading. Thus, the Metaverse will change how users interact with brands, intellectual properties, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service, and then attend a university course, precisely as in the real world.
The Metaverse’s development will require new software architectures that enable decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design an efficient data management system and to prevent privacy violations. Finally, the convergence of physical reality, virtually enhanced, with an always-on virtual space highlights the need to rethink the current paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions.
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for Metaverse
Decentralized and Collaborative Architectures for Metaverse
Interoperability for Metaverse
Tools to help creators to build the Metaverse
Operations and Maintenance in Metaverse
Data security and privacy mechanisms for Metaverse
Cryptocurrency, token, NFT Solutions for Metaverse
Fraud-Detection in Metaverse
Cyber Security for Metaverse
Data Analytics to Identify Malicious Behaviors in Metaverse
Blockchain/AI technologies in Metaverse
Emerging Technologies and Applications for Metaverse
New models to evaluate the impact of the Metaverse
Interactive Data Exploration and Presentation in Metaverse
Human factors issues related to Metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
ABOUT THE ORGANIZERS
Giuseppe Caggianese is a Research Scientist at the National Research Council of Italy. He received the Laurea degree in computer science magna cum laude in 2010 and the Ph.D. degree in Methods and Technologies for Environmental Monitoring in 2013 from the University of Basilicata, Italy.
His research activities are focused on the field of Human-Computer Interaction (HCI) and Artificial Intelligence (AI) to design and test advanced interfaces adaptive to specific uses and users in both augmented and virtual reality. He authored more than 30 scientific papers published in international journals, conference proceedings, and books. He also serves on program committees of several international conferences and workshops.
Ugo Erra is an Assistant Professor (qualified as Associate Professor) at the University of Basilicata (UNIBAS), Italy. He is the founder of the Computer Graphics Laboratory at the University of Basilicata. He received an MSc/diploma degree in Computer Science from the University of Salerno, Italy, in 2001 and a PhD in Computer Science in 2004.
His research focuses on Real-Time Computer Graphics, Information Visualization, Artificial Intelligence, and Parallel Computing. He has been involved in several research projects; among these, one project funded by the European Commission, on which he was a research fellow, and four projects funded by Area Science Park, a public national research organization that promotes the development of innovation processes, on which he was principal investigator. He has (co-)authored about 14 international journal articles, 45 international conference papers, and two book chapters. He has supervised four PhD students. He organized the Workshop on Parallel and Distributed Agent-Based Simulations, a satellite workshop of Euro-Par, from 2013 to 2015. He has served on the program committees of more than 20 international conferences and as a referee for more than ten journals.
The 2022 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2022) will be an international event mainly aimed at creating a synergy between experts in eXtended Reality, Brain-Computer Interface, and Artificial Intelligence, with special attention to measurement [i.e., metrology].
The conference will be a unique opportunity for discussion among scientists, technologists, and companies on very specific sectors in order to increase the visibility and the scientific impact for the participants. The organizing formula will be original owing to the emphasis on the interaction between the participants to exchange ideas and material useful for their research activities.
MetroXRAINE will be configured as a synergistic collection of sessions organized by the individual members of the Scientific Committee. Round tables will be held for different projects and hot research topics. Moreover, we will have demo sessions, students contests, interactive company expositions, awards, and so on.
The Conference will be a hybrid conference [emphasis mine], with the possibility of attendance remotely or in presence.
CALL FOR PAPERS
The Program Committee invites the submission of Abstracts (1 – 2 pages) for the IEEE MetroXRAINE 2022 Conference, 26-28 October, 2022.
All contributions will be peer-reviewed and acceptance will be based on quality, originality and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post publication.
Abstract Submission Deadline: March 28, 2022
Full Paper Submission Deadline: May 10, 2022
Extended Abstract Acceptance Notification: June 10, 2022
Final Paper Submission Deadline: July 30, 2022
According to the email invitation, “IEEE MetroXRAINE 2022 … will be held on October 26-28, 2022 in Rome.” You can find more details on the conference website.
Council of Canadian Academies launches four projects
The Council of Canadian Academies (CCA) is pleased to announce it will undertake four new assessments beginning this spring:
Gene-edited Organisms for Pest Control Advances in gene editing tools and technologies have made the process of changing an organism’s genome more efficient, opening up a range of potential applications. One such application is in pest control. By editing the genomes of organisms and introducing them to wild populations, it’s now possible to control insect-borne disease and invasive species, or reverse insecticide resistance in pests. But the full implications of using these methods remain uncertain.
This assessment will examine the scientific, bioethical, and regulatory challenges associated with the use of gene-edited organisms and technologies for pest control.
Sponsor: Health Canada’s Pest Management Regulatory Agency
The Future of Arctic and Northern Research in Canada The Arctic is undergoing unprecedented changes, spurred in large part by climate change and globalization. Record levels of sea ice loss are expected to lead to increased trade through the Northwest Passage. Ocean warming and changes to the tundra will transform marine and terrestrial ecosystems, while permafrost thaw will have significant effects on infrastructure and the release of greenhouse gases. As a result of these trends, Northern communities, and Canada as an Arctic and maritime country, are facing profound economic, social, and ecosystem impacts.
This assessment will examine the key foundational elements to create an inclusive, collaborative, effective, and world-class Arctic and northern science system in Canada.
Sponsor: A consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet
Quantum Technologies Quantum technologies will affect all sectors of the Canadian economy. Built on the principles of quantum physics, these emerging technologies present significant opportunities in the areas of sensing and metrology, computation and communication, and data science and artificial intelligence, among others. But there is also the potential they could be used to facilitate cyberattacks, putting financial systems, utility grids, infrastructure, personal privacy, and national security at risk. A comprehensive exploration of the capabilities and potential vulnerabilities of these technologies will help to inform their future deployment across society and the economy.
This assessment will examine the impacts, opportunities, and challenges quantum technologies present for industry, governments, and people in Canada.
Sponsor: National Research Council Canada and Innovation, Science and Economic Development Canada
International Science and Technology Partnership Opportunities International partnerships focused on science, technology, and innovation can provide Canada with an opportunity to advance the state of knowledge in areas of national importance, help address global challenges, and contribute to UN Sustainable Development Goals. Canadian companies could also benefit from global partnerships to access new and emerging markets.
While there are numerous opportunities for international collaborations, Canada has finite resources to support them. Potential partnerships need to be evaluated not just on strengths in areas such as science, technology, and innovation, but also political and economic factors.
This assessment will examine how public, private, and academic organizations can evaluate and prioritize science and technology partnership opportunities with other countries to achieve key national objectives.
Sponsor: Global Affairs Canada
Gene-edited Organisms for Pest Control and International Science and Technology Partnership Opportunities are funded by Innovation, Science and Economic Development Canada (ISED). Quantum Technologies is funded by the National Research Council of Canada (NRC) and ISED, and the Future of Arctic and Northern Research in Canada is funded by a consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet. The reports will be released in 2023-24.
Multidisciplinary expert panels will be appointed in the coming months for all four assessments.
You can find in-progress and completed CCA reports here.
Fingers crossed that the CCA looks a little further afield for their international experts than the US, UK, Australia, New Zealand, and northern Europe.
Finally, I’m guessing that the gene-editing and pest management report will cover and, gingerly, recommend germline editing (which is currently not allowed in Canada) and gene drives too.
It will be interesting to see who’s on that committee. If you’re really interested in the report topic, you may want to check out my April 26, 2019 posting and scroll down to the “Criminal ban on human gene-editing of inheritable cells (in Canada)” subhead where I examined what seemed to be an informal attempt to persuade policy makers to allow germline editing or gene-editing of inheritable cells in Canada.
The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …
I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.
Who is Nick Pringle and how accurate are his predictions?
I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinct terms. We all do this in conversation, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
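The “NFT as receipt” idea quoted above can be made concrete with a toy sketch. This is a deliberately simplified, hypothetical illustration of my own, not how any real blockchain or NFT standard (such as ERC-721) actually works: real tokens live on a decentralized ledger, not in a Python dictionary.

```python
# Toy sketch of NFTs as ownership receipts for virtual goods.
# Hypothetical and simplified: a real NFT ledger is a blockchain,
# not an in-memory dict, and the names here are my own inventions.

class ToyNFTLedger:
    """Records which wallet owns which virtual good, with a transfer log."""

    def __init__(self):
        self.owners = {}   # token_id -> current owner's wallet
        self.history = []  # append-only log: the "permanent receipt"

    def mint(self, token_id, wallet):
        if token_id in self.owners:
            raise ValueError(f"{token_id} already minted")
        self.owners[token_id] = wallet
        self.history.append(("mint", token_id, wallet))

    def transfer(self, token_id, sender, receiver):
        if self.owners.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owners[token_id] = receiver
        self.history.append(("transfer", token_id, receiver))

    def redeemable_by(self, token_id, wallet):
        # Any platform (A through Z) that trusts the ledger can check
        # ownership and render the matching virtual good for that user.
        return self.owners.get(token_id) == wallet


ledger = ToyNFTLedger()
ledger.mint("virtual-shirt-42", "alice-wallet")
print(ledger.redeemable_by("virtual-shirt-42", "alice-wallet"))  # True
ledger.transfer("virtual-shirt-42", "alice-wallet", "bob-wallet")
print(ledger.redeemable_by("virtual-shirt-42", "alice-wallet"))  # False
```

The point of the sketch is only the architecture: because the receipt lives in a shared ledger rather than inside any one platform’s database, the shirt bought on Platform A remains provably yours on Platforms B through Z.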
Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th, but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entails. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
If you have time, do read Heath’s article in its entirety.
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement not mentioned in the other two articles about the rebranding (Note: A link has been removed),
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both.
What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly? How and when should users’ consent be collected? Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? What data sharing arrangements need to be put in place and how will these be implemented?
There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.
I’m starting to think we should be talking about RR (real reality) as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content that can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
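For what it’s worth, the distinctions in that summary are compact enough to write down as a small lookup table. The wording below is my own condensation of the quoted definitions, not an official taxonomy:

```python
# My own condensation of the VR/AR/MR/XR distinctions summarized above;
# the one-line definitions are paraphrases, not an official taxonomy.
from enum import Enum

class Reality(Enum):
    VR = "fully virtual environment; the real world is blocked out"
    AR = "virtual overlay on the real world; the overlay cannot interact with it"
    MR = "virtual objects anchored in, and interacting with, the real world"
    XR = "umbrella term covering VR, AR, and MR"

def describe(r: Reality) -> str:
    return f"{r.name}: {r.value}"

for r in Reality:
    print(describe(r))
```

The useful takeaway is the nesting: XR is not a fourth technology alongside the others but the umbrella label for all three.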
Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7, has attracted 40,000 visitors since it opened in July [2021?].
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for augmented reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages.
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, in which various people and companies are attempting to create a multiplicity of metaverses, or a single metaverse that effectively replaces the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.
The report was launched by 221 A, a Vancouver (Canada)-based arts and culture organization and funded by the Canada Council for the Arts through their Digital Strategy Fund. Here’s more from the BACP report in the voice of its research leader, Jesse McKee,
… The blockchain is the openly readable and unalterable ledger technology, which is most broadly known for supporting such applications as bitcoin and other cryptocurrencies. This report documents the first research phase in a three-phased approach to establishing our digital strategy [emphasis mine], as we [emphasis mine] learn from the blockchain development communities. This initiative’s approach is an institutional one, not one that is interpreting the technology for individuals, artists and designers alone. The central concept of the blockchain is that exchanges of value need not rely on centralized authentication from institutions such as banks, credit cards or the state, and that this exchange of value is better programmed and tracked with metadata to support the virtues, goals and values of a particular network. This concept relies on a shared, decentralized and trustless ledger. “Trustless” in the blockchain community is an evolution of the term trust, shifting its signification as a contract usually held between individuals, managed and upheld by a centralized social institution, and redistributing it amongst the actors in a blockchain network who uphold the platform’s technical operational codes and can access ledgers of exchange. All parties involved in the system are then able to reach a consensus on what the canonical truth is regarding the holding and exchange of value within the system.
… [from page 6 of the report]
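For readers who, like me, find ‘trustless ledger’ a bit abstract: the core idea is just a chain of records where each record carries a cryptographic fingerprint of the one before it. Here’s a toy sketch in Python (my own illustration, not anything from the report or any real blockchain),

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Tampering with any block breaks every hash link after it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "genesis")
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
assert verify(chain)

chain[1]["data"] = "alice pays bob 500"  # tamper with the ledger...
assert not verify(chain)                 # ...and anyone can tell
```

Because every block’s hash depends on the block before it, altering an earlier record invalidates all the links after it, which is what lets a decentralized network agree on a ‘canonical truth’ without a bank or state in the middle.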
McKee manages to keep the report from floating away in a sea of utopian bliss with some cautionary notes. Still, as a writer I’m surprised he didn’t notice that a ‘blockchain’, which (per the report’s title) is supposed to ‘unlock’ cultural ‘padlocks’, poses a linguistic conundrum if nothing else.
This looks like an interesting report but it helps to know some ‘critical theory’ jargon. That said, the bulk of the report is relatively accessible reading, although some of the essays (at the end) from the artist-researchers are tough going.
One more thought: the report does present many exciting and transformative possibilities and I would dearly love to see much of this come to pass. I am more hesitant than McKee and his colleagues, and that hesitation is beautifully described in an essay (The Vampire Problem: Illustrating the Paradox of Transformative Experience) first published September 3, 2017 by Maria Popova (originally published on Brain Pickings),
To be human is to suffer from a peculiar congenital blindness: On the precipice of any great change, we can see with terrifying clarity the familiar firm footing we stand to lose, but we fill the abyss of the unfamiliar before us with dread at the potential loss rather than jubilation over the potential gain of gladnesses and gratifications we fail to envision because we haven’t yet experienced them. …
Arts and blockchain events in Vancouver
The 221 A launch event for the report kicked off a series of related events, here’s more from a 221 A May 17, 2021 news release (Note: the first and second events have already taken place),
Please join us for a live stream events series bringing together key contributors of the Blockchains & Cultural Padlocks Research Report alongside a host of leading figures across academic, urbanism, media and blockchain development communities.
The Vancouver Biennale folks first sent me information about Voxel Bridge in 2018 but this new material is the most substantive description yet, even without an opening date. From a June 6, 2021 article by Kevin Griffin for the Vancouver Sun (Note: Links have been removed),
The underside of the Cambie Bridge is about to be transformed into the unique digital world of Voxel Bridge. Part of the Vancouver Biennale, Voxel Bridge will exist both as a physical analogue art work and an online digital one.
The public art installation is by Jessica Angel. When it’s fully operational, Voxel Bridge will have several non-fungible tokens called NFTs that exist in an interactive 3-D world that uses blockchain technology. The intention is to create a fully immersive installation. Voxel Bridge is being described as the largest digital public art installation of its kind.
“To my knowledge, nothing has been done at this scale outdoors that’s fully interactive,” said Sammi Wei, the Vancouver Biennale‘s operations director. “Once the digital world is built in your phone, you’ll be able to walk around objects. When you touch one, it kind of vibrates.”
Just as a pixel refers to a point in a two-dimensional world, voxel refers to a similar unit in a 3-D world.
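In code terms, a voxel grid just chops 3-D space into equal cubes the way an image chops 2-D space into pixels. A minimal illustration (my own, not anything from the Voxel Bridge project),

```python
# A voxel grid quantizes 3-D space into cubes the way an
# image quantizes 2-D space into pixels.
def voxel_index(point, voxel_size=0.5):
    """Map a 3-D point (in metres) to the integer voxel cell it falls in."""
    x, y, z = point
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

assert voxel_index((1.2, 0.3, 2.7)) == (2, 0, 5)
assert voxel_index((-0.1, 0.0, 0.0)) == (-1, 0, 0)
```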
Voxel Bridge will be about itself: it will tell the story of what it means to use a new decentralized technology called blockchain to create Voxel Bridge.
There are a few more Voxel Bridge details in a June 7, 2021 article by Vincent Plana for the Daily Hive,
… Voxel Bridge draws parallels between blockchain technology and the structural integrity of the underpass itself. The installation will be created by using adhesive vinyl and augmented reality technology.
Griffin’s description in his June 6, 2021 article gives you a sense of what it will be like to become immersed in Voxel Bridge,
Starting Monday [June 14, 2021], a crew will begin installing a vinyl overlay directly on the architecture on the underside of the bridge deck, around the columns, and underfoot on the sidewalk from West 2nd to the parking-lot road. Enclosing a space of about 18,000 square feet, the vinyl layer will be visible without any digital enhancement. It will look like an off-kilter circuit board.
“It’ll be like you’re standing in the middle of a circuit board,” [emphasis mine] she said. “At the same time, the visual perception will be slightly off. It’s like an optical illusion. You feel the ground is not quite where it’s supposed to be.”
Since posting about Science Odyssey, I have received a number of emails announcing events, and not all of them are part of the Odyssey experience.
From the looks of things, May 2021 is going to be a very busy month. Given how early it is in the month I expect to receive another batch of notices and most likely will post another May 2021 events roundup.
At this point, there’s a heavy emphasis on architecture (human and other) and design.
Proximal Spaces on May 3, 2021
This is one of those event-within-an-event notices. There’s a festival, FACTT 20/21 – Improbable Times: Trans-disciplinary & Trans-national Festival of Art & Science, in Portugal, and within the festival there is Proximal Spaces in Toronto, Canada. Here’s more from the ArtScience Salon (ArtSci Salon) May 1, 2021 announcement (received via email),
May 3, 2021 – 3.00 PM (EST) [12 pm PST]
Join us at this poetry reading by six Canadian artists responding to the work of eight bioartists. Event will be streamed on Facebook Live.
Please note that you don’t need to sign up in order to access the streaming as it is public.
‘Proximal Spaces’ is a multi-modal exhibition that explores the environment at multiple scales in concentric circles of proximity to the body. Inspired by Edward Hall’s [Edward Twitchell Hall or E. T. Hall] 1961 notation of intimate (1.5ft), personal (4ft), social (12ft) and public (25ft) spaces in his “Proxemics” diagrams, the installation portion presents similar diagrams of his concentric circles affixed to the wall of the gallery space, as well as developed in Augmented Reality around the venue. Each of these diagrams is a montage of microscopic and sub-microscopic images of the everyday environment as experienced by a collaborative team of international bioartists, and arrayed in a fractal form. In addition, an AR-enabled application explores the invisible environments of computer generated bioaerosols suspended in the air of virtual space.
This work visualizes the variegated response of the biological environment to unprecedented levels of physical distancing and self-isolation and recent developments in vaccine design that impact our understanding of interpersonal and interspecies ‘messaging’. What continues to thrive in the 6ft ‘dead spaces’ between us? What invisible particles linger on and create a biological archive through our movements through space? The artwork presents an interesting mode of interspecies engagement through hybrid virtual and physical interaction.
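For the curious, Hall’s concentric zones are simple enough to sketch as a distance classifier. This is my own toy example, using the distances cited in the announcement,

```python
# Hall's proxemic zones, using the distances (in feet) cited above.
ZONES = [(1.5, "intimate"), (4.0, "personal"), (12.0, "social"), (25.0, "public")]

def proxemic_zone(distance_ft: float) -> str:
    """Classify an interpersonal distance into one of Hall's zones."""
    for limit, name in ZONES:
        if distance_ft <= limit:
            return name
    return "beyond public"

# The pandemic's six-foot rule lands in the 'social' zone.
assert proxemic_zone(6.0) == "social"
assert proxemic_zone(1.0) == "intimate"
```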
In the spring of 2021, six Canadian poets – Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart – came together to pursue a lyric response to Proximal Spaces. They were challenged and inspired by the virtual exhibition with its combination of art, science, and proxemics. The focus of the artworks – what inhabits and thrives in the spaces and environments where we live, work, and breathe—generated six distinctive poems.
Poets: Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart
Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
This project is part of FACTT-Improbable Times (http://factt.arteinstitute.org/), a project spearheaded and promoted by the Arte Institute, in production and conception partnership with Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM [National Autonomous University of Mexico], Arte+Ciência and Bioscénica (Mexico), and Central Academy of Fine Arts (China). Together we will work and bring our ideas and actions into being during the year of 2021!
Morphogenesis: Geometry, Physics, and Biology on May 5, 2021
I love this image; he seems so delighted to show off the bug (?),
Here’s more from the Perimeter Institute for Theoretical Physics (PI) April 30, 2021 announcement (received via email),
Earth is home to millions of different species – from simple plants and unicellular organisms to trees and whales and humans. The incredible diversity of life on Earth led Charles Darwin to lament that it is “enough to drive the sanest man mad.”
How can we make sense of this diversity of form, which arises from the process of morphogenesis that links molecular- and cellular-level processes to conspire and lead to the emergence of “endless forms most beautiful,” as Darwin said?
In his May 5 lecture webcast, Harvard professor L. Mahadevan [Lakshminarayanan Mahadevan] will take viewers on a journey into the mathematical, physical, and biological workings of morphogenesis to demonstrate how researchers are beginning to unlock many of the secrets that have vexed scientists since Darwin.
Possible Worlds: “How Will We Live Together?” on May 6, 2021
For those who are interested in human architecture, there’s this from a May 3, 2021 Berggruen Institute announcement (received via email) about a talk by Chilean architect and 2016 Pritzker Prize winner, Alejandro Gastón Aravena Mori (Alejandro Aravena),
Possible Worlds: How Will We Live Together
May 6, 2021
11am — Virtual
Possible Worlds: The UCLA [University of California at Los Angeles] – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute.
Please click here to submit a question to Alejandro Aravena
About Alejandro Aravena Alejandro Aravena is an architect, founder and executive director of the firm Elemental. His works include the “Siamese Towers” at the Catholic University of Chile and the Novartis office campus in Shanghai. In 2016, the New York Times named Aravena one of the world’s “creative geniuses” who had helped define culture. He and Elemental have received numerous honors, including the 2016 Pritzker Architecture Prize, the 2015 London Design Museum’s Design of the Year award and the 2011 Index Award. Aravena currently serves as the president of the Pritzker Prize jury. Aravena’s lecture title, “How Will We Live Together?” echoes the theme of the upcoming international architecture exhibition, Biennale Architettura, in which Elemental will be participating.
Featuring a discussion with moderator Dana Cuff
Dana Cuff is Professor of Architecture and Urban Design at UCLA, where she is also Director of cityLAB, an award-winning think tank that advances goals of spatial justice through experimental urbanism and architecture (www.cityLAB.aud.ucla.edu). Since receiving her Ph.D. in Architecture from Berkeley, Cuff has published and lectured widely about affordable housing, the architectural profession, and Los Angeles’ urban history. She is author of several books, including The Provisional City about postwar housing in L.A., and a co-authored book called Urban Humanities: New Practices for Reimagining the City, documenting her collaborative, crossdisciplinary research and teaching at UCLA funded by the Mellon Foundation. Based on cityLAB’s design research, Cuff co-authored landmark legislation that permits “backyard homes” on some 8.1 million single-family properties, doubling the density of suburbs across California (AB 2299, Bloom-2016). In 2019, cityLAB opened a satellite center in the MacArthur Park/Westlake neighborhood where a deep, multi-year exchange with community organizations is already demonstrating ways that humanistic design of the public realm can create more compassionate cities. Cuff recently received three awards that describe her career: Women in Architecture Activist of the Year (2019, Architectural Record); Distinguished Leadership in Architectural Research (2020, ARCC); and Educator of the Year (2021, American Institute of Architects Los Angeles).
About the Series Possible Worlds: The UCLA – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute. This semiannual series will bring some of today’s most imaginative intellectual leaders and creators to deliver public talks on the future of humanity. Through the lens of their singular achievements and experiences, these trailblazers in creativity, innovation, philosophy and politics will lecture on provocative topics that explore current challenges and transformations in human progress.
UCLA faculty and students have long been at the forefront of interpreting the world’s legacy of language, literature, art and science. UCLA Humanities serves a vital role in readying future leaders to articulate their thoughts with clarity and imagination, to interpret the world of ideas, and to live as informed citizens in an increasingly complex world. We are proud to be partnering in this lecture series with the Berggruen Institute, whose work addresses the “Great Transformations” taking place in technology and culture, politics and economics, global power arrangements, and even how we perceive ourselves as humans. The Institute seeks to connect deep thought in the human sciences — philosophy and culture — to the pursuit of practical improvements in governance.
A selection committee comprising representatives of UCLA and the Berggruen Institute has been formed to make recommendations for lecturers. The committee includes:
• Ursula Heise, Professor and Chair, Department of English; Professor, UCLA Institute of the Environment and Sustainability; Marcia H. Howard Term Chair in Literary Studies
• Pamela Hieronymi, Professor of Philosophy
• Anastasia Loukaitou-Sideris, Professor of Urban Planning; Associate Provost for Academic Planning
• Todd Presner, Associate Dean, Digital Initiatives; Chair of the Digital Humanities Program; Michael and Irene Ross Endowed Chair of Yiddish Studies; Professor of Germanic Languages and Comparative Literature
• Lynn Vavreck, Professor, Department of Political Science; Marvin Hoffenberg Professor of American Politics and Public Policy
• David Schaberg, Senior Dean of the UCLA College; Dean of Humanities; Professor, Asian Languages & Cultures
• Nils Gilman, Vice President of Programs, the Berggruen Institute
Generative Art and Computational Creativity starts May 7, 2021
A Spring 2021 MetaCreation Lab (Simon Fraser University; SFU) newsletter (received via email on April 23, 2021) highlights a number of festival submissions and papers along with some news about a free introductory course. First, the video introduction to the course,
This first course in the two-part program, Generative Art and Computational Creativity [there’s a fee for part two], offers an introduction to and overview of the history and practice of generative art and computational creativity, with an emphasis on the formal paradigms and algorithms used for generation. The full program will be taught by Philippe Pasquier, a multi-disciplinary researcher and Associate Professor in the School of Interactive Arts and Technology at Simon Fraser University.
On the technical side, we will study core techniques from mathematics, artificial intelligence, and artificial life that are used by artists, designers and musicians across the creative industry. We will start with processes involving chance operations, chaos theory and fractals and move on to see how stochastic processes, and rule-based approaches can be used to explore creative spaces. We will study agents and multi-agent systems and delve into cellular automata, and virtual ecosystems to explore their potential to create novel and valuable artifacts and aesthetic experiences.
The presentation is illustrated by numerous examples from past and current productions across creative practices such as visual art, new media, music, poetry, literature, performing arts, design, architecture, games, robot-art, bio-art and net-art. Students get to practice these algorithms first hand and develop new generative pieces through assignments and projects in MAX. Finally, the course addresses relevant philosophical, and societal debates associated with the automation of creative tasks.
Music for this course was composed with the StyleMachineLite Max for Live engine of Metacreative Inc.
Artistic direction: Philippe Pasquier, Programmation: Arne Eigenfeldt, Sound Production: Philippe Bertrand
This course is in adaptive mode and is open for enrollment. Learn more about adaptive courses here.
Session 1: Introduction and Typology of Generative Art (May 7, 2021) To start off this course, we define generative art and computational creativity and discuss how these relate through the study of prominent examples. We establish a typology of generative systems based on levels of autonomy and agency.
Session 2: History Of Generative Art, Chance Operations, and Chaos Theory (May 14, 2021) Generative art is nothing new, and this session goes through the history of the field from pre-history to the popularization of computers. We study chance, noise, fractals, chaos theory, and their applications in visual art and music.
Session 3: Rule-Based Systems, Grammars and Markov Chains (May 21, 2021) This session introduces and illustrates the generative potential of rule-based and expert systems. We study generative grammars through the Chomsky hierarchy, and introduce L-systems, shape grammars, and Markov chains. We discuss how these have been applied in visual art, music, design, architecture, and electronic literature.
Session 4: Cognitive Agents And Multiagent Systems (May 28, 2021) This session introduces the concepts underlying the notion of artificial agents. We study the belief, desire, and intention (BDI) cognitive architecture, and message based agent communication resting on the speech act theory. We discuss musical agents, conversational agents, chat bots and twitter bots and their artistic potential.
Session 5: Reactive Agents And Multiagent Systems (June 4, 2021) In this session, we introduce reactive agents and the subsumption architecture. We study boids, and detail how complex behaviors can emerge from a distributed population of simple artificial agents. We look at a myriad of applications from ant painting to swarm music and we discuss artistic approaches to virtual ecosystems.
Session 6: A-Life And Cellular Automaton (June 11, 2021) In this concluding session, we introduce artificial life (A-life). We study cellular automaton, multi-agent ecosystems for music, visual art, non-photorealistic rendering, and gaming. The session also concludes the class by reflecting on the state of the art in the field and its consequences on creative practices.
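Cellular automata, which close out the course, are easy to demonstrate: each cell updates from its immediate neighbourhood according to a fixed rule, and complex patterns emerge from that simplicity. Here’s a toy elementary automaton (Wolfram’s Rule 30) in Python; this is my own sketch, not course material,

```python
# An elementary cellular automaton: each cell's next state depends
# only on itself and its two neighbours (Wolfram's Rule 30 here).
RULE = 30

def step(cells: list) -> list:
    """Advance one generation on a circular row of 0/1 cells."""
    n = len(cells)
    return [
        # Encode the 3-cell neighbourhood as a number 0-7, then look up
        # that bit of the rule number.
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 5 + [1] + [0] * 5  # start from a single live cell
for _ in range(3):
    row = step(row)
print(row)
```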
The human being – so fragile, so ethereal, speaking a sweet language. A piece of architecture – so physically imminent, so solid, speaking a language of hardness.
Photo by Oliviero Godi – Frantoio Ipogeo nel Salento
Join photographer & architect Oliviero Godi as he explores the relationship between the body & the material, the transient & the permanent, in search of the correct balance where neither element prevails.
To make your donation, please send an e-transfer to firstname.lastname@example.org. Thank you!
Learn More [about this other upcoming Cultural Events]
Respiration and the Brain on May 25, 2021
Before getting to the April 29, 2021 BrainTalks announcement, here’s a little bit about BrainTalks from their webspace on the University of British Columbia (UBC) website,
BrainTalks is a series of talks inviting you to contemplate emerging research about the brain. Researchers studying the brain, from various disciplines including psychiatry, neuroscience, neuroimaging, and neurology, gather to discuss current leading edge topics on the mind.
As an audience member, you join the discussion at the end of the talk, both in the presence of the entire audience, and with an opportunity afterwards to talk with the speaker more informally in a catered networking session. The talks also serve as a connecting place for those interested in similar topics, potentially launching new endeavours or simply connecting people in discussions on how to approach their research, their knowledge, or their clinical practice.
For the general public, these talks serve as a channel whereby knowledge usually sequestered in inaccessible journals or university classrooms is made available, potentially allowing people to better understand their brains and minds, how they work, and how to optimize brain health.
[UBC School of Medicine Department of Psychiatry]
Onto the April 29, 2021 BrainTalks announcement (received via email),
BrainTalks: Respiration and the Brain
Tuesday, May 25th, 2021 from 6:00 PM – 7:30 PM [PT]
Join us for a series of online talks exploring questions of respiration and the brain. Emerging empirical research will be presented on ventilation-associated brain injury and breathing-based interventions for the treatment of stress and anxiety disorders. The presenters will include Dr. Thiago Bassi, Dr. Lloyd Lalande and Taylor Willi, MSc.
Dr. Thiago Bassi will address the biological connection between the brain and lungs, exploring the potential adverse effects of mechanical ventilation on the brain. Dr. Bassi is a neurosurgeon and neuroscientist, who worked clinically for more than ten years in Brazil. He joined the Lungpacer Medical team and C2B2 lab in 2017, and is currently completing his doctorate in Biomedicine Physiology at Simon Fraser University.
Dr. Lloyd Lalande will describe Guided Respiration Mindfulness Therapy (GRMT), an emerging clinical breathwork intervention, and its effectiveness in reducing depression, anxiety and stress, and in increasing mindfulness and sense of wellbeing. Dr. Lalande is an Assistant Professor teaching psychology at the Buddhist TzuChi University of Science and Technology, and the developer of GRMT. His current research, based out of the TzuChi Buddhist General Hospital, investigates GRMT as an evidence-based treatment for a variety of outcomes.
Mr. Taylor Willi will present the findings of his dissertation research comparing the effect of performing daily brief relaxation techniques on measures of stress and anxiety. Mr. Willi completed a Masters Degree of Neuroscience at the University of British Columbia, and is currently completing his doctorate in Clinical Psychology at Simon Fraser University.
Each of the speakers will present an overview of their research findings investigating respiration in three unique ways. Following their presentations, the speakers will be available for an audience-driven panel discussion.
Plans for last year’s FACTT (Festival of Art and Science) 2020 had to be revised at the last minute due to COVID-19. This year, organizers were prepared, so no in-person sessions had to be cancelled or turned into virtual events. Here’s more from the Jan. 25, 2021 announcement I received (via email) from one of the festival partners, the ArtSci Salon at the University of Toronto,
Join us! Opening of FACTT 20-21 Improbable Times!
Thursday, January 28, 2021 at 3:30 PM EST – 5:30 PM EST Public · Anyone on or off Facebook – link will be disseminated closer to the event.
The Arte Institute and the RHI Initiative, in partnership with Cultivamos Cultura, have the pleasure to present the FACTT 2021 – Festival Art & Science. The festival opens on January 28, at 8.30 PM (GMT), and will be exhibited online on RHI Stage.
This year we are reshaping FACTT! Come join us for the kick-off of this amazing project!
A project spearheaded and promoted by the Arte Institute, in production and conception partnership with Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM, Arte+Ciência and Bioscenica (Mexico), and Central Academy of Fine Arts (China).
Together we will work and bring our ideas and actions into being during the year of 2021!
FACTT 20/21 – Improbable Times presents a series of exceptional artworks jointly curated by Cultivamos Cultura and our partners. The challenge of a translation from the physical space that artworks occupy typically, into an exhibition that lives as a hybrid experience, involves rethinking the materiality of the work itself. It also questions whether we can live and interact with each other remotely and in person producing creative effective collaborative outcomes to immerse ourselves in. Improbable Times brings together a collection of works that reflect the times we live in, the constraints we are faced with, the drive to rethink what tomorrow may bring us, navigate it and build a better future, beyond borders.
January 28, 2021 | 8:30 PM (GMT)
Program:
– Introduction
– Performance Toronto: void * ambience : Latency, with Joel Ong, Michael Palumbo and Kavi
– Performance Mexico: “El Tercero Cuerpo Sonoro” (Third Sonorous Body), by Arte+Ciência
– Q&A
The performance series void * ambience experiments with sound and video content that is developed through a focus on the topographies and networks through which these flow. Initiated during the time of COVID and social distancing, this project explores processes of information sharing, real-time performance and network communication protocols that contribute to the sustenance of our digital communities, shared experiences and telematic intimacies.
“El Tercero Cuerpo Sonoro” project is a digital drift that explores different relationships with the environment, nature, humans and non-humans from the formulation of an intersubjective body. Its main search is to generate resonances with and among the others.
In these complicated times in which it seems that our existence unfolds in front of the screen, confined to the space of the black mirror, it becomes urgent to challenge the limits and scopes of digital life. We need to rethink the way in which we inhabit the others as well as our own subjectivity.
Program:
– Introduction
– Performance Toronto: Proximal Spaces. Artistic Directors: Joel Ong, Elaine Whittaker; Graphic Design: Natalie Plociennik, Bhavesh Kakwani; AR [augmented reality] development: Sachin Khargie, Ryan Martin; Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
– Performance Mexico: Tercero Cuerpo Sonoro (Third Sonorous Body) by Arte+Ciência
FACTT team: Marta de Menezes, Suzanne Anker, Maria Antonia Gonzalez Valerio, Roberta Buiani, Jo Wei, Dalila Honorato, Joel Ong, Lena Lee and Minerva Ortiz.
For FACTT20/21 we propose to put together an exhibition where the virtual and the physical share space, a space that is hybrid from its conception, a space that desires to break the limits of access to culture, to collaboration, to the experience of art. A place where we can think deeply and creatively together about the adaptive moves we had and have to develop to the rapid and sudden changes our lives and environment are going through.
A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.
The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.
“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”
Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.
“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”
The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.
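The news release doesn’t include implementation details, but the core interaction it describes, tracking the hand relative to virtual objects and registering a grab when the two are close enough, can be sketched in a few lines of Python. All of the names and the 5 cm grab radius below are hypothetical illustrations, not Portal-ble’s actual code (which is Android-based):

```python
import math

class VirtualObject:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y, z) in metres, world coordinates
        self.held = False

def update_grab(hand_pos, pinch, objects, grab_radius=0.05):
    """Return the object grabbed this frame, if any.

    hand_pos: fingertip position reported by the IR hand tracker (x, y, z)
    pinch:    True when the tracker reports a pinch/close gesture
    """
    if not pinch:
        return None
    # Grab the nearest object, but only if it is within reach of the hand.
    nearest = min(objects, key=lambda o: math.dist(hand_pos, o.position))
    if math.dist(hand_pos, nearest.position) <= grab_radius:
        nearest.held = True
        return nearest
    return None
```

A real system would run this check every frame and also handle release, rotation and stacking, but the proximity test above is the basic gate for hand-object interaction.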
Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.
“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”
To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.
“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
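In the same spirit as the Photoshop analogy, here is a minimal sketch of one plausible accommodation: if the tracked fingertip “passes through” a virtual object (modelled here as a sphere), push it back out to the surface so the grab still registers. This is an illustrative guess at the kind of tolerance described, not Portal-ble’s actual code:

```python
import math

def snap_to_surface(finger_pos, center, radius):
    """If a tracked fingertip has penetrated a spherical object,
    project it back onto the surface instead of failing the grab."""
    offset = [f - c for f, c in zip(finger_pos, center)]
    d = math.sqrt(sum(v * v for v in offset))
    if d >= radius or d == 0.0:
        return finger_pos  # outside the object (or exactly at centre): leave as-is
    scale = radius / d
    return tuple(c + v * scale for c, v in zip(center, offset))
```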
The team also added sensory feedback, in the form of visual highlights on objects and phone vibrations, to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact: users feel the vibrations in the hand holding the phone, not in the hand that’s actually grabbing for the virtual object. Even so, Huang said, the vibration feedback helped users interact with objects more successfully.
In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.
Huang and his students plan to continue working on Portal-ble, expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone; currently it requires an infrared sensor and an external compute stick for extra processing power.
Huang hopes people will download the freely available source code and try it for themselves. “We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”
Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.
This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.