Just when I thought I was almost caught up, I found this. The study I will be highlighting is from August 2023, but there are interesting developments all the way into October 2023 and beyond. First, the latest in AI (artificial intelligence) devices, from an October 5, 2023 article by Lucas Arender for the Daily Hive, which describes the devices as AI wearables (you could also call them wearable technology). Note: Links have been removed,
Rewind.ai launched Pendant, a necklace that records your conversations and transfers them to your smartphone, creating an audio database (of sorts) for your life.
Meta unveiled a pair of Ray-Ban smart glasses that include an AI chatbot that users can communicate with (which might make you look like you’re talking to yourself).
Sam Altman-backed startup Humane teased its new AI pin at Paris Fashion Week— a screenless lapel device that projects a smartphone-like interface onto users’ hands.
Microsoft filed a patent for an AI backpack that features GPS, voice command, and cameras that could… help us walk in the right direction?
The second item in the list, ‘Ray-Ban Meta Smart Glasses,’ is further described in an October 17, 2023 article by Sarah Bartnicka for the Daily Hive. Note: A link has been removed,
It’s a glorious day for tech dads everywhere: Meta and Ray-Ban smart glasses are officially for sale in Canada.
Driving the news: Meta has become the latest billion-dollar company to officially enter the smart glasses market with the second iteration [emphasis mine] of its design with Ray-Bans, now including a built-in Meta AI assistant, hands-free live streaming features, and a personal audio system.
…
This time around, the technology is better, and both Meta and Snap are pitching their smart glasses as a tool for creators to stay connected with their audiences rather than just a sleek piece of hardware that can blend your digital and physical realities [augmented or extended reality?].
…
Yes, but: As smart glasses creep back into the limelight, people are wary about wearing cameras on their faces. Concerns about always-on cameras and microphones that allow users to record their surroundings without the consent of others will likely stick around. [emphasis mine]
So, are these AI or smart or augmented reality (AR) glasses? In my October 22, 2021 post, I explored a number of realities in the context of the metaverse. Yes, it gets confusing. At any rate, I found these definitions,
Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
“Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.”
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
…
This may change over time but, for now, in answer to the question, “AI or smart or augmented reality (AR) glasses?” you can say any one or all three.
Someone wearing augmented reality (AR) or “smart” glasses could be Googling your face, turning you into a cat or recording your conversation – and that creates a major power imbalance, said Cornell researchers.
Currently, most work on AR glasses focuses primarily on the experience of the wearer. Researchers from the Cornell Ann S. Bowers College of Computing and Information Science and Brown University teamed up to explore how this technology affects interactions between the wearer and another person. Their explorations showed that, while the device generally made the wearer less anxious, things weren’t so rosy on the other side of the glasses.
Jenny Fu, a doctoral student in the field of information science, presented the findings in a new study, “Negotiating Dyadic Interactions through the Lens of Augmented Reality Glasses,” at the 2023 ACM Designing Interactive Systems Conference in July.
AR glasses superimpose virtual objects and text over the field of view to create a mixed-reality world for the user. Some designs are big and bulky, but as AR technology advances, smart glasses are becoming indistinguishable from regular glasses, raising concerns that a wearer could be secretly recording someone or even generating deepfakes with their likeness.
For the new study, Fu and co-author Malte Jung, associate professor of information science and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow, worked with Ji Won Chung, a doctoral student, and Jeff Huang, associate professor of computer science, both at Brown, and Zachary Deocadiz-Smith, an independent extended reality designer.
They observed five pairs of individuals – a wearer and a non-wearer – as each pair discussed a desert survival activity. The wearer received Spectacles, an AR glasses prototype on loan from Snap Inc., the company behind Snapchat. The Spectacles look like avant-garde sunglasses and, for the study, came equipped with a video camera and five custom filters that transformed the non-wearer into a deer, cat, bear, clown or pig-bunny.
Following the activity, the pairs engaged in a participatory design session where they discussed how AR glasses could be improved, both for the wearer and the non-wearer. The participants were also interviewed and asked to reflect on their experiences.
According to the wearers, the fun filters reduced their anxiety and put them at ease during the exercise. The non-wearers, however, reported feeling disempowered because they didn’t know what was happening on the other side of the lenses. They were also upset that the filters robbed them of control over their own appearance. The possibility that the wearer could be secretly recording them without consent – especially when they didn’t know what they looked like – also put the non-wearers at a disadvantage.
The non-wearers weren’t completely powerless, however. A few demanded to know what the wearer was seeing, and moved their faces or bodies to evade the filters – giving them some control in negotiating their presence in the invisible mixed-reality world. “I think that’s the biggest takeaway I have from this study: I’m more powerful than I thought I was,” Fu said.
Another issue is that, like many AR glasses, Spectacles have darkened lenses so the wearer can see the projected virtual images. This lack of transparency also degraded the quality of the social interaction, the researchers reported.
“There is no direct eye contact, which makes people very confused, because they don’t know where the person is looking,” Fu said. “That makes their experiences of this conversation less pleasant, because the glasses blocked out all these nonverbal interactions.”
To create more positive experiences for people on both sides of the lenses, the study participants proposed that smart glasses designers add a projection display and a recording indicator light, so people nearby will know what the wearer is seeing and recording.
Fu also suggests designers test out their glasses in a social environment and hold a participatory design process like the one in their study. Additionally, they should consider these video interactions as a data source, she said.
That way, non-wearers can have a voice in the creation of the impending mixed-reality world.
Rina Diane Caballar’s September 25, 2023 article for IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine provides a few more insights about the research, Note: Links have been removed,
…
“This AR filter interaction is likely to happen in the future with the commercial emergence of AR glasses,” says Jenny Fu, a doctoral student at Cornell University’s Bowers College of Computing and Information Science and one of the two lead authors of the study. “How will that look like, and what are the social and emotional consequences of interacting and communicating through AR glasses?”
…
“When we think about design in HCI [human-computer interface], there is often a tendency to focus on the primary user and design just for them,” Jung says. “Because these technologies are so deeply embedded in social interactions and are used with others and around others, we often forget these ‘onlookers’ and we’re not designing with them in mind.”
…
Moreover, involving nonusers is especially key in developing more equitable tech products and creating more inclusive experiences. “That’s one of the points why previous AR iterations may not have worked—they designed it for the individual and not for the people surrounding them,” says Chung. She adds that a mindset shift is needed to actively make tech that doesn’t exclude people, which could lead to social systems that promote engagement and foster a sense of belonging for everyone.
…
Caballar’s September 25, 2023 article also appears in the January 2024 print version of the IEEE Spectrum with the title “AR Glasses Upset the Social Dynamic.”
I received an April 5, 2023 announcement for the 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) via email. Understandably given that it’s an Institute of Electrical and Electronics Engineers (IEEE) conference, they’re looking for submissions focused on developing the technology,
Last days to submit your contribution to our Special Session on “eXtended Reality as a gateway to the Metaverse: Practices, Theories, Technologies and Applications” – IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) – October 25-27, 2023 – Milan – https://metroxraine.org/special-session-17.
I want to remind you that the deadline of April 7 [2023] [extended to April 14, 2023 as per April 11, 2023 notice received via email] is for the submission of a 1-2 page Abstract or a Graphical Abstract to show the idea you are proposing. You will have time to finalise your work by the deadline of May 15 [2023].
Please see the CfP below for details and forward it to colleagues who might be interested in contributing to this special session.
I’m looking forward to meeting you, virtually or in your presence, at IEEE MetroXRAINE 2023.
Best regards, Giuseppe Caggianese
Research Scientist National Research Council (CNR) [Italy] Institute for High-Performance Computing and Networking (ICAR) Via Pietro Castellino 111, 80131, Naples, Italy
Here are specifics from the Special Session’s Call for Papers (from the April 5, 2023 email announcement),
Call for Papers – Special Session on: “EXTENDED REALITY AS A GATEWAY TO THE METAVERSE: PRACTICES, THEORIES, TECHNOLOGIES AND APPLICATIONS” https://metroxraine.org/special-session-17
2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) https://metroxraine.org/
October 25-27, 2023 – Milan, Italy.
SPECIAL SESSION DESCRIPTION
————————-
The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The eXtended Reality (XR) term encloses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the metaverse. According to tech companies and venture capitalists, the metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social, and trading. Thus, the metaverse will evolve how users interact with brands, intellectual properties, health services, cultural heritage, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service and then attend a university course precisely the same as in the real world.

The metaverse development will require new software architecture that will enable decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design an efficient data management system and prevent privacy violations. Finally, the convergence of physical reality, virtually enhanced, and an always-on virtual space has highlighted the need to rethink the actual paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions.
This special session aims to focus on exploring how the realization of the metaverse can transform certain application domains such as: (i) healthcare, in which metaverse solutions can, for instance, improve the communication between patients and physicians; (ii) cultural heritage, with potentially more effective solutions for tourism guidance, site maintenance, and heritage object conservation; and (iii) industry, where it can enable data-driven decision making, smart maintenance, and overall asset optimisation.
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for metaverse
Decentralized and Collaborative Architectures for metaverse
Interoperability for metaverse
Tools to help creators to build the metaverse
Operations and Maintenance in metaverse
Data security and privacy mechanisms for metaverse
Cryptocurrency, token, NFT Solutions for metaverse
Fraud-Detection in metaverse
Cyber Security for metaverse
Data Analytics to Identify Malicious Behaviors in metaverse
Blockchain/AI technologies in metaverse
Emerging Technologies and Applications for metaverse
New models to evaluate the impact of the metaverse
Interactive Data Exploration and Presentation in metaverse
Human-Computer Interaction for metaverse
Human factors issues related to metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
IMPORTANT DATES
Abstract Submission Deadline: April 7, 2023 (extended) NOTE: 1-2 pages abstract or a graphical abstract
Full Paper Submission Deadline: May 15, 2023 (extended)
Full Paper Acceptance Notification: June 15, 2023
Final Paper Submission Deadline: July 31, 2023
SUBMISSION AND DECISIONS
————————
Authors should prepare an Abstract (1 – 2 pages) that clearly indicates the originality of the contribution and the relevance of the work. The Abstract should include the title of the paper, names and affiliations of the authors, an abstract, keywords, an introduction describing the nature of the problem, a description of the contribution, the results achieved and their applicability.
When the first review process has been completed, authors receive a notification of either acceptance or rejection of the submission. If the abstract has been accepted, the authors can prepare a full paper. The format for the full paper is identical to the format for the abstract except for the number of pages: the full paper has a required minimum length of five (5) pages and a maximum of six (6) pages. Full Papers will be reviewed by the Technical Program Committee. Authors of accepted full papers must submit the final paper version according to the deadline, register for the workshop, and attend to present their papers. The maximum length for final papers is 6 pages. All contributions will be peer-reviewed and acceptance will be based on quality, originality and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Submissions must be written in English and prepared according to the IEEE Conference Proceedings template. LaTeX and Word templates and an Overleaf sample project can be found at: https://metroxraine.org/initial-author-instructions.
The papers must be submitted in PDF format electronically via the EDAS online submission and review system: https://edas.info/newPaper.php?c=30746. To submit abstracts or draft papers to the special session, please follow the submission instructions for regular sessions, but remember to specify the special session to which the paper is directed.
The special session organizers and other external reviewers will review all submissions.
CONFERENCE PROCEEDINGS
———————————–
All contributions will be peer-reviewed, and acceptance will be based on quality, originality, and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post-publication; more information will be provided soon.
An April 5, 2022 news item on phys.org describes a museum display project designed to enhance learning, Note: Links have been removed,
Hands-on exhibits are staples of science and children’s museums around the world, and kids love them. The exhibits invite children to explore scientific concepts in fun and playful ways.
But do kids actually learn from them? Ideally, museum staff, parents or caregivers are on hand to help guide the children through the exhibits and facilitate learning, but that is not always possible.
Researchers from Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) have demonstrated a more effective way to support learning and increase engagement. They used artificial intelligence to create a new genre of interactive, hands-on exhibits that includes an intelligent, virtual assistant to interact with visitors.
When the researchers compared their intelligent exhibit to a traditional one, they found that the intelligent exhibit increased learning and the time spent at the exhibit.
“Having artificial intelligence and computer vision turned the play into learning,” said Nesra Yannier, HCII faculty member and head of the project, who called the results “purposeful play.”
Earthquake tables are popular exhibits. In a typical example, kids build towers and then watch them tumble on a shaking table. Signs around the exhibit try to engage kids in thinking about science as they play, but it is not clear how well these work or how often they are even read.
Yannier led a team of researchers that built an AI-enhanced earthquake table outfitted with a camera, touchscreen, large display and an intelligent agent, NoRilla, that replaced the signs. NoRilla — a virtual gorilla — interacts with participants, taking them through different challenges and asking questions about why towers did or didn’t fall along the way and helping them make scientific discoveries.
The team — Yannier, Ken Koedinger and Scott Hudson from CMU; Kevin Crowley of the University of Pittsburgh; and Youngwook Do of the Georgia Institute of Technology — tested their intelligent earthquake exhibit at the Carnegie Science Center in Pittsburgh. Elementary-school-aged children attending a summer camp interacted with either the intelligent or traditional exhibit and completed pre- and post-tests as well as surveys to gauge what they learned and how much they enjoyed the experiment. Researchers also observed visitors interacting with the exhibit during regular hours.
The pre- and post-tests and surveys revealed that children learned significantly more from the AI-enhanced intelligent science exhibit compared to the traditional exhibit while having just as much fun. A surprising result was that even though children were doing more building in the traditional exhibit, their building skills did not improve at all, as they mostly engaged in random tweaking rather than understanding the underlying concepts. The AI-enhanced exhibit not only helped children understand the [underlying] scientific concepts better but also transferred to better building and engineering skills as well.
Their experiment at the Science Center also showed that people spent about six minutes at the intelligent exhibit, four times the 90-second average of the traditional one.
“What’s particularly impressive to me is how the system engages kids in doing real scientific experimentation and thinking,” said Koedinger, a professor in HCII, “The kids not only get it, they also have more fun than with usual exhibits even though more thinking is required.”
Parents of children who experienced the exhibit said it was more interactive, directed and instructional and offered two-way communication compared to other exhibits. They also commented that “it employs inquiry learning, which is the heart of how kids learn, but is also a play model, so it does not seem like a learning activity.”
“Our exhibit automated the guidance and support that make hands-on physical experimentation a valuable learning experience,” Yannier said. “In museums, parents may not have the relevant knowledge to help their children, and staff may not always be available. Using AI and computer vision, we can offer this experience to more children of different backgrounds and at a wider scale.”
The team’s research started at the Children’s Museum of Pittsburgh, where they tested the design of their intelligent exhibit and made improvements based on feedback from people who interacted with it.
“This research will have lasting implications for future exhibit experiences at the Science Center,” said Jason Brown, the Henry Buhl Jr. director of the Carnegie Science Center. “Creating hands-on fun and inspirational exhibit experiences that scaffold science, technology, engineering or mathematics learning and discovery is what positions us as one of the most unique museums in the region.”
The team recently published its findings in the Journal of the Learning Sciences. The intelligent science exhibit remains at the Carnegie Science Center as a long-term exhibit. It is also at the Children’s Museum of Atlanta and will soon be at the Please Touch Museum in Philadelphia and the Children’s Discovery Museum of San Jose in California.
“The Children’s Museum of Atlanta is enjoying being a part of this research study. As we have observed the NoRilla in action, we see high levels of ‘stay time’ for children and adults as they work to meet the challenges through the combination of hands-on activities with computer-based challenges,” said Karen Kelly, the director of exhibits and education at the Atlanta museum. “We love that this experience aligns with our mission of sparking every child’s imagination, sense of discovery and learning through the power of play.”
The CMU team is already working on creating other intelligent science exhibits using computer vision and AI to teach different scientific topics. Future projects include an exhibit with ramps and one with a balance scale.
Yannier stressed that this technology will not only enhance lessons in a museum, but could also assist students learning in the classroom or at home.
Presumably the researchers obtained consent from the children’s parents to observe and track how they were playing and learning. (On skimming through the paper, I didn’t see a formal discussion of methodology or consent.)
As well, there doesn’t seem to be any mention about the impact that being part of a study might have on the participants’ outcomes.
I’m not arguing with the researchers’ conclusions. It makes sense that it’s engaging and educational to play on an earthquake table while a virtual gorilla poses various challenges to attempts at rebuilding in the aftermath. My problem is that the research seems designed to prove a foregone conclusion without any critical analysis.
As noted in the headline for this post, I have two items. For anyone unfamiliar with XR and the other (AR, MR, and VR) realities, I found a good description which I placed in my October 22, 2021 posting (scroll down to the “How many realities are there?” subhead about 70% of the way down).
eXtended Reality in Rome
I got an invitation (via a February 24, 2022 email) to participate in a special session at one of the 2022 IEEE (Institute of Electrical and Electronics Engineers) conferences (more about the conference later).
The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The eXtended Reality (XR) term encloses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the Metaverse. According to tech companies and venture capitalists, the Metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the Metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social, and trading. Thus, the Metaverse will evolve how users interact with brands, intellectual properties, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service and then attend a university course precisely the same as in the real world.
The Metaverse development will require new software architecture that will enable decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design an efficient data management system and prevent privacy violations. Finally, the convergence of physical reality, virtually enhanced, and an always-on virtual space has highlighted the need to rethink the actual paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions.
TOPICS
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for Metaverse
Decentralized and Collaborative Architectures for Metaverse
Interoperability for Metaverse
Tools to help creators to build the Metaverse
Operations and Maintenance in Metaverse
Data security and privacy mechanisms for Metaverse
Cryptocurrency, token, NFT Solutions for Metaverse
Fraud-Detection in Metaverse
Cyber Security for Metaverse
Data Analytics to Identify Malicious Behaviors in Metaverse
Blockchain/AI technologies in Metaverse
Emerging Technologies and Applications for Metaverse
New models to evaluate the impact of the Metaverse
Interactive Data Exploration and Presentation in Metaverse
Human factors issues related to Metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
ABOUT THE ORGANIZERS
Giuseppe Caggianese is a Research Scientist at the National Research Council of Italy. He received the Laurea degree in computer science magna cum laude in 2010 and the Ph.D. degree in Methods and Technologies for Environmental Monitoring in 2013 from the University of Basilicata, Italy.
His research activities are focused on the field of Human-Computer Interaction (HCI) and Artificial Intelligence (AI) to design and test advanced interfaces adaptive to specific uses and users in both augmented and virtual reality. He authored more than 30 scientific papers published in international journals, conference proceedings, and books. He also serves on program committees of several international conferences and workshops.
Ugo Erra is an Assistant Professor (qualified as Associate Professor) at the University of Basilicata (UNIBAS), Italy. He is the founder of the Computer Graphics Laboratory at the University of Basilicata. He received an MSc/diploma degree in Computer Science from the University of Salerno, Italy, in 2001 and a PhD in Computer Science in 2004.
His research focuses on Real-Time Computer Graphics, Information Visualization, Artificial Intelligence, and Parallel Computing. He has been involved in several research projects; among these, he worked as a research fellow on one project funded by the European Commission and served as principal investigator on four projects funded by Area Science Park, a public national research organization that promotes the development of innovation processes. He has (co-)authored about 14 international journal articles, 45 international conference proceedings, and two book chapters. He supervised four PhD students. He organized the Workshop on Parallel and Distributed Agent-Based Simulations, a satellite Workshop of Euro-Par, from 2013 to 2015. He served more than 20 international conferences as a program committee member and more than ten journals as a referee.
The 2022 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2022) will be an international event mainly aimed at creating a synergy between experts in eXtended Reality, Brain-Computer Interface, and Artificial Intelligence, with special attention to measurement [i.e., metrology].
The conference will be a unique opportunity for discussion among scientists, technologists, and companies on very specific sectors in order to increase the visibility and the scientific impact for the participants. The organizing formula will be original owing to the emphasis on the interaction between the participants to exchange ideas and material useful for their research activities.
MetroXRAINE will be configured as a synergistic collection of sessions organized by the individual members of the Scientific Committee. Round tables will be held for different projects and hot research topics. Moreover, we will have demo sessions, student contests, interactive company expositions, awards, and so on.
The Conference will be a hybrid conference [emphasis mine], with the possibility of attendance remotely or in presence.
CALL FOR PAPERS
The Program Committee is inviting submissions of Abstracts (1 – 2 pages) for the IEEE MetroXRAINE 2022 Conference, 26-28 October, 2022.
All contributions will be peer-reviewed and acceptance will be based on quality, originality and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post publication.
…
Abstract Submission Deadline:
March 28, 2022
Full Paper Submission Deadline:
May 10, 2022
Extended Abstract Acceptance Notification:
June 10, 2022
Final Paper Submission Deadline:
July 30, 2022
According to the email invitation, “IEEE MetroXRAINE 2022 … will be held on October 26-28, 2022 in Rome.” You can find more details on the conference website.
Council of Canadian Academies launches four projects
This too is from an email. From the Council of Canadian Academies (CCA) announcement received February 27, 2022 (you can find the original February 17, 2022 CCA news release here),
The Council of Canadian Academies (CCA) is pleased to announce it will undertake four new assessments beginning this spring:
Gene-edited Organisms for Pest Control Advances in gene editing tools and technologies have made the process of changing an organism’s genome more efficient, opening up a range of potential applications. One such application is in pest control. By editing the genomes of organisms and introducing them to wild populations, it’s now possible to control insect-borne disease and invasive species, or reverse insecticide resistance in pests. But the full implications of using these methods remain uncertain.
This assessment will examine the scientific, bioethical, and regulatory challenges associated with the use of gene-edited organisms and technologies for pest control.
Sponsor: Health Canada’s Pest Management Regulatory Agency
The Future of Arctic and Northern Research in Canada The Arctic is undergoing unprecedented changes, spurred in large part by climate change and globalization. Record levels of sea ice loss are expected to lead to increased trade through the Northwest Passage. Ocean warming and changes to the tundra will transform marine and terrestrial ecosystems, while permafrost thaw will have significant effects on infrastructure and the release of greenhouse gases. As a result of these trends, Northern communities, and Canada as an Arctic and maritime country, are facing profound economic, social, and ecosystem impacts.
This assessment will examine the key foundational elements to create an inclusive, collaborative, effective, and world-class Arctic and northern science system in Canada.
Sponsor: A consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet
Quantum Technologies Quantum technologies will affect all sectors of the Canadian economy. Built on the principles of quantum physics, these emerging technologies present significant opportunities in the areas of sensing and metrology, computation and communication, and data science and artificial intelligence, among others. But there is also the potential they could be used to facilitate cyberattacks, putting financial systems, utility grids, infrastructure, personal privacy, and national security at risk. A comprehensive exploration of the capabilities and potential vulnerabilities of these technologies will help to inform their future deployment across society and the economy.
This assessment will examine the impacts, opportunities, and challenges quantum technologies present for industry, governments, and people in Canada.
Sponsor: National Research Council Canada and Innovation, Science and Economic Development Canada
International Science and Technology Partnership Opportunities International partnerships focused on science, technology, and innovation can provide Canada with an opportunity to advance the state of knowledge in areas of national importance, help address global challenges, and contribute to UN Sustainable Development Goals. Canadian companies could also benefit from global partnerships to access new and emerging markets.
While there are numerous opportunities for international collaborations, Canada has finite resources to support them. Potential partnerships need to be evaluated not just on strengths in areas such as science, technology, and innovation, but also political and economic factors.
This assessment will examine how public, private, and academic organizations can evaluate and prioritize science and technology partnership opportunities with other countries to achieve key national objectives.
Sponsor: Global Affairs Canada
Gene-edited Organisms for Pest Control and International Science and Technology Partnership Opportunities are funded by Innovation, Science and Economic Development Canada (ISED). Quantum Technologies is funded by the National Research Council of Canada (NRC) and ISED, and the Future of Arctic and Northern Research in Canada is funded by a consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet. The reports will be released in 2023-24.
Multidisciplinary expert panels will be appointed in the coming months for all four assessments.
You can find in-progress and completed CCA reports here.
Fingers crossed that the CCA looks a little further afield for their international experts than the US, UK, Australia, New Zealand, and northern Europe.
Finally, I’m guessing that the gene-editing and pest management report will cover and, gingerly, recommend germline editing (which is currently not allowed in Canada) and gene drives too.
It will be interesting to see who’s on that committee. If you’re really interested in the report topic, you may want to check out my April 26, 2019 posting and scroll down to the “Criminal ban on human gene-editing of inheritable cells (in Canada)” subhead where I examined what seemed to be an informal attempt to persuade policy makers to allow germline editing or gene-editing of inheritable cells in Canada.
The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1993 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
…
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
…
Who is Nick Pringle and how accurate are his predictions?
… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …
I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.
…
I emphasized “These metaverses …” in the previous section to highlight how confusing I find the use of ‘metaverses’ vs. ‘worlds’, as the words are sometimes used as synonyms and sometimes to mark a distinction. We all make these shifts in all sorts of conversations, but for an outsider to a particular occupational group or subculture, they can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
…
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
…
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
…
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
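That “permanent receipt” idea can be sketched as a toy ownership registry. This is a plain Python illustration under loose assumptions, not a real blockchain or any actual NFT standard (names like `TokenRegistry` and `shirt-42` are invented for the example):

```python
# Toy sketch of NFT-style ownership records: a shared registry that
# any platform could consult to verify who owns a virtual good.
# Not a blockchain; just illustrates the "receipt" bookkeeping.

class TokenRegistry:
    def __init__(self):
        self._owners = {}   # token_id -> current owner
        self._history = []  # append-only transfer log (the "receipts")

    def mint(self, token_id, owner):
        # Create a new token and record its first owner.
        if token_id in self._owners:
            raise ValueError("token already exists")
        self._owners[token_id] = owner
        self._history.append(("mint", token_id, None, owner))

    def transfer(self, token_id, sender, recipient):
        # Only the current owner may transfer the token.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient
        self._history.append(("transfer", token_id, sender, recipient))

    def owner_of(self, token_id):
        return self._owners.get(token_id)

# A shirt bought on "Platform A" can be verified by "Platforms B to Z"
# because they all consult the same record.
registry = TokenRegistry()
registry.mint("shirt-42", "alice")
registry.transfer("shirt-42", "alice", "bob")
print(registry.owner_of("shirt-42"))  # bob
```

The interoperability promise rests entirely on platforms agreeing to honour the same shared ledger, which is the open question the article gestures at.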
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
…
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
…
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
…
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entails. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
…
If you have time, do read Heath’s article in its entirety.
An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,
…
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
…
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
…
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,
…
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both.
What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Data
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly? How and when should users’ consent be collected? Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? What data sharing arrangements need to be put in place and how will these be implemented?
…
There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.
I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but the content can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time Studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
…
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].
…
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
…
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
…
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something that clearly explicates the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Multiverses
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.
…
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
…
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or ‘the metaverse’, effectively replacing the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
…
D. (Devindra) Hardawar’s October 28, 2021 article provides many more details, and I recommend reading it in its entirety.