Just when I thought I was almost caught up, I found this. The study I will be highlighting is from August 2023 but there are interesting developments all the way into October 2023 and beyond. First, the latest in AI (artificial intelligence) devices from an October 5, 2023 article by Lucas Arender for the Daily Hive, which describes the devices as AI wearables (you could also call them wearable technology), Note: Links have been removed,
Rewind.ai launched Pendant, a necklace that records your conversations and transfers them to your smartphone, creating an audio database (of sorts) for your life.
Meta unveiled a pair of Ray-Ban smart glasses that include an AI chatbot that users can communicate with (which might make you look like you’re talking to yourself).
Sam Altman-backed startup Humane teased its new AI pin at Paris Fashion Week— a screenless lapel device that projects a smartphone-like interface onto users’ hands.
Microsoft filed a patent for an AI backpack that features GPS, voice command, and cameras that could… help us walk in the right direction?
The second item in the list ‘Ray-Ban Meta Smart Glasses’ is further described in an October 17, 2023 article by Sarah Bartnicka for the Daily Hive, Note: A link has been removed,
It’s a glorious day for tech dads everywhere: Meta and Ray-Ban smart glasses are officially for sale in Canada.
Driving the news: Meta has become the latest billion-dollar company to officially enter the smart glasses market with the second iteration [emphasis mine] of its design with Ray-Bans, now including a built-in Meta AI assistant, hands-free live streaming features, and a personal audio system.
…
This time around, the technology is better, and both Meta and Snap are pitching their smart glasses as a tool for creators to stay connected with their audiences rather than just a sleek piece of hardware that can blend your digital and physical realities [augmented or extended reality?].
…
Yes, but: As smart glasses creep back into the limelight, people are wary about wearing cameras on their faces. Concerns about always-on cameras and microphones that allow users to record their surroundings without the consent of others will likely stick around. [emphasis mine]
So, are these AI or smart or augmented reality (AR) glasses? In my October 22, 2021 post, I explored a number of realities in the context of the metaverse. Yes, it gets confusing. At any rate, I found these definitions,
Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
“Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.”
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
…
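For anyone who prefers things spelled out, here is the same taxonomy restated as a tiny Python structure. This is purely illustrative; the one-line descriptions paraphrase the essay's definitions and nothing more.

```python
from enum import Enum

class Reality(Enum):
    """The VR/AR/MR/XR taxonomy from the essay, paraphrased as one-liners."""
    VR = "immerses the user in a completely virtual environment"
    AR = "overlays virtual content that cannot interact with the environment"
    MR = "virtual objects that can interact with the physical environment"
    XR = "umbrella term covering VR, AR, and MR"

# Print the glossary.
for reality in Reality:
    print(f"{reality.name}: {reality.value}")
```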
This may change over time but, for now, in answer to the question “AI or smart or augmented reality (AR) glasses?” you can say any or all three. Now, here’s more about the August 2023 study I mentioned at the start of this post, from Cornell University,
Someone wearing augmented reality (AR) or “smart” glasses could be Googling your face, turning you into a cat or recording your conversation – and that creates a major power imbalance, said Cornell researchers.
Currently, most work on AR glasses focuses primarily on the experience of the wearer. Researchers from the Cornell Ann S. Bowers College of Computing and Information Science and Brown University teamed up to explore how this technology affects interactions between the wearer and another person. Their explorations showed that, while the device generally made the wearer less anxious, things weren’t so rosy on the other side of the glasses.
Jenny Fu, a doctoral student in the field of information science, presented the findings in a new study, “Negotiating Dyadic Interactions through the Lens of Augmented Reality Glasses,” at the 2023 ACM Designing Interactive Systems Conference in July.
AR glasses superimpose virtual objects and text over the field of view to create a mixed-reality world for the user. Some designs are big and bulky, but as AR technology advances, smart glasses are becoming indistinguishable from regular glasses, raising concerns that a wearer could be secretly recording someone or even generating deepfakes with their likeness.
For the new study, Fu and co-author Malte Jung, associate professor of information science and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow, worked with Ji Won Chung, a doctoral student, and Jeff Huang, associate professor of computer science, both at Brown, and Zachary Deocadiz-Smith, an independent extended reality designer.
They observed five pairs of individuals – a wearer and a non-wearer – as each pair discussed a desert survival activity. The wearer received Spectacles, an AR glasses prototype on loan from Snap Inc., the company behind Snapchat. The Spectacles look like avant-garde sunglasses and, for the study, came equipped with a video camera and five custom filters that transformed the non-wearer into a deer, cat, bear, clown or pig-bunny.
Following the activity, the pairs engaged in a participatory design session where they discussed how AR glasses could be improved, both for the wearer and the non-wearer. The participants were also interviewed and asked to reflect on their experiences.
According to the wearers, the fun filters reduced their anxiety and put them at ease during the exercise. The non-wearers, however, reported feeling disempowered because they didn’t know what was happening on the other side of the lenses. They were also upset that the filters robbed them of control over their own appearance. The possibility that the wearer could be secretly recording them without consent – especially when they didn’t know what they looked like – also put the non-wearers at a disadvantage.
The non-wearers weren’t completely powerless, however. A few demanded to know what the wearer was seeing, and moved their faces or bodies to evade the filters – giving them some control in negotiating their presence in the invisible mixed-reality world. “I think that’s the biggest takeaway I have from this study: I’m more powerful than I thought I was,” Fu said.
Another issue is that, like many AR glasses, Spectacles have darkened lenses so the wearer can see the projected virtual images. This lack of transparency also degraded the quality of the social interaction, the researchers reported.
“There is no direct eye contact, which makes people very confused, because they don’t know where the person is looking,” Fu said. “That makes their experiences of this conversation less pleasant, because the glasses blocked out all these nonverbal interactions.”
To create more positive experiences for people on both sides of the lenses, the study participants proposed that smart glasses designers add a projection display and a recording indicator light, so people nearby will know what the wearer is seeing and recording.
Fu also suggests designers test out their glasses in a social environment and hold a participatory design process like the one in their study. Additionally, they should consider these video interactions as a data source, she said.
That way, non-wearers can have a voice in the creation of the impending mixed-reality world.
Rina Diane Caballar’s September 25, 2023 article for IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine provides a few more insights about the research, Note: Links have been removed,
…
“This AR filter interaction is likely to happen in the future with the commercial emergence of AR glasses,” says Jenny Fu, a doctoral student at Cornell University’s Bowers College of Computing and Information Science and one of the two lead authors of the study. “How will that look like, and what are the social and emotional consequences of interacting and communicating through AR glasses?”
…
“When we think about design in HCI [human-computer interaction], there is often a tendency to focus on the primary user and design just for them,” Jung says. “Because these technologies are so deeply embedded in social interactions and are used with others and around others, we often forget these ‘onlookers’ and we’re not designing with them in mind.”
…
Moreover, involving nonusers is especially key in developing more equitable tech products and creating more inclusive experiences. “That’s one of the points why previous AR iterations may not have worked—they designed it for the individual and not for the people surrounding them,” says Chung. She adds that a mindset shift is needed to actively make tech that doesn’t exclude people, which could lead to social systems that promote engagement and foster a sense of belonging for everyone.
…
Caballar’s September 25, 2023 article also appears in the January 2024 print version of the IEEE Spectrum with the title “AR Glasses Upset the Social Dynamic.”
I wonder if Vancouver’s Mayor Ken Sim will be joining the folks at the giant culture/tech event known as South by Southwest® (SxSW) later in 2024. Our peripatetic mayor seems to enjoy traveling: to sports events (the 2022 FIFA World Cup in Qatar), to Los Angeles to convince the producers of a hit television series, “The Last of Us,” to film the second season in Vancouver, and to Austin, Texas, for SxSW 2023. Note: FIFA is Fédération internationale de football association or ‘International Association Football Federation’.
It’s not entirely clear why Mayor Sim’s presence was necessary at any of these events. In October 2023, he finished his first year in office. A business owner and accountant, Sim is best known for his home care business, “Nurse Next Door,” and his bagel business, “Rosemary Rocksalt,” meaning he wouldn’t seem to have much relevant experience with sports or film events.
I gather Mayor Sim’s presence was part of the 2023 hype (for those who don’t know, ‘hype’ comes from ‘hyperbole’) where SxSW was concerned. From the Vancouver Day at SxSW 2023 event page,
Vancouver Day
Past (03/12/2023) 12:00 PM – 6:00 PM
FREE W/ RSVP | ALL AGES
Swan Dive
The momentum and vibrancy of Vancouver’s innovation industry can’t be stopped!
The full day event will see the Canadian city’s premier technology innovators, creative tech industries, and musical artists show why Vancouver is consistently voted one of the most desirable places to live in the world.
We will have talks/panels with the biggest names in VR/AR/Metaverse, AI, Web3, premier technology innovators, top startups, investors and global thought-leaders. We will keep Canada House buzzing throughout the day with activations/demos from top companies from Vancouver; drawing on our unique culture of wellness and adventure, we will keep guests entertained, and giveaways will take place across the afternoon.
The Canadian city is showing why Vancouver has become the second largest AR/VR/Metaverse ecosystem globally (with the highest concentration of 3D talent than anywhere in the world), a leader in Web3 with companies like Dapper Labs leading the way and becoming a hotbed in technology like artificial intelligence.
The Frontier Collective’s Vancouver’s Takeover of SXSW is a signature event that will enhance Vancouver as the Innovation and Creative Tech leader on the world stage. It is an opportunity for the global community to encounter cutting-edge ideas, network with other professionals who share a similar appetite for a forward focused experience and define their next steps.
Some of our special guests include City of Vancouver Mayor Ken Sim [emphasis mine], Innovation Commissioner of the Government of BC, Gerri Sinclair, Amy Peck of Endeavor XR, Tony Parisi of Lamina1 and many more.
In the evening, guests can expect a special VIP event with first-class musical acts, installations, wellness activations and drinks, and the chance to mingle with investors, top brands, and top business leaders from around the world.
To round out the event, a hand-picked roster of Vancouver musicians will keep guests dancing late into the night.
This is from Mayor Sim’s Twitter (now X) feed, Note: The photographs have not been included,
Mayor Ken Sim @KenSimCity: Another successful day at #SXSW2023 showcasing Vancouver and British Columbia while connecting with creators, innovators, and entrepreneurs from around the world! #vanpoli #SXSW
2024 hype at SxSW and Vancouver’s Frontier Collective
New year and same hype but no Mayor Sim? From a January 22, 2024 article by Daniel Chai for the Daily Hive, Note: A link has been removed,
Frontier Collective, a coalition of Vancouver business leaders, culture entrepreneurs, and community builders, is returning to the South by Southwest (SXSW) Conference next month to showcase the city’s tech innovation on the global stage.
The first organization to formally represent and promote the region’s fastest-growing tech industries, Frontier Collective is hosting the Vancouver Takeover: Frontiers of Innovation from March 8 to 12 [2024].
According to Dan Burgar, CEO and co-founder of Frontier Collective, the showcase is not just about presenting new advancements but is also an invitation to the world to be part of a boundary-transcending journey.
…
“This year’s Vancouver Takeover is more than an event; it’s a beacon for the brightest minds and a celebration of the limitless possibilities that emerge when we dare to innovate together.”
…
Speakers lined up for the SXSW Vancouver Takeover in Austin, Texas, include executives from Google, Warner Bros, Amazon, JP Morgan, LG, NTT, Newlab, and the Wall Street Journal.
…
“The Frontier Collective is excited to showcase a new era of technological innovation at SXSW 2024, building on the success of last year’s Takeover,” added Natasha Jaswal, VP of operations and events of Frontier Collective, in a statement. “Beyond creating a captivating event, its intentional and curated programming provides a great opportunity for local companies to gain exposure on an international stage, positioning Vancouver as a global powerhouse in frontier tech innovation.”
Join us for a curated experience of music, art, frontier technologies and provocative panel discussions. We are organizing three major events, designed to ignite conversation and turn ideas into action.
We’re excited to bring together leaders from Vancouver and around the world to generate creative thinking at the biggest tech festival.
Let’s create the future together!
You have a choice of two parties and a daylong event. Enjoy!
Who is the Frontier Collective?
The group announced itself in 2022, from a February 17, 2022 article in techcouver, Note: Links have been removed,
The Frontier Collective is the first organization to formally represent and advance the interests of the region’s fastest-growing industries, including Web3, the metaverse, VR/AR [virtual reality/augmented reality], AI [artificial intelligence], climate tech, and creative industries such as eSports [electronic sports], NFTs [non-fungible tokens], VFX [visual effects], and animation.
Did you know the Vancouver area currently boasts the world’s second largest virtual and augmented reality sector and hosts the globe’s biggest cluster of top VFX, video games and animation studios, as well as the highest concentration of 3D talent?
Did you know NFT technology was created in Vancouver and the city remains a top destination for blockchain and Web3 development?
Frontier Collective’s coalition of young entrepreneurs and business leaders wants to raise awareness of Vancouver’s greatness by promoting the region’s innovative tech industry on the world stage, growing investment and infrastructure for early-stage companies, and attracting diverse talent to Vancouver.
“These technologies move at an exponential pace. With the right investment and support, Vancouver has an immense opportunity to lead the world in frontier tech, ushering in a new wave of transformation, economic prosperity and high-paying jobs. Without backing from governments and leaders, these companies may look elsewhere for more welcoming environments,” said Dan Burgar, Co-founder and Head of the Frontier Collective. Burgar heads the local chapter of the VR/AR Association.
…
Their plan includes the creation of a 100,000-square-foot innovation hub in Vancouver to help incubate startups in Web3, VR/AR, and AI, and to establish the region as a centre for metaverse technology.
…
Frontier Collective’s team includes industry leaders at the Vancouver Economic Commission [emphasis mine; Under Mayor Sim and his majority City Council, the commission has been dissolved; see September 21, 2023 Vancouver Sun article “Vancouver scraps economic commission” by Tiffany Crawford], Collision Conference, Canadian incubator Launch, Invest Vancouver, and the BDC Deep Tech Fund. These leaders continue to develop and support frontier technology in their own organizations and as part of the Collective.
Interestingly, a February 7, 2023 article by the editors of BC Business magazine seems to presage the Vancouver Economic Commission’s demise. Note: Links have been removed,
Last year, tech coalition Frontier Collective announced plans to position Vancouver as Canada’s tech capital by 2030. Specializing in subjects like Web3, the metaverse, VR/AR, AI and animation, it seems to be following through on its ambition, as the group is about to place Vancouver in front of a global audience at SXSW 2023, a major conference and festival celebrating tech, innovation and entertainment.
Taking place in Austin, Texas from March 10-14 [2023], Vancouver Takeover is going to feature speakers, stories and activations, as well as opportunities for companies to connect with industry leaders and investors. Supported by local businesses like YVR Airport, Destination Vancouver, Low Tide Properties and others, Frontier is also working with partners from Trade and Invest BC, Telefilm and the Canadian Consulate. Attendees will spot familiar faces onstage, including the likes of Minister of Jobs, Economic Development and Innovation Brenda Bailey, Vancouver mayor Ken Sim [emphasis mine] and B.C. Innovation Commissioner Gerri Sinclair.
…
That’s right, no mention of the Vancouver Economic Commission.
As for the Frontier Collective Team (accessed January 29, 2024), the list of ‘industry leaders’ (18 people with a gender breakdown that appears to be 10 male and 8 female) and staff members (a Senior VP who appears to be male and the other seven staff members who appear to be female) can be found here. (Should there be a more correct way to do the gender breakdown, please let me know in the Comments.)
I find the group’s name a bit odd; ‘frontier’ is something I associate with the US. Americans talk about frontiers, Canadians not so much.
If you are interested in attending the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event on March 10, 2024, just click here.
Aside: swagger at Vancouver City Hall, economic prosperity, & more?
What follows is not germane to the VR/AR community, SxSW of any year, or the Frontier Collective but it may help to understand why the City of Vancouver’s current mayor is going to events where he would seem to have no useful role to play.
Matt O’Grady’s October 4, 2023 article for Vancouver Magazine offers an eye-opening review of Mayor Ken Sim’s first year in office.
Ken Sim swept to power a year ago promising to reduce waste, make our streets safer and bring Vancouver’s “swagger” back. But can his open-book style win over the critics?
I’m sitting on a couch in the mayor’s third-floor offices, and Ken Sim is walking over to his turntable to put on another record. “How about the Police? I love this album.”
With the opening strains of “Every Breath You Take” crackling to life, Sim is explaining his approach to conflict resolution, and how he takes inspiration from the classic management tome Getting to Yes: Negotiating Agreement Without Giving In.
…
Odd choice for a song to set the tone for an interview. Here’s more about the song and its origins according to the song’s Wikipedia entry,
…
To escape the public eye, Sting retreated to the Caribbean. He started writing the song at Ian Fleming’s writing desk on the Goldeneye estate in Oracabessa, Jamaica.[14] The lyrics are the words of a possessive lover who is watching “every breath you take; every move you make”. Sting recalled:
“I woke up in the middle of the night with that line in my head, sat down at the piano and had written it in half an hour. The tune itself is generic, an aggregate of hundreds of others, but the words are interesting. It sounds like a comforting love song. I didn’t realise at the time how sinister it is. I think I was thinking of Big Brother, surveillance and control.”[15][emphasis mine]
Suddenly, the office door swings open and Sim’s chief of staff, Trevor Ford, pokes his head in (for the third time in the past 10 minutes). “We have to go. Now.”
“Okay, okay,” says Sim, turning back to address me. “Do you mind if I change while we’re talking?” And so the door closes again—and, without further ado, the Mayor of Vancouver drops trou [emphasis mine] and goes in search of a pair of shorts, continuing with a story about how some of his west-side friends are vocally against the massive Jericho Lands development promising to reshape their 4th and Alma neighbourhood.
“And I’m like, ‘Let me be very clear: I 100-percent support it, this is why—and we’ll have to agree to disagree,’” he says, trading his baby-blue polo for a fitted charcoal grey T-shirt. Meanwhile, as Sim does his wardrobe change, I’m doing everything I can to keep my eyes on my keyboard—and hoping the mayor finds his missing shorts.
It’s fair to assume that previous mayors weren’t in the habit of getting naked in front of journalists. At least, I can’t quite picture Kennedy Stewart doing so, or Larry or Gordon Campbell either.
But it also fits a pattern that’s developing with Ken Sim as a leader entirely comfortable in his own skin. He’s in a hurry to accomplish big things—no matter who’s watching and what they might say (or write). And he eagerly embraces the idea of bringing Vancouver’s “swagger” back—outlined in his inaugural State of the City address, and underlined when he shotgunned a beer at July’s [2023] Khatsahlano Street Party.
…
O’Grady’s October 4, 2023 article goes on to mention some of the more practical initiatives undertaken by Mayor Sim and his supermajority of ABC (Sim’s party, A Better City) city councillors in their efforts to deal with some of the city’s longstanding and intractable problems,
For a reminder of Sim’s key priorities, you need only look at the whiteboard in the mayor’s office. At the top, there’s a row labelled “Daily Focus (Top 4)”—which are, in order, 3-3-3-1 (ABC’s housing program); Chinatown; Business Advocacy; and Mental Health/Safety.
On some files, like Chinatown, there have been clear advances: council unanimously approved the Uplifting Chinatown Action Plan in January, which devotes more resources to cleaning and sanitation services, graffiti removal, beautification and other community supports. The plan also includes a new flat rate of $2 per hour for parking meters throughout Chinatown (to encourage more people to visit and shop in the area) and a new satellite City Hall office, to improve representation. And on mental health and public safety, the ABC council moved quickly in November to take action on its promise to fund 100 new police officers and 100 new mental health professionals [emphasis mine]—though the actual hiring will take time.
…
O’Grady likely wrote his article a few months before its October 2023 publication date (a standard practice for magazine articles), which may explain why he didn’t mention this, from an October 10, 2023 article by Michelle Gamage and Jen St. Denis for The Tyee,
100 Cops, Not Even 10 Nurses
…
One year after Mayor Ken Sim and the ABC party swept into power on a promise to hire 100 cops and 100 mental health nurses to address fears about crime and safety in Vancouver, only part of that campaign pledge has been fulfilled.
At a police board meeting in September, Chief Adam Palmer announced that 100 new police officers have now joined the Vancouver Police Department.
But just 9.5 full-time equivalent positions have been filled to support the mental health [emphasis mine] side of the promise.
In fact, Vancouver Coastal Health says it’s no longer aiming [emphasis mine] to hire 100 nurses. Instead, it’s aiming for 58 staff and specialists [emphasis mine], including social workers, community liaison workers and peers, as well as other disciplines alongside nurses to deliver care.
…
At the police board meeting on Sept. 21 [2023], Palmer said the VPD has had no trouble recruiting new police officers and has now hired 70 new recruits who are first-time officers, as well as at least 24 experienced officers from other police services.
…
In contrast, it’s been a struggle for VCH to recruit nurses specializing in mental health.
BC Nurses’ Union president Adriane Gear said she remembers wondering where Sim was planning on finding 100 nurses [emphasis mine] when he first made the campaign pledge. In B.C. there are around 5,000 full-time nursing vacancies, she said. Specialized nurses are an even more “finite resource,” she added.
…
I haven’t seen any information as to why the number was reduced from 100 mental health positions to 58. I’m also curious as to how Mayor Ken Sim, whose business is called ‘Nurse Next Door,’ doesn’t seem to know there’s a shortage of nurses in the province and elsewhere.
In 2022, the World Economic Forum, in collaboration with Quartz, published a January 28, 2022 article by Aurora Almendral about the worldwide nursing shortage and the effects of the COVID-19 pandemic,
…
The report’s [from the International Council of Nurses (ICN)] survey of nurse associations around the world painted a grim picture of strained workforce. In Spain, nurses reported a chronic lack of PPE, and 30% caught covid. In Canada, 52% of nurses reported inadequate staffing, and 47% met the diagnostic cut-off for potential PTSD [emphasis mine].
Burnout plagued nurses around the world: 40% in Uganda, 60% in Belgium, and 63% in the US. In Oman, 38% nurses said they were depressed, and 73% had trouble sleeping. Fifty-seven percent of UK nurses planned to leave their jobs in 2021, up from 36% in 2020. Thirty-eight percent of nurses in Lebanon did not want to be nurses anymore, but stayed in their jobs because their families needed the money.
In Australia, 17% of nurses had sought mental health support. In China, 6.5% of nurses reported suicidal thoughts.
…
Moving on from Mayor Sim’s odd display of ignorance (or was it cynical calculation from a candidate determined to win over a more centrist voting population?), O’Grady’s October 4, 2023 article ends on this note,
…
When Sim runs for reelection in 2026, as he promises to do, he’ll have a great backdrop for his campaign—the city having just hosted several games for the FIFA World Cup, which is expected to bring in $1 billion and 900,000 visitors over five years.
The renewed swagger of Sim’s city will be on full display for the world to see. So too—if left unresolved—will some of Vancouver’s most glaring and intractable social problems.
I was born in Vancouver and don’t recall the city as having swagger, at any time. As for the economic prosperity that’s always promised with big events like the FIFA World Cup, I’d like to see how much the 2010 Olympic Games held in Vancouver cost taxpayers and whether or not there were long-lasting economic benefits. From a July 9, 2022 posting on Bob Mackin’s thebreaker.news,
…
The all-in cost to build and operate the Vancouver 2010 Games was as much as $8 billion, but the B.C. Auditor General never conducted a final report. The organizing committee, VANOC, was not covered by the freedom of information law and its records were transferred to the Vancouver Archives after the Games with restrictions not to open the board minutes and financial ledgers before fall 2025.
Mayor Sim will have two more big opportunities to show off his swagger in 2025: (1) the Invictus Games come to Vancouver and Whistler in February 2025 and will likely bring Prince Harry and Meghan Markle, the Duchess of Sussex, to the area (see the April 22, 2022 Associated Press article by Gemma Karstens-Smith on the Canadian Broadcasting Corporation website) and (2) the 2025 Junos (the Canadian equivalent to the Grammys) run from March 26 – 30, 2025, with the awards show being held on March 30, 2025 (see the January 25, 2024 article by Daniel Chai for the Daily Hive website).
While he waits, Sim may have a ‘swagger’ opportunity later this month (February 2024) when Prince Harry and the Duchess of Sussex (Meghan Markle) visit for “a three-day Invictus Games’ One Year to Go event in Vancouver and Whistler”; see Daniel Chai’s February 2, 2024 article for more details.
Don’t forget, should you be in Austin, Texas for the 2024 SxSW, the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event is on March 10, 2024; just click here to register. Who knows? You might get to meet Vancouver’s Mayor Ken Sim. Or, if you can’t make it to Austin, Texas, O’Grady’s October 4, 2023 article offers an unusual political profile.
I’m primarily interested in the VR experience and the ‘atoms’ of Swedish artist Hilma af Klint, but first there are the NFTs (non-fungible tokens). From an October 28, 2022 article by Louis Jebb for The Art Newspaper,
More than a century after she completed her chef d’oeuvre—193 abstract canvases known collectively as Paintings for the Temple (1906-15)—Hilma af Klint has emerged this year as a multimedia power player. Her work—graphic, colourful and deeply idiosyncratic—has demonstrated a Van Gogh-like power to generate footfall and has given rise to projects across multiple formats, from books and films to experiences in virtual and augmented reality (VR/AR).
Now, from 14 November [2022], digital versions of all 193 of her Paintings for the Temple, created by Acute Art, will be offered as NFTs in one edition, for sale on Goda (Gallery of Digital Assets), the platform launched earlier this year by the multi-Grammy award-winning philanthropist and recording artist Pharrell Williams. A second edition of the NFTs will remain with Bokförlaget Stolpe, the publishers of the Af Klint catalogue raisonée. The originals belong to the not-for-profit Hilma af Klint Foundation in Sweden.
“Hilma af Klint was an incredible pioneer!” says Pharrell Williams. “It took us a century to fully understand. Now that we do, we need to rewrite art history! Beautiful and meaningful art truly transcends time, and Hilma af Klint’s work is a perfect example of that. We’re honoured to show her work on this platform and to truly celebrate a remarkable woman.” For KAWS, who acts as an art adviser on the Goda platform, Af Klint was a visionary. “I find it great that she finally gets the attention she deserves,” KAWS says. “During her lifetime the audience wasn’t ready but today we are. She painted for the future. She painted for us!”
…
VR
Hilma af Klint dreamt of a spiral shaped building to house her most important work, but the idea never materialised. More than a century later, af Klint’s vision has been translated into a VR experience where some of her most important paintings come alive. Hilma af Klint – The Temple is produced in collaboration with [Bokförlaget Stolpe and] Acute Art and premiered at Koko Camden during the Frieze Art Fair 2022. The virtual reality work Hilma af Klint – The Temple is a 12-minute VR experience which includes 193 of Hilma af Klint’s paintings in a format that transcends time and space and makes a significant portion of her artistic output available to the public.
Hilma af Klint – The Temple VR has been on tour since it debuted in 2022, and Elissaveta M. Brandon wrote up her experience of it in New York City in an October 25, 2023 article for Fast Company, Note: Links have been removed,
It is noon on a Tuesday, and I am sitting in a cocktail bar. But instead of a Negroni on my table, there is a VR headset.
The reason for this anomaly dates back to 1915, when the Swedish artist Hilma af Klint completed a series of paintings titled, Paintings for the Temple. The artist died in 1944, but from the 124 notebooks she left behind, we know that she dreamed of housing these paintings in a spiral-shaped building known as the Temple.
That building never materialized in real life, but it has now—in virtual reality.
Af Klint, which The Art Newspaper has described as “the mystic Swedish mother of early-modern abstraction,” is having a bit of a moment. A museum dedicated solely to her work remains to be built, but over the past few years, the artist has been the subject of a sprawling exhibition at the Guggenheim, a biopic, a new biography, a catalogue raisonné (a comprehensive, annotated list of all known works by the artist), an augmented reality “art walk” in London’s Regent’s Park, and now, a virtual reality temple.
The VR experience—I lack the words to describe it in any other way—is titled, Hilma af Klint: The Temple and lasts 12 minutes. It was conceived by the London-based extended-reality studio Acute Art in collaboration with [Bokförlaget] Stolpe Publishing. After various stints at the Tate Modern in London, the Institut Suédois in Paris, and Bozar in Brussels, it has now arrived at the Fotografiska Museum in New York City, where it is on view until November 19 [2023], inside a cocktail bar, which is tucked away behind a door in the museum’s lobby, and fittingly called Chapel Bar.
The artist left behind a large body of abstract work inspired by her spiritual encounters. Her series, Paintings for the Temple, was, in fact, born out of a séance, during which she was asked to take on a more extensive project than her previous work. Paintings for the Temple took 9 years to complete; it took me 12 minutes to explore.
…
Atoms
While the focus is usually on af Klint’s spirituality and her absence from art history, there’s also her interest in science, from Brandon’s October 25, 2023 article,
…, I wonder how af Klint would have felt about her paintings being presented in virtual reality. According to Birnbaum [Daniel Birnbaum, current director and curator of Acute Art], who is the former director of Moderna Museet, Sweden’s museum of modern art in Stockholm, af Klint had a scientific mind. “One wonders what she would have thought of computation and recent inventions, like the blockchain,” he says. Stolpe also points me to the artist’s Atom Series—the atom being a major theme during her lifetime.
The Guggenheim Museum in New York still has material from its 2018 blockbuster Hilma af Klint show available online, including this October 24, 2018 combined audio/transcript article, which includes these tidbits in the transcript,
The Atom Series (1917) by Hilma af Klint
…
Tracey Bashkoff [Director of Collections and Senior Curator at the Guggenheim]: Hilma af Klint is working at a time where the most recent scientific discoveries show that there is a world beyond our observable world, and that things like atoms and sound waves and x-rays and particles exist, that we don’t observe with the naked eye. And so, the question of opening up an invisible world from our physical world, being able to make observations of another dimension of reality, becomes an issue of exploration for af Klint and for many of the thinkers of her time.
Narrator: These works are from The Atom Series, which was executed in 1917. The atom was a major theme in science and society at large during the artist’s lifetime. In the last five years of the 19th century, the accepted understanding of atoms was overturned by the discovery of subatomic particles. At the same time, scientists were making numerous discoveries about electromagnetism, x-rays, radioactive decay, and other phenomena.
The audio file is about two minutes long, and it’s a short transcript.
I’m not sure how I ended up on a National Film Board of Canada (NFB) mailing list, but this morning (August 14, 2023) the latest emailed newsletter provided a thrill. From the August 11, 2023 NFB newsletter,
Montreal premiere: Ask Noam Chomsky anything in this interactive VR experience
CHOM5KY vs CHOMSKY is an interactive virtual reality installation created by Sandra Rodriguez that lets you have a whole conversation with an AI-generated version of public intellectual Noam Chomsky. Be one of the first people to experience it at the NFB Space in Montreal.
Artificial intelligence is everywhere—from the photo enhancer in your smartphone to self-parking cars and the virtual assistant in your kitchen. But what is it exactly?
CHOM5KY vs CHOMSKY: A playful conversation on AI is an engaging and collaborative virtual reality experience that invites us to examine the promises and pitfalls of AI. If machine intelligence is promoted as an inevitable future, we should all be able to ask: What are we hoping to achieve with it? And at what cost?
Visitors use VR headsets to enter the AI world, where they are greeted by CHOM5KY—an artificial entity inspired by and built from the vast array of digital traces of renowned professor Noam Chomsky. CHOM5KY is a friend and serves as a guide, inviting us to peek under the hood of machine-learning systems, and offering thought-provoking takes on how artificial intelligence intersects with human life.
Why Noam Chomsky? [emphasis mine] Professor Chomsky is a philosopher, social critic, historian and political activist, but is perhaps best known for his work in linguistics and cognitive science, the study of the mind. As one of the most recorded and digitized living intellectuals, he has left behind an extensive wake of data traces, enough to create an AI system based on his legacy. Chomsky is also skeptical about the pompous promises made of AI. Which makes him the perfect guide to encourage visitors to question everything they see—and help demystify AI.
Sandra Rodriguez, Ph.D., is a director/producer and sociologist of new media technology. She has written and directed documentary features, web docs and VR, XR and AI experiences that have garnered multiple awards (including a Peabody, best VR awards at DOK Leipzig and the PRIX NUMIX, and the prestigious Golden Nica at the Prix Ars Electronica). She has served as UX lead and consultant for esteemed institutions such as CBC/Radio-Canada and the United Nations. Fascinated by storytelling and emergent technology’s potential for social change, Sandra has created a body of work that spans AI-dance performance, multi-user VR and large-scale XR installations. She is a Sundance Story Lab Fellow and MacArthur Grantee. She is also a Scholar and Lecturer at the Massachusetts Institute of Technology (MIT), where she leads “Hacking XR,” MIT’s first official class in immersive media creation. [Note: XR is Extended Reality]
SCHNELLE BUNTE BILDER Co-producers
The media art collective SCHNELLE BUNTE BILDER was founded in Berlin in 2011. Technically on the cutting edge, their hearts beat for art, culture and science. Together with curators, musicians and other artists, they develop productions for exhibitions and cultural events. Somewhere between art and technology, they combine classical and generative animation with creative coding and critical design to create extraordinary media scenography. Since then, the studio has grown organically and currently consists of a solid core of designers, artists and developers: Michael Burk, Ann-Katrin Krenz, Felix Worseck, Niklas Söder and Johannes Lemke.
Marie-Pier Gauthier Producer
Marie-Pier Gauthier is a producer at the NFB’s Montreal Interactive Studio and has been contributing to this storytelling laboratory for the past 12 years. Whether it’s digital creations on mobile devices or the web, interactive installations, or virtual or augmented reality experiences, she guides and supervises projects by innovative creators working at the crossroads of disciplines, who use a range of storytelling tools, including social networks, code, design, artificial intelligence and conversational robots. Marie-Pier Gauthier has collaborated on more than 100 interactive works (The Enemy, Do Not Track, Way To Go, Motto) that have received over 100 awards in Canada and abroad.
Laurence Dolbec Producer
Laurence Dolbec is a producer in the interactive studio at the National Film Board of Canada. She has more than 12 years of experience working in production, notably for some of Quebec’s most creative institutions, including Place des Arts, TOHU and C2 Montréal. Laurence started her career in New York City working for Livestream, which is now part of Vimeo. Her most recent productions explore the spheres of artificial intelligence and knowledge.
Louis-Richard Tremblay Executive Producer
Louis-Richard Tremblay has been an executive producer with the French Program’s Interactive Studio since 2019. He first stepped into a producer role with the NFB in 2013, after a dozen or so years at CBC/Radio-Canada. Fascinated by the power of interactive experiences and media of all kinds, he has guided numerous international co-productions at the NFB, helped produce dozens of award-winning works in Canada and internationally, and regularly participates in panels, conferences and master classes.
CREDITS
Created by Sandra Rodriguez, CHOM5KY vs. CHOMSKY is a co-production by the National Film Board of Canada and SCHNELLE BUNTE BILDER, with support from the Medienboard Berlin-Brandenburg.
…
Two Oddities: Berlin and Moov AI and tickets for the Canadian premiere
World Premiere in Berlin on 4 Nov 2022 at Berlin Science Week
…
CHOM5KY vs CHOMSKY is a Co-production between the National Film Board of Canada and the Studio SCHNELLE BUNTE BILDER based in Berlin, supported by Medienboard Berlin-Brandenburg.
Starting September 6 [2023] in Montreal. Buy your tickets! Explore the world of artificial intelligence with an engaging and collaborative virtual reality experience by Sandra Rodriguez that examines the promises and pitfalls of AI.
The virtual reality experience takes about 25 minutes, but each timeslot runs 45 minutes, to take into account the introduction and getting set up. Please arrive 5 minutes early.
…
How much does a ticket cost?
We offer a general admission ticket, for $26 + applicable taxes.
I’m not sure why Moov AI (based in Montreal, Canada) doesn’t appear in the most recent credits for the project, but they host a teaser for it as an example of one of their projects,
Replicate Noam Chomsky’s persona
…
Chomsky vs. Chomsky is a virtual reality and artificial intelligence immersive experience that showcases an interaction guided by CHOMSKY_AI, the virtual host built from digital traces of Noam Chomsky.
Sandra Rodriguez is the director of this project, which was realized in collaboration with the NFB, the MIT Open Documentary Lab, Schnellebuntebilder, and Moov AI.
…
Using the digital traces left by Noam Chomsky and archives of his interviews, our team built an AI conversational agent that replicates his personality and cynical humor.
This chatbot is at the heart of the technical solution we developed and deployed to support the experience and link the creative vision of the project’s director to the technical requirements to ensure a fluid and immersive experience.
…
AI to power a chatbot.
After the immense success of the prototype at Sundance 2020, the teams involved in the project are hard at work completing the final phase of production of Chomsky vs. Chomsky.
The project team is equipped with a CHOMSKY_AI conversational device that is true to the director’s artistic vision and allows thousands of people worldwide to chat with the digital doppelgänger of such a significant figure in contemporary history.
What a privilege!
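Neither the NFB nor Moov AI spells out the implementation, but a conversational agent “built from digital traces” typically follows a retrieve-then-respond pattern: index a corpus of transcripts, pull the passages most relevant to a visitor’s question, and condition the reply on them. Here is a minimal, hypothetical Python sketch of that pattern; the three-snippet corpus, the TF-IDF retriever, and the respond() function are my illustrative assumptions, not the project’s actual architecture.

```python
# Minimal retrieve-then-respond sketch over interview transcripts.
# Illustrative assumption only; not the actual CHOM5KY_AI implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: snippets transcribed from public interviews.
corpus = [
    "Language is a core part of human nature.",
    "The term artificial intelligence is largely a marketing device.",
    "We should ask what we hope to achieve with these systems, and at what cost.",
]

vectorizer = TfidfVectorizer().fit(corpus)
corpus_vectors = vectorizer.transform(corpus)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k transcript snippets most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), corpus_vectors)[0]
    ranked = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
    return [corpus[i] for i in ranked[:k]]

def respond(question: str) -> str:
    """Compose a reply grounded in retrieved snippets (a production system
    would hand these to a generative language model instead of echoing them)."""
    return "Drawing on the archive: " + " ".join(retrieve(question))

print(respond("What do you think of artificial intelligence?"))
```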
Canadians and Noam Chomsky
“Manufacturing Consent: Noam Chomsky and the Media” was one of the most successful feature documentaries in Canadian history. From the Manufacturing Consent (film) Wikipedia entry, Note: Links have been removed,
Manufacturing Consent: Noam Chomsky and the Media[1] is a 1992 documentary film that explores the political life and ideas of linguist, intellectual, and political activist Noam Chomsky. Canadian filmmakers Mark Achbar and Peter Wintonick expand the analysis of political economy and mass media presented in Manufacturing Consent, a 1988 book Chomsky wrote with Edward S. Herman.
Funny, provocative and surprisingly accessible, MANUFACTURING CONSENT explores the political life and ideas of world-renowned linguist, intellectual and political activist Noam Chomsky. Through a dynamic collage of biography, archival gems, imaginative graphics and outrageous illustrations, Mark Achbar and Peter Wintonick’s award-winning documentary highlights Chomsky’s probing analysis of mass media and his critique of the forces at work behind the daily news. Available for the first time anywhere on DVD, MANUFACTURING CONSENT features appearances by journalists Bill Moyers and Peter Jennings, pundit William F. Buckley Jr., novelist Tom Wolfe and philosopher Michel Foucault. This Edition features an exclusive ten-years-after video interview with Chomsky.
Let’s clear up a few things. First, as noted in the headline, the Cambridge Festival (March 17 – April 2, 2023) is being held in the UK by the University of Cambridge in the town of Cambridge. Second, the specific festival event featured here is a display put together by students and professors at Anglia Ruskin University (ARU), also in the town of Cambridge; it is part of the festival and will be held over two days, March 31 – April 1, 2023.
Dreams are being turned into reality as new research investigating the unusual experiences of people with depersonalisation symptoms is being brought to life in an art exhibition at Anglia Ruskin University (ARU) in Cambridge, England.
ARU neuroscientist Dr Jane Aspell has led a major international study into depersonalisation, funded by the Bial Foundation. The “Living in a Dream” project, results from which will be published later this year, found that people who experience depersonalisation symptoms sometimes experience life from a very different perspective, both while awake and while dreaming.
Those experiencing depersonalisation often report feeling as though they are not real and that their body does not belong to them. Dr Aspell’s study, which is the first to examine how people with this disorder experience dreams, collected almost 1,000 dream reports from participants.
Now these dreams have been recreated by eight students from ARU’s MA Illustration course and the artwork will go on display for the first time on 31 March and 1 April as part of the Cambridge Festival.
This collaboration between art and science, led by psychologist Matt Gwyther and illustrator Dr Nanette Hoogslag, with the support of artist and creative technologist Emily Godden, has resulted in 12 original artworks, which have been created using the latest audio-visual technologies, including artificial intelligence (AI), and are presented using a mix of audio-visual installation, virtual reality (VR) experiences, and traditional media.
Dr Jane Aspell, Associate Professor of Cognitive Neuroscience at ARU and Head of the Self and Body Lab, said: “People who experience depersonalisation sometimes feel detached from their self and body, and a common complaint is that it’s like they are watching their own life as a film.
“Because their waking reality is so different, myself and my international collaborators – Dr Anna Ciaunica, Professor Bigna Lenggenhager and Dr Jennifer Windt – were keen to investigate how they experience their dreams.
“People who took part in the study completed daily ‘dream diaries’, and it is fabulous to see how these dreams have been recreated by this group of incredibly talented artists.”
Matt Gwyther added: “Dreams are both incredibly visual and surreal, and you lose so much when attempting to put them into words. By bringing them to life as art, it has not only produced fabulous artwork, but it also helps us as scientists better understand the experiences of our research participants.”
Amongst the artists contributing to the exhibition is MA student Jewel Chang, who has recreated a dream about being chased. When the person woke up, they continued to experience it and were unsure whether they were experiencing the dream or reality.
False awakenings and multiple layers of dreams can be confusing, affecting our perception of time and space. Jewel used AI to create an environment with depth and endless moving patterns that makes the visitor feel trapped in their dream, unable to escape.
Kelsey Wu, meanwhile, used special 3D software and cameras to recreate a dream of floating over hills and forests, and losing balance. The immersive piece, with the audience invited to sit on a grass-covered floor, creates a sense of loss of control of the body, which moves in an abnormal and unbalanced way, and evokes a struggle between illusion and reality as the landscape continuously moves.
Dr Nanette Hoogslag, Course Leader for the MA in Illustration at ARU, said: “This project has been a unique challenge, where students not only applied themselves in supporting scientific research, but investigated and used a range of new technologies, including virtual reality and AI-generated imagery. The final pieces are absolutely remarkable, and also slightly unsettling!”
I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on the sense of touch in the virtual world, originally announced in October 2022,
A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.
…
Here’s what you’ll need to wear for this virtual tactile experience,
“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch that can record and playback tactile sensations over space and time.”
In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.
Limitations in existing techniques
Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration on the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations in the skin at the location of the electrode by passing a local electric current though the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, to stimulate the receptors and nerves, which poses a safety concern. Also, the tactile rendering resolution needed to be improved.
The latest electro-tactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage under 30 V, ensuring the tactile rendering is safe and comfortable.
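The news release doesn’t give the exact waveform, so here is only a rough illustration of what a charge-balanced, high-frequency alternating drive signal could look like: equal positive and negative phases so no net charge builds up in the skin, with the peak kept under the 30 V ceiling. The sample rate, frequency, and square shape are my assumptions, not the paper’s published parameters.

```python
import numpy as np

SAMPLE_RATE_HZ = 100_000   # assumed DAC sample rate
FREQ_HZ = 2_500            # assumed alternating-stimulation frequency
AMPLITUDE_V = 25.0         # assumed peak voltage, below the 30 V ceiling

SAMPLES_PER_HALF = SAMPLE_RATE_HZ // (2 * FREQ_HZ)  # samples per half-cycle

def biphasic_burst(n_cycles: int) -> np.ndarray:
    """Square wave alternating between +A and -A; equal-length phases mean
    zero net charge is delivered over each full cycle."""
    half = np.full(SAMPLES_PER_HALF, AMPLITUDE_V)
    cycle = np.concatenate([half, -half])
    return np.tile(cycle, n_cycles)

burst = biphasic_burst(n_cycles=25)  # a 10 ms burst at 2.5 kHz
print(burst.mean())                  # 0.0 -> charge-balanced
print(abs(burst).max())              # 25.0 -> stays below 30 V
```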
They also proposed a novel super-resolution strategy that can render tactile sensation at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), so the user can feel more realistic tactile perception.
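The release doesn’t detail the super-resolution algorithm, but the usual way to evoke a sensation between two electrodes is to drive both at once with intensities weighted by proximity, so the skin perceives a single ‘virtual’ point in between. A one-dimensional sketch of that idea (the electrode pitch and the linear cross-fade are my assumptions):

```python
ELECTRODE_PITCH_MM = 2.0  # hypothetical spacing of the electrode array

def weights_for_virtual_point(x_mm: float) -> dict[int, float]:
    """Map a target position along one axis to drive levels (0..1)
    for the two nearest electrodes in a 1-D array."""
    left = int(x_mm // ELECTRODE_PITCH_MM)      # electrode at or left of target
    frac = (x_mm - left * ELECTRODE_PITCH_MM) / ELECTRODE_PITCH_MM
    return {left: 1.0 - frac, left + 1: frac}   # linear cross-fade

# A point exactly on electrode 1 drives only that electrode...
print(weights_for_virtual_point(2.0))  # {1: 1.0, 2: 0.0}
# ...while a point midway between electrodes 1 and 2 drives both equally,
# which can be perceived as a single stimulus between them.
print(weights_for_virtual_point(3.0))  # {1: 0.5, 2: 0.5}
```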
Tactile stimuli with high spatial resolution
“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm²), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.
The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.
The proposed strategy breaks down the alphabet and numerical digits into individual strokes and order in the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise the alphabet presented by feeling the direction and the sequence of the strokes with the fingertip sensor. “This would be particularly useful for people who lose their eye sight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.
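The release describes the Braille alternative only at a high level, so here is a hypothetical sketch of how stroke-sequence rendering might be organized in code: each character becomes an ordered list of stroke directions, played back one at a time on the fingertip device. The stroke encodings and function names are my own illustrative guesses, not the study’s actual alphabet.

```python
import time

# Hypothetical stroke encodings: each letter broken into strokes in writing order.
STROKES = {
    "L": ["down", "right"],            # vertical bar, then base
    "T": ["right", "down"],            # top bar, then stem
    "V": ["down-right", "up-right"],
}

def render_stroke(direction: str) -> None:
    """Placeholder for sweeping the electrode array in one direction."""
    print(f"sweep fingertip pattern: {direction}")

def render_letter(ch: str, stroke_seconds: float = 0.5) -> None:
    """Play a letter's strokes in writing order, as the wearer would feel them."""
    for direction in STROKES[ch.upper()]:
        render_stroke(direction)
        time.sleep(stroke_seconds)  # pause so strokes register as a sequence

render_letter("L")
```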
Enabling touch in the metaverse
Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.
The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer just 1 mm in radius and 0.44 mm thick based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.
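Put schematically, the glove demonstration is a sense-and-replay loop: sample the pressure map on the glove’s exterior, normalize it, and drive the fingertip stimulators inside. A hypothetical sketch (the 5×5 array size, function names, and loop rate are assumptions):

```python
import numpy as np

def read_pressure_array() -> np.ndarray:
    """Stand-in for sampling the glove's exterior tactile sensor array."""
    return np.random.rand(5, 5)  # replace with real sensor I/O

def drive_stimulators(levels: np.ndarray) -> None:
    """Stand-in for setting electrotactile intensities inside the glove."""
    pass  # replace with real actuator I/O

def relay_once(max_pressure: float = 1.0) -> None:
    """One pass of the loop: map exterior pressure to interior drive levels."""
    pressure = read_pressure_array()
    drive_stimulators(np.clip(pressure / max_pressure, 0.0, 1.0))

# Repeating relay_once() quickly enough (the release cites a 4 kHz response
# rate for the stimulators) is what lets the wearer feel a feature as small
# as the 1 mm washer in the experiment described above.
for _ in range(10):
    relay_once()
```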
“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.
…
Here’s a link to and a citation for the paper,
Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances 9 Sep 2022 Vol 8, Issue 36 DOI: 10.1126/sciadv.abp8738
As noted in the headline for this post, I have two items. For anyone unfamiliar with XR and the other (AR, MR, and VR) realities, I found a good description which I placed in my October 22, 2021 posting (scroll down to the “How many realities are there?” subhead about 70% of the way down).
eXtended Reality in Rome
I got an invitation (via a February 24, 2022 email) to participate in a special session at one of the 2022 IEEE (Institute of Electrical and Electronics Engineers) conferences (more about the conference later).
The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The eXtended Reality (XR) term encloses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the Metaverse. According to tech companies and venture capitalists, the Metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the Metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social activity, and trading. Thus, the Metaverse will evolve how users interact with brands, intellectual properties, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service and then attend a university course precisely the same as in the real world.
The development of the Metaverse will require a new software architecture that enables decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design efficient data management systems and prevent privacy violations. Finally, the convergence of virtually enhanced physical reality and an always-on virtual space highlights the need to rethink the current paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions.
TOPICS
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for Metaverse
Decentralized and Collaborative Architectures for Metaverse
Interoperability for Metaverse
Tools to help creators build the Metaverse
Operations and Maintenance in Metaverse
Data security and privacy mechanisms for Metaverse
Cryptocurrency, token, NFT Solutions for Metaverse
Fraud-Detection in Metaverse
Cyber Security for Metaverse
Data Analytics to Identify Malicious Behaviors in Metaverse
Blockchain/AI technologies in Metaverse
Emerging Technologies and Applications for Metaverse
New models to evaluate the impact of the Metaverse
Interactive Data Exploration and Presentation in Metaverse
Human factors issues related to Metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
ABOUT THE ORGANIZERS
Giuseppe Caggianese is a Research Scientist at the National Research Council of Italy. He received the Laurea degree in computer science magna cum laude in 2010 and the Ph.D. degree in Methods and Technologies for Environmental Monitoring in 2013 from the University of Basilicata, Italy.
His research activities are focused on the field of Human-Computer Interaction (HCI) and Artificial Intelligence (AI) to design and test advanced interfaces adaptive to specific uses and users in both augmented and virtual reality. He authored more than 30 scientific papers published in international journals, conference proceedings, and books. He also serves on program committees of several international conferences and workshops.
Ugo Erra is an Assistant Professor (qualified as Associate Professor) at the University of Basilicata (UNIBAS), Italy. He is the founder of the Computer Graphics Laboratory at the University of Basilicata. He received an MSc/diploma degree in Computer Science from the University of Salerno, Italy, in 2001 and a PhD in Computer Science in 2004.
His research focuses on Real-Time Computer Graphics, Information Visualization, Artificial Intelligence, and Parallel Computing. He has been involved in several research projects; among these, he participated in one project funded by the European Commission as a research fellow and led four projects funded by Area Science Park, a public national research organization that promotes the development of innovation processes, as principal investigator. He has (co-)authored about 14 international journal articles, 45 international conference proceedings, and two book chapters. He has supervised four PhD students. He organized the Workshop on Parallel and Distributed Agent-Based Simulations, a satellite workshop of Euro-Par, from 2013 to 2015. He has served as a program committee member for more than 20 international conferences and as a referee for more than ten journals.
The 2022 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2022) will be an international event mainly aimed at creating a synergy between experts in eXtended Reality, Brain-Computer Interface, and Artificial Intelligence, with special attention to measurement [i.e., metrology].
The conference will be a unique opportunity for discussion among scientists, technologists, and companies working in very specific sectors, intended to increase the visibility and scientific impact of participants’ work. The organizing formula is original in its emphasis on interaction among participants, who are encouraged to exchange ideas and material useful for their research activities.
MetroXRAINE will be configured as a synergistic collection of sessions organized by the individual members of the Scientific Committee. Round tables will be held for different projects and hot research topics. Moreover, we will have demo sessions, student contests, interactive company expositions, awards, and so on.
The Conference will be a hybrid conference [emphasis mine], with the possibility of attendance remotely or in presence.
CALL FOR PAPERS
The Program Committee invites the submission of abstracts (1–2 pages) for the IEEE MetroXRAINE 2022 Conference, October 26-28, 2022.
All contributions will be peer-reviewed, and acceptance will be based on quality, originality, and relevance. Accepted papers will be submitted for inclusion in the IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post publication.
…
Abstract Submission Deadline:
March 28, 2022
Full Paper Submission Deadline:
May 10, 2022
Extended Abstract Acceptance Notification:
June 10, 2022
Final Paper Submission Deadline:
July 30, 2022
According to the email invitation, “IEEE MetroXRAINE 2022 … will be held on October 26-28, 2022 in Rome.” You can find more details on the conference website.
Council of Canadian Academies launches four projects
This too is from an email. From the Council of Canadian Academies (CCA) announcement received February 27, 2022 (you can find the original February 17, 2022 CCA news release here),
The Council of Canadian Academies (CCA) is pleased to announce it will undertake four new assessments beginning this spring:
Gene-edited Organisms for Pest Control
Advances in gene editing tools and technologies have made the process of changing an organism’s genome more efficient, opening up a range of potential applications. One such application is in pest control. By editing the genomes of organisms and introducing them to wild populations, it’s now possible to control insect-borne disease and invasive species, or reverse insecticide resistance in pests. But the full implications of using these methods remain uncertain.
This assessment will examine the scientific, bioethical, and regulatory challenges associated with the use of gene-edited organisms and technologies for pest control.
Sponsor: Health Canada’s Pest Management Regulatory Agency
The Future of Arctic and Northern Research in Canada
The Arctic is undergoing unprecedented changes, spurred in large part by climate change and globalization. Record levels of sea ice loss are expected to lead to increased trade through the Northwest Passage. Ocean warming and changes to the tundra will transform marine and terrestrial ecosystems, while permafrost thaw will have significant effects on infrastructure and the release of greenhouse gases. As a result of these trends, Northern communities, and Canada as an Arctic and maritime country, are facing profound economic, social, and ecosystem impacts.
This assessment will examine the key foundational elements to create an inclusive, collaborative, effective, and world-class Arctic and northern science system in Canada.
Sponsor: A consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet
Quantum Technologies
Quantum technologies will affect all sectors of the Canadian economy. Built on the principles of quantum physics, these emerging technologies present significant opportunities in the areas of sensing and metrology, computation and communication, and data science and artificial intelligence, among others. But there is also the potential they could be used to facilitate cyberattacks, putting financial systems, utility grids, infrastructure, personal privacy, and national security at risk. A comprehensive exploration of the capabilities and potential vulnerabilities of these technologies will help to inform their future deployment across society and the economy.
This assessment will examine the impacts, opportunities, and challenges quantum technologies present for industry, governments, and people in Canada.
Sponsor: National Research Council Canada and Innovation, Science and Economic Development Canada
International Science and Technology Partnership Opportunities
International partnerships focused on science, technology, and innovation can provide Canada with an opportunity to advance the state of knowledge in areas of national importance, help address global challenges, and contribute to UN Sustainable Development Goals. Canadian companies could also benefit from global partnerships to access new and emerging markets.
While there are numerous opportunities for international collaborations, Canada has finite resources to support them. Potential partnerships need to be evaluated not just on strengths in areas such as science, technology, and innovation, but also political and economic factors.
This assessment will examine how public, private, and academic organizations can evaluate and prioritize science and technology partnership opportunities with other countries to achieve key national objectives.
Sponsor: Global Affairs Canada
Gene-edited Organisms for Pest Control and International Science and Technology Partnership Opportunities are funded by Innovation, Science and Economic Development Canada (ISED). Quantum Technologies is funded by the National Research Council of Canada (NRC) and ISED, and the Future of Arctic and Northern Research in Canada is funded by a consortium of Arctic and northern research and science organizations from across Canada led by ArcticNet. The reports will be released in 2023-24.
Multidisciplinary expert panels will be appointed in the coming months for all four assessments.
You can find in-progress and completed CCA reports here.
Fingers crossed that the CCA looks a little further afield for their international experts than the US, UK, Australia, New Zealand, and northern Europe.
Finally, I’m guessing that the gene-editing and pest management report will cover and, gingerly, recommend germline editing (which is currently not allowed in Canada) and gene drives too.
It will be interesting to see who’s on that committee. If you’re really interested in the report topic, you may want to check out my April 26, 2019 posting and scroll down to the “Criminal ban on human gene-editing of inheritable cells (in Canada)” subhead where I examined what seemed to be an informal attempt to persuade policy makers to allow germline editing or gene-editing of inheritable cells in Canada.
Before getting to the announcement, a note: this talk and Q&A (question and answer) session is co-hosted by the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
For anyone curious about OCAD, it was the Ontario College of Art and Design until, in a very odd government/marketing (?) move, the word ‘University’ was added. As for DMG, in their own words from their About page, “DMG is a not-for-profit videogame arts organization that creates space for marginalized creators to make, play and critique videogames within a cultural context.” They are located in Toronto, Ontario. Finally, the ArtSci Salon and the Fields Institute are located at the University of Toronto.
As for the talk, here’s more from the November 28, 2021 ArtSci Salon announcement (received via email),
Inspired by her own experience with the health care system to treat a post-reproductive disease, interdisciplinary artist [Camille] Baker created the project INTER/her, an immersive installation and VR [virtual reality] experience exploring the inner world of women’s bodies and the reproductive diseases they suffer. The project was created to open up the conversation about phenomena experienced by women in their late 30s (sometimes earlier), their 40s, and sometimes after menopause. Working in consultation with a gynecologist, the project features interviews with several women telling their stories. The themes in the work include issues of female identity, sexuality, body image, loss of body parts, pain, disease, and cancer. INTER/her has a focus on female reproductive diseases explored through a feminist lens: as personal exploration, as a conversation starter, to raise greater public awareness, and to encourage community building. The work also represents the lived experience of women’s pain and anger, and conflicting thoughts through self-care and the growth of disease. Feelings of mortality are explored through a medical process in male-dominated medical institutions and a dearth of reliable information. https://inter-her.art/ [1]
In 2021, the installation was shortlisted for the Lumen Prize.
Join us for a talk and Q&A with the artist to discuss her work and its future development.
After registering, you will receive a confirmation email containing information about joining the meeting.
This talk is Co-Hosted by the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences and the OCAD University/DMG Bodies in Play (BiP) initiative.
This event will be recorded and archived on the ArtSci Salon YouTube channel.
Bio
Camille Baker is a Professor in Interactive and Immersive Arts, University for the Creative Arts [UCA], Farnham Surrey (UK). She is an artist-performer/researcher/curator within various art forms: immersive experiences, participatory performance and interactive art, mobile media art, tech fashion/soft circuits/DIY electronics, responsive interfaces and environments, and emerging media curating. Maker of participatory performance and immersive artwork, Baker develops methods to explore expressive non-verbal modes of communication, extended embodiment and presence in real and mixed reality and interactive art contexts, using XR, haptics/ e-textiles, wearable devices and mobile media. She has an ongoing fascination with all things emotional, embodied, felt, sensed, the visceral, physical, and relational.
Her 2018 book _New Directions in Mobile Media and Performance_ showcases exciting approaches and artists in this space, as well as her own work. Since 2014, she has been running a regular meetup group with smart/e-textile artists and designers, called e-stitches, where participants share their practice and facilitate workshops on new techniques and innovations. Baker was also Principal Investigator for UCA on the EU-funded STARTS Ecosystem (starts.eu [2]) from April 2019 to November 2021 and founder/initiator of the EU WEAR Sustain project from January 2017 to April 2019 (wearsustain.eu [3]).
The ‘metaverse’ seems to be everywhere these days, especially since Facebook made a number of announcements about theirs (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
…
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
…
Who is Nick Pringle and how accurate are his predictions?
… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …
I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.
…
I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinct terms. We do this all the time in all sorts of conversations, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked: Pringle doesn’t mention any track record for his predictions, but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
…
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
…
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
…
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
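The Verge’s receipt analogy is simple enough to sketch. Here is a toy, in-memory stand-in in Python for what would, in practice, be a blockchain ledger (an ERC-721-style token contract, for example); every name below is invented for illustration.

class ToyNFTRegistry:
    # A toy, in-memory stand-in for an NFT ownership ledger.
    def __init__(self):
        self.owners = {}  # token_id -> current owner

    def mint(self, token_id, owner):
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id, seller, buyer):
        if self.owners.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        self.owners[token_id] = buyer

# Buy a virtual shirt on Metaverse Platform A; any platform reading the same
# ledger (B through Z) can verify the owner and render the shirt accordingly.
registry = ToyNFTRegistry()
registry.mint("shirt-42", "alice")
registry.transfer("shirt-42", "alice", "bob")
print(registry.owners["shirt-42"])  # -> bob

The real systems add cryptographic signatures and decentralized consensus on top, but the core promise is just this shared, authoritative table of owners.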
Since I started this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story with a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
…
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
…
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
…
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself, according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entails. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
…
If you have time, do read Heath’s article in its entirety.
An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,
…
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
…
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
…
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,
…
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email exchange with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regard to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both.
What is the Metaverse? The short answer is that it does not exist yet. At the moment, it is a vision of what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Data
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe her frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, determine that she is hungry, and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly?
How and when should users’ consent be collected?
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse?
What data sharing arrangements need to be put in place, and how will these be implemented?
…
There’s a lot more to this page, including a look at Social Media Regulation and Intellectual Property Rights.
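The law firm’s hungry-participant scenario is, at bottom, an inference over gaze data. Here is a hypothetical sketch in Python of how crudely simple such an inference could be; the event format, category names, and dwell threshold are all my own inventions for illustration.

from collections import defaultdict

def infer_interests(gaze_events, dwell_threshold_s=2.0):
    # gaze_events: (object_category, dwell_seconds) pairs logged in the
    # background as the user looks around a virtual street.
    totals = defaultdict(float)
    for category, dwell in gaze_events:
        totals[category] += dwell
    # Keep categories whose cumulative dwell time crosses the threshold,
    # strongest signal first.
    hits = [c for c, t in totals.items() if t >= dwell_threshold_s]
    return sorted(hits, key=lambda c: -totals[c])

session = [("bakery_window", 1.5), ("cafe_menu", 1.2),
           ("bakery_window", 1.0), ("shoe_shop", 0.4)]
print(infer_interests(session))  # -> ['bakery_window']; cue the food adverts

The point of the sketch is the asymmetry the authors describe: the user never typed a search; a few seconds of glancing was enough.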
I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
Here’s a description, from researcher Mohamed Kari, of the video (which you can see above) and of the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration among Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
…
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].
…
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
…
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
…
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Multiverses
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.
…
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
…
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse that effectively replaces the internet. This metaverse can include any or all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
…
D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.
Markus Buehler and his musical spider webs are making news again.
The image (so pretty) you see above comes from a Markus Buehler presentation made at the American Chemical Society (ACS) meeting, ACS Spring 2021, held online April 5-30, 2021. The image was also shown during a press conference, which the ACS has made available for public viewing. More about that later in this posting.
Spiders are master builders, expertly weaving strands of silk into intricate 3D webs that serve as the spider’s home and hunting ground. If humans could enter the spider’s world, they could learn about web construction, arachnid behavior and more. Today, scientists report that they have translated the structure of a web into music, which could have applications ranging from better 3D printers to cross-species communication and otherworldly musical compositions.
The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30 [2021]. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30 [2021]. The meeting features nearly 9,000 presentations on a wide range of science topics.
“The spider lives in an environment of vibrating strings,” says Markus Buehler, Ph.D., the project’s principal investigator, who is presenting the work. “They don’t see very well, so they sense their world through vibrations, which have different frequencies.” Such vibrations occur, for example, when the spider stretches a silk strand during construction, or when the wind or a trapped fly moves the web.
Buehler, who has long been interested in music, wondered if he could extract rhythms and melodies of non-human origin from natural materials, such as spider webs. “Webs could be a new source for musical inspiration that is very different from the usual human experience,” he says. In addition, by experiencing a web through hearing as well as vision, Buehler and colleagues at the Massachusetts Institute of Technology (MIT), together with collaborator Tomás Saraceno at Studio Tomás Saraceno, hoped to gain new insights into the 3D architecture and construction of webs.
With these goals in mind, the researchers scanned a natural spider web with a laser to capture 2D cross-sections and then used computer algorithms to reconstruct the web’s 3D network. The team assigned different frequencies of sound to strands of the web, creating “notes” that they combined in patterns based on the web’s 3D structure to generate melodies. The researchers then created a harp-like instrument and played the spider web music in several live performances around the world.
The team also made a virtual reality setup that allowed people to visually and audibly “enter” the web. “The virtual reality environment is really intriguing because your ears are going to pick up structural features that you might see but not immediately recognize,” Buehler says. “By hearing it and seeing it at the same time, you can really start to understand the environment the spider lives in.”
To gain insights into how spiders build webs, the researchers scanned a web during the construction process, transforming each stage into music with different sounds. “The sounds our harp-like instrument makes change during the process, reflecting the way the spider builds the web,” Buehler says. “So, we can explore the temporal sequence of how the web is being constructed in audible form.” This step-by-step knowledge of how a spider builds a web could help in devising “spider-mimicking” 3D printers that build complex microelectronics. “The spider’s way of ‘printing’ the web is remarkable because no support material is used, as is often needed in current 3D printing methods,” he says.
In other experiments, the researchers explored how the sound of a web changes as it’s exposed to different mechanical forces, such as stretching. “In the virtual reality environment, we can begin to pull the web apart, and when we do that, the tension of the strings and the sound they produce change. At some point, the strands break, and they make a snapping sound,” Buehler says.
The team is also interested in learning how to communicate with spiders in their own language. They recorded web vibrations produced when spiders performed different activities, such as building a web, communicating with other spiders or sending courtship signals. Although the frequencies sounded similar to the human ear, a machine learning algorithm correctly classified the sounds into the different activities. “Now we’re trying to generate synthetic signals to basically speak the language of the spider,” Buehler says. “If we expose them to certain patterns of rhythms or vibrations, can we affect what they do, and can we begin to communicate with them? Those are really exciting ideas.”
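The release doesn’t spell out the strand-to-note mapping, but a plausible minimal version treats each silk strand like a vibrating string whose pitch rises as its length shrinks. Here is a sketch in Python along those lines; the scaling constant and plain sine-wave synthesis are my assumptions, not the team’s actual method.

import numpy as np

def strand_frequencies(lengths_mm, k=4000.0):
    # Shorter strand -> higher pitch, like a string (f ~ 1/length).
    # k is an arbitrary constant chosen to land in an audible range.
    return k / np.asarray(lengths_mm, dtype=float)

def render_note(freq_hz, seconds=0.5, rate=44100):
    # Synthesize one strand's "note" as a plain sine wave.
    t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

strands_mm = [12.0, 8.5, 20.0, 5.0]  # hypothetical strand lengths
melody = np.concatenate([render_note(f) for f in strand_frequencies(strands_mm)])
# 'melody' is a 44.1 kHz waveform; write it to a WAV file to hear the 'web'.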
You can go here for the April 12, 2021 ‘Making music from spider webs’ ACS press conference; it runs about 30 mins., and you will hear some ‘spider music’ played.
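The activity-classification experiment mentioned in the release also lends itself to a sketch. The team doesn’t name its algorithm, so the band-power features and random forest below are stand-ins of mine; the point is only that labelled vibration recordings plus an off-the-shelf classifier can separate the activities.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(signal, bands=16):
    # Summarize a vibration recording as average power in coarse frequency bands.
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([chunk.mean() for chunk in np.array_split(power, bands)])

# Stand-in data: in the real experiment, each recording would come from a web
# and carry a label for what the spider was doing at the time.
rng = np.random.default_rng(0)
X = np.array([spectral_features(rng.normal(size=2048)) for _ in range(60)])
y = rng.choice(["building", "courtship", "signalling"], size=60)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))  # classify three recordings by activity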
Getting back to the image and spider webs in general, we are most familiar with orb webs (in the part of Canada where I’m from, if nowhere else), which look like spirals and are 2D. There are several other types of webs, some of which are 3D, like tangle webs (also known as cobwebs), funnel webs, and more. See this March 18, 2020 article “9 Types of Spider Webs: Identification + Pictures & Spiders” by Zach David on Beyond the Treat for more about spiders and their webs. If you have the time, I recommend reading it.
I’ve been following Buehler’s spider web/music work for close to ten years now; the most recent posting is an October 23, 2019 posting, where you’ll find a link to an application that makes music from proteins (spider webs are made up of proteins). Scroll down about 30% of the way; the link is in the second-to-last line of the quoted text about the embedded video.
Here is a video (2 mins. 17 secs.) of a spider web music performance that Buehler placed on YouTube,
Feb 3, 2021
Markus J. Buehler
Spider’s Canvas/Arachnodrone show excerpt at Palais de Tokyo, Paris, in November 2018. Video by MIT CAST. More videos can be found on www.arachnodrone.com. The performance was commissioned by Studio Tomás Saraceno (STS), in the context of Saraceno’s carte blanche exhibition, ON AIR. Spider’s Canvas/Arachnodrone was performed by Isabelle Su and Ian Hattwick on the spider web instrument, Evan Ziporyn on the EWI (Electronic Wind Instrument), and Christine Southworth on the guitar and EBow (Electronic Bow)
Spider’s Canvas / Arachnodrone is inspired by the multifaceted work of artist Tomas Saraceno, specifically his work using multiple species of spiders to make sculptural webs. Different species make very different types of webs, ranging not just in size but in design and functionality. Tomas’ own web sculptures are in essence collaborations with the spiders themselves, placing them sequentially over time in the same space, so that the complex, 3-dimensional sculptural web that results is in fact built by several spiders, working together.
Meanwhile, back among the humans at MIT, Isabelle Su, a Course 1 doctoral student in civil engineering, has been focusing on analyzing the structure of single-species spider webs, specifically the ‘tent webs’ of the cyrtophora citricola, a tropical spider of particular interest to her, Tomas, and Professor Markus Buehler. Tomas gave the department a cyrtophora spider, the department gave the spider a space (a small terrarium without glass), and she in turn built a beautiful and complex web. Isabelle then scanned it in 3D and made a virtual model. At the suggestion of Evan Ziporyn and Eran Egozy, she then ported the model into Unity, a VR/game-making program, where a ‘player’ can move through it in numerous ways. Evan & Christine Southworth then worked with her on ‘sonifying’ the web and turning it into an interactive virtual instrument, effectively turning the web into a 1700-string resonating instrument, based on the proportional length of each individual piece of silk and their proximity to one another. As we move through the web (currently just with a computer trackpad, but eventually in a VR environment), we create a ‘sonic biome’: complex ‘just intonation’ chords that come in and out of earshot according to which of her strings we are closest to. That part was all done in MAX/MSP, a very flexible high-level audio programming environment, which was connected with the virtual environment in Unity. Our new colleague Ian Hattwick joined the team focusing on sound design and spatialization, building an interface that allowed him to sonically ‘sculpt’ the sculpture in real time, changing amplitude, resonance, and other factors. During this performance at Palais de Tokyo, Isabelle toured the web – that’s what the viewer sees – while Ian adjusted sounds, so in essence they were together “playing the web.” Isabelle provides a space (the virtual web) and a specific location within it (by driving through), which is what the viewer sees, from multiple angles, on the 3 scrims. The location has certain acoustic potentialities, and Ian occupies them sonically, just as a real human performer does in a real acoustic space. A rough analogy might be something like wandering through a gothic cathedral or a resonant cave, using your voice or an instrument at different volumes and on different pitches to find sonorous resonances, echoes, etc. Meanwhile, Evan and Christine are improvising with the web instrument, building on Ian’s sound, with Evan on EWI (Electronic Wind Instrument) and Christine on electric guitar with EBow.
For the visuals, Southworth wanted to create the illusion that the performers were actually inside the web. We built a structure covered in sharkstooth scrim, with 3 projectors projecting in and through from 3 sides. Southworth created images using her photographs of local Lexington, MA spider webs mixed with slides of the scan of the web at MIT, and then mixed those images with the projection of the game, creating an interactive replica of Saraceno’s multi-species webs.
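The ‘sonic biome’ described above (chords coming in and out of earshot as you move through the web) can be sketched as distance-based gain. A minimal illustration in Python follows; the falloff law and the numbers are assumptions of mine, not what the team’s MAX/MSP patch actually does.

import numpy as np

def strand_gains(listener_pos, strand_midpoints, rolloff=0.5):
    # Return a 0..1 loudness per strand from the listener's 3D position:
    # nearby strands dominate the chord, distant ones fade out of earshot.
    d = np.linalg.norm(np.asarray(strand_midpoints, dtype=float)
                       - np.asarray(listener_pos, dtype=float), axis=1)
    return 1.0 / (1.0 + (d / rolloff) ** 2)

midpoints = [[0.0, 0.0, 0.0], [0.3, 0.1, 0.0], [2.0, 1.0, 0.5]]
print(strand_gains([0.1, 0.0, 0.0], midpoints))  # the first two strands dominate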
If you listen to the press conference, you will hear Buehler talk about practical applications for this work in materials science.