Monthly Archives: October 2021

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1993 [sic; published in 1992] sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinctions. We shift usage like this all the time in all sorts of conversations, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.
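Before moving on: the “permanent receipt” idea in that NFT passage is easier to see in code. Here is a minimal sketch of the underlying data model, in plain Python rather than on any blockchain; every name, ID, and URL in it is invented for illustration, and a real NFT (e.g., an ERC-721 token on Ethereum) adds the cryptographic signatures and distributed ledger that make such records tamper-resistant.

```python
# A toy version of an NFT-style ownership record: a token that says
# who owns a virtual good and points at the asset itself. All names,
# IDs, and the URL below are made up; this in-memory dictionary has
# none of the guarantees a real blockchain ledger provides.

from dataclasses import dataclass

@dataclass
class VirtualGoodToken:
    token_id: int   # unique identifier for the token
    owner: str      # current owner's account name or wallet address
    item_uri: str   # pointer to the asset, e.g. a 3D model of a shirt

# The "ledger" is just a dictionary here.
ledger: dict[int, VirtualGoodToken] = {}

def mint(token_id: int, owner: str, item_uri: str) -> None:
    """Create a new ownership record (analogous to minting an NFT)."""
    ledger[token_id] = VirtualGoodToken(token_id, owner, item_uri)

def transfer(token_id: int, new_owner: str) -> None:
    """Reassign ownership, as when a virtual good is resold."""
    ledger[token_id].owner = new_owner

# Platform A mints a virtual shirt for Alice; any other platform that
# reads and trusts the same ledger could let her avatar wear it there.
mint(42, "alice", "https://example.com/assets/shirt.glb")
transfer(42, "bob")
print(ledger[42])
```

The interoperability promise (“redeem the same shirt in Metaverse Platforms B to Z”) amounts to every platform agreeing to read and trust the same ledger, which is the hard part in practice.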

Facebook’s multiverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned Facebook is once again having some credibility issues according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international legal firm, Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but one that can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
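For my own benefit, I tried reducing those definitions to two yes/no questions. The little sketch below is strictly my simplification of the essay’s summary, not an official taxonomy:

```python
# My simplification of the VR/AR/MR/XR definitions quoted above,
# reduced to two coarse yes/no properties. Not an official taxonomy.

def classify_reality(fully_synthetic: bool, virtual_interacts_with_real: bool) -> str:
    """Return 'VR', 'MR', or 'AR' from two coarse properties."""
    if fully_synthetic:
        return "VR"  # user is immersed in a completely virtual environment
    if virtual_interacts_with_real:
        return "MR"  # virtual objects respond to the actual environment
    return "AR"      # virtual overlay only, no interaction with surroundings

print(classify_reality(True, False))   # VR
print(classify_reality(False, True))   # MR
print(classify_reality(False, False))  # AR

# XR is simply the umbrella term covering all three.
XR = {"VR", "AR", "MR"}
```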

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal) The Infinite here and Space Explorers: The ISS experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse effectively replacing the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

World CRISPR Day on October 20, 2021 from 8:00 a.m. – 6:00 p.m. PDT

H/t to rapper Baba Brinkman (born in Canada and based in New York City) for the tweet/retweet about his upcoming appearance at World CRISPR (clustered regularly interspaced short palindromic repeats) Day on October 20, 2021 from 8:00 a.m. – 6:00 p.m. PDT,

Baba Brinkman @BabaBrinkman

True facts! I’ve been working with incredible #CRISPR innovator @Synthego and the @EventRapInc team, and tomorrow is #WorldCRISPRDay! Look for new DNA-themed videos and streamed performances all day from @HilaTheKilla, @CoreyJGray, @ZEPS, @MCAbdominal and me. Sign up to watch!

Synthego @Synthego

🎵 BREAKING NEWS 🎵 We’re delighted to announce that @BabaBrinkman will be performing live at #WorldCRISPRDay! Register today so you don’t miss out on this special and exclusive performance at the biggest event in #CRISPR! https://hubs.li/H0ZGfSG0

World CRISPR Day (it’s free) is being hosted by Synthego, from their About Us (company) webpage,

Synthego is a genome engineering company that enables the acceleration of life science research and development in the pursuit of improved human health.

The company leverages machine learning, automation, and gene editing to build platforms for science at scale. With its foundations in engineering disciplines, the company’s platform technologies vertically integrate proprietary hardware, software, bioinformatics, chemistries, and molecular biology to advance basic research, target validation, and clinical trials.

With its technologies cited in hundreds of peer-reviewed publications and utilized by thousands of commercial and academic researchers and therapeutic drug developers, Synthego is at the forefront of innovation enabling the next generation of medicines by delivering genome editing at an unprecedented scale.

Here’s the company’s (undated) announcement about the upcoming World CRISPR Day,

Synthego is proud to host the 2nd annual World CRISPR Day virtual event on October 20, 2021, where we can share, listen, and learn about the latest advancements in CRISPR. The day will include presentations from the world’s leading Genome Engineers, a panel discussion featuring the women of CRISPR, and much more! Don’t miss your chance to learn from the experts how CRISPR is editing the future of medicine.

Despite the COVID-related challenges that the global research community continues to face, scientists have persevered in their relentless pursuit of advancing human health. The field of CRISPR has been no exception. With development of new CRISPR innovations, drug discovery and diagnostic methods, and numerous successful reports of CRISPR-based cell and gene therapy clinical trials, the promise of CRISPR in the clinic is becoming a reality.

Join us at World CRISPR Day to hear academic and industry experts talk about their transformative research, visit our partner’s booths, take advantage of the different networking sessions with your peers, and much more!

Register now for free!

You can find World CRISPR Day 2021 here and you can find Baba Brinkman’s website here.

Having looked at the pop-up pages describing the panel discussions and participants, and having looked at their World CRISPR Day 2021 and 2020 videos, I strongly suspect that this day focuses on CRISPR as the solution to any number of problems in the life sciences, an area where, coincidentally, Synthego and its partners have significant expertise. With that proviso in mind, I’m sure this will be a very interesting and worthwhile day.

Rapid formation of micro- and nanoplastics in the environment

Image: Nora Meides.

A June 18, 2021 news item on phys.org announces the results of research into how materials made of plastic break down into micro- and nanoplastic particles in the environment,

Most microplastic particles in the environment originate from larger pieces of plastic. In a long-term study, an interdisciplinary research team at the University of Bayreuth has simulated how quickly plastic breaks down into fragments under natural influences. High-tech laboratory tests on polystyrene show two phases of abiotic degradation. To begin with, the stability of the plastic is weakened by photo-oxidation. Then cracks form and more and smaller fragments are released into the environment. The study, published in the journal Environmental Science & Technology, allows conclusions to be drawn about other plastics that are common in the environment.

A June 17, 2021 University of Bayreuth press release, which originated the news item, provides more detail,

Polystyrene is an inexpensive plastic that is often used for packaging and thermal insulation, and is therefore particularly common in plastic waste. As part of their long-term study, the Bayreuth researchers for the first time combined analytical investigations, which were also carried out on polystyrene particles at the atomic level, with measurements determining the behaviour of these particles under mechanical stress. On the basis of this, they developed a model for abiotic degradation, i.e. degradation without the influence of living organisms.

“Our study shows that a single microplastic particle with a diameter of 160 micrometres releases about 500 particles in the order of 20 micrometres – i.e. 0.02 millimetres – over the course of one and a half years of being exposed to natural weathering processes in the environment. Over time, these particles in turn break down into smaller and smaller fragments. An ecocorona can form around these tiny particles, possibly facilitating penetration into the cells of living organisms. This was discovered a few months ago by another Bayreuth research group,” says first author Nora Meides, a doctoral student in macromolecular chemistry at the University of Bayreuth.

In the water, the microplastic particles were exposed to two stress factors: intense sunlight and continuous mechanical stress produced by agitation. In the real-world environment, sunlight and mechanical stress are in fact the two main abiotic factors that contribute to the gradual fragmentation of the particles. Irradiation by sunlight triggers oxidation processes on the surface of the particles. This photo-oxidation, in combination with mechanical stress, has significant consequences. The polystyrene chains become ever shorter. Furthermore, they become increasingly polar, i.e. centres of charge are formed in the molecules. In the second phase, the microplastic particles begin to fragment. Here, the particles break down into smaller and smaller micro- and nanoplastic fragments.

“Our research results are a valuable basis for investigating the abiotic degradation of macro- and microplastics in the environment – both on land and at the surface of water – in more detail, using other types of plastic as examples. We were surprised by the speed of fragmentation ourselves, which again shows the potential risks that could emanate from the growing burden of plastics on the environment. Larger plastic waste objects especially are – when exposed to sunlight and abrasion – a reservoir of constant microplastic input. It is precisely these tiny particles, barely visible to the naked eye, that spread to the remotest ecosystems via various transport routes,” says Teresa Menzel, PhD student in the area of Polymer Engineering.

“The polystyrene investigated in our long-term study has a carbon-chain backbone, just like polyethylene and polypropylene. It is very likely that the two-phase model we have developed on polystyrene can be transferred to these plastics,” adds lead author Prof. Dr. Jürgen Senker, Professor of Inorganic Chemistry, who coordinated the research work. 

The study that has now been published is the result of the close interdisciplinary cooperation of a working group belonging to the DFG Collaborative Research Centre “Microplastics” at the University of Bayreuth. In this team, scientists from macromolecular chemistry, inorganic chemistry, engineering science, and animal ecology are jointly researching the formation and degradation of microplastics. Numerous types of research technology are available on the Bayreuth campus for this purpose, which were used in the long-term study: among others, ¹³C-MAS-NMR spectroscopy, energy dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), and gel permeation chromatography (GPC).
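As a quick plausibility check on the numbers quoted above (one particle of 160 micrometres releasing about 500 particles of roughly 20 micrometres), here’s the back-of-the-envelope volume calculation, assuming roughly spherical particles and no material lost to even smaller fragments:

```python
# Back-of-the-envelope check: how many 20 µm spheres fit, by volume,
# into one 160 µm sphere? Assumes spherical particles and that all of
# the parent particle's volume ends up in same-sized fragments.

from math import pi

def sphere_volume(diameter_um: float) -> float:
    """Volume of a sphere (cubic micrometres) from its diameter."""
    radius = diameter_um / 2
    return (4 / 3) * pi * radius**3

parent = sphere_volume(160)    # the original microplastic particle
fragment = sphere_volume(20)   # one released fragment

print(parent / fragment)  # 512.0
```

The ratio works out to (160/20)³ = 512, close enough to the reported ~500 to suggest the measured fragment count is roughly what volume conservation would predict.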

Here’s a link to and a citation for the paper,

Reconstructing the Environmental Degradation of Polystyrene by Accelerated Weathering by Nora Meides, Teresa Menzel, Björn Poetzschner, Martin G. J. Löder, Ulrich Mansfeld, Peter Strohriegl, Volker Altstaedt, and Jürgen Senker. Environ. Sci. Technol. 2021, 55, 12, 7930–7938 DOI: https://doi.org/10.1021/acs.est.0c07718 Publication Date: May 21, 2021 Copyright © 2021 The Authors. Published by American Chemical Society

This paper is behind a paywall.

Use AI to reduce worries about nanoparticles in food

A June 16, 2021 news item on ScienceDaily announces research into the impact that engineered metallic nanoparticles used in agricultural practices have on food,

While crop yield has achieved a substantial boost from nanotechnology in recent years, alarms over the health risks posed by nanoparticles within fresh produce and grains have also increased. In particular, nanoparticles entering the soil through irrigation, fertilizers and other sources have raised concerns about whether plants absorb these minute particles enough to cause toxicity.

In a new study published online in the journal Environmental Science and Technology, researchers at Texas A&M University have used machine learning [a form of artificial intelligence {AI}] to evaluate the salient properties of metallic nanoparticles that make them more susceptible for plant uptake. The researchers said their algorithm could indicate how much plants accumulate nanoparticles in their roots and shoots.

A June 16, 2021 Texas A&M University news release (also on EurekAlert), which originated the news item, describes the research, which employed two different machine learning algorithms, in more detail,

Nanoparticles are a burgeoning trend in several fields, including medicine, consumer products and agriculture. Depending on the type of nanoparticle, some have favorable surface properties, charge and magnetism, among other features. These qualities make them ideal for a number of applications. For example, in agriculture, nanoparticles may be used as antimicrobials to protect plants from pathogens. Alternatively, they can be used to bind to fertilizers or insecticides and then programmed for slow release to increase plant absorption.

These agricultural practices and others, like irrigation, can cause nanoparticles to accumulate in the soil. However, with the different types of nanoparticles that could exist in the ground and a staggeringly large number of terrestrial plant species, including food crops, it is not clearly known if certain properties of nanoparticles make them more likely to be absorbed by some plant species than others.

“As you can imagine, if we have to test the presence of each nanoparticle for every plant species, it is a huge number of experiments, which is very time-consuming and expensive,” said Xingmao “Samuel” Ma, associate professor in the Zachry Department of Civil and Environmental Engineering. “To give you an idea, silver nanoparticles alone can have hundreds of different sizes, shapes and surface coatings, and so, experimentally testing each one, even for a single plant species, is impractical.”

Instead, for their study, the researchers chose two different machine learning algorithms, an artificial neural network and gene-expression programming. They first trained these algorithms on a database created from past research on different metallic nanoparticles and the specific plants in which they accumulated. In particular, their database contained the size, shape and other characteristics of different nanoparticles, along with information on how much of these particles were absorbed from soil or nutrient-enriched water into the plant body.

Once trained, their machine learning algorithms could correctly predict the likelihood of a given metallic nanoparticle to accumulate in a plant species. Also, their algorithms revealed that when plants are in a nutrient-enriched or hydroponic solution, the chemical makeup of the metallic nanoparticle determines the propensity of accumulation in the roots and shoots. But if plants are grown in soil, the contents of organic matter and the clay in soil are key to nanoparticle uptake.

Ma said that while the machine learning algorithms could make predictions for most food crops and terrestrial plants, they might not yet be ready for aquatic plants. He also noted that the next step in his research would be to investigate if the machine learning algorithms could predict nanoparticle uptake from leaves rather than through the roots.

“It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables and grains,” said Ma. “But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns.”
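To make “trained these algorithms on a database” a little more concrete, here’s a minimal sketch of that workflow using scikit-learn’s neural network regressor on synthetic data. The descriptor names, value ranges, and target below are all invented for illustration; the actual study trained on a curated literature database and also tested gene-expression programming, which scikit-learn does not provide.

```python
# A sketch of the prediction workflow described above: train a small
# neural network on nanoparticle descriptors to predict plant uptake.
# Everything here (features, ranges, target) is fabricated.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200

# Hypothetical descriptors: particle size (nm), zeta potential (mV),
# exposure concentration (mg/L), soil organic matter content (%).
X = np.column_stack([
    rng.uniform(5, 200, n),    # size
    rng.uniform(-40, 40, n),   # zeta potential
    rng.uniform(1, 100, n),    # concentration
    rng.uniform(0, 10, n),     # organic matter
])

# Fabricated target: relative uptake into roots (illustration only).
y = 0.5 * X[:, 2] / X[:, 0] + 0.1 * X[:, 3] + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),  # scale features so the network trains reliably
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```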

This image accompanies the paper’s research abstract,

[downloaded from https://pubs.acs.org/doi/full/10.1021/acs.est.1c01603]

Here’s a link to and a citation for the paper,

Prediction of Plant Uptake and Translocation of Engineered Metallic Nanoparticles by Machine Learning by Xiaoxuan Wang, Liwei Liu, Weilan Zhang, and Xingmao Ma. Environ. Sci. Technol. 2021, 55, 11, 7491–7500 DOI: https://doi.org/10.1021/acs.est.1c01603 Publication Date:May 17, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Walrus from Space project (citizen science)

Image: Norwegian Atlantic Walrus. Photo: Tor Lund / WWF [Downloaded from: https://eminetra.co.uk/climate-change-the-walrus-from-space-project-is-calling-on-the-general-public-to-help-search-for-animals-on-satellite-imagery-climate-news/755984/]

Yesterday (October 14, 2021), the World Wildlife Fund (WWF) announced their Walrus from Space project in a press release,

WWF and British Antarctic Survey (BAS) are seeking the public’s help to search for walrus in thousands of satellite images taken from space, with the aim of learning more about how walrus will be impacted by the climate crisis. It’s hoped half a million people worldwide will join the new ‘Walrus from Space’ research project, a census of Atlantic walrus and walrus from the Laptev Sea, using satellite images provided by space and intelligence company Maxar Technologies’ DigitalGlobe.

Walrus are facing the reality of the climate crisis: their Arctic home is warming almost three times faster than the rest of the world and roughly 13% of summer sea ice is disappearing per decade.

From the comfort of their own homes, aspiring conservationists around the world can study the satellite pictures online, spot areas where walrus haul out onto land, and then count them. The data collected in this census of Atlantic and Laptev walrus will give scientists a clearer picture of how each population is doing—without disturbing the animals. The data will also help inform management decisions aimed at conservation efforts for the species.

Walrus use sea ice for resting and to give birth to their young. As sea ice diminishes, more walrus are forced to seek refuge on land, congregating for the chance to rest. Overcrowded beaches can have fatal consequences; walrus are easily frightened, and when spooked they stampede towards the water, trampling one another in their panic. Resting on land (as opposed to sea ice) may also force walrus to swim further and expend more energy to reach their food—food which in turn is being negatively impacted by the warming and acidification of the ocean.

In addition walrus can also be disturbed by shipping traffic and industrial development as the loss of sea ice makes the Arctic more accessible. Walrus are almost certainly going to be impacted by the climate crisis, which could result in significant population declines.

Rod Downie, chief polar adviser at WWF, said:

“Walrus are an iconic species of great cultural significance to the people of the Arctic, but climate change is melting their icy home. It’s easy to feel powerless in the face of the climate and nature emergency, but this project enables individuals to take action to understand a species threatened by the climate crisis, and to help to safeguard their future. “What happens in the Arctic doesn’t stay there; the climate crisis is a global problem, bigger than any person, species or region. Ahead of hosting this year’s global climate summit, the UK must raise its ambition and keep all of its climate promises—for the sake of the walrus, and the world.”

Previous population estimates are based upon the best data and knowledge available, but there are challenges associated with working with marine mammals in such a vast, remote and largely inaccessible place. This project will build upon the knowledge of Indigenous communities, using satellite technology to provide an up-to-date count of Atlantic and Laptev walrus populations.

Hannah Cubaynes, wildlife from space research associate at British Antarctic Survey, said:

“Assessing walrus populations by traditional methods is very difficult as they live in extremely remote areas, spend much of their time on the sea ice and move around a lot. Satellite images can solve this problem as they can survey huge tracts of coastline to assess where walrus are and help us count the ones that we find. “However, doing that for all the Atlantic and Laptev walrus will take huge amounts of imagery, too much for a single scientist or small team, so we need help from thousands of citizen scientists to help us learn more about this iconic animal.”

Earlier this year Cub Scouts from across the UK became walrus spotters to test the platform ahead of its public release. The Scouts have been a partner of WWF since the early 1970s, and over 57 million scouts globally are engaged in environmental projects.

Cub Scout Imogen Scullard, age 9, said:

“I love learning about the planet and how it works. We need to protect it from climate change. We are helping the planet by doing the walrus count with space satellites, which is really cool. It was a hard thing to do but we stuck at it”

The ‘Walrus From Space’ project, which is supported by players of the People’s Postcode Lottery, as well as RBC Tech For Nature and WWF supporters, aims to recruit more than 500,000 citizen scientists over the next five years. Over the course of the project counting methods will be continually refined and improved as data is gathered.

Laura Chow, head of charities at People’s Postcode Lottery, said:

“We’re delighted that players’ support is bringing this fantastic project to life. We encourage everyone to get involved in finding walrus so they can play a part in helping us better understand the effects of climate change on this species and their ecosystem. “Players of People’s Postcode Lottery are supporting this project as part of our Postcode Climate Challenge initiative, which is providing 12 charities with an additional £24 million for projects tackling climate change this year.”

Aspiring conservationists can help protect the species by going to wwf.org.uk/walrusfromspace where they can register to participate, and then be guided through a training module before joining the walrus census.


The WWF has released a charming video invitation, “Become A Walrus Detective” (Note: It may be a little over the top for some),

The WWF has a Learn about Walrus from Space webpage, which features the video above and includes a registration button.

Is the United Kingdom an Arctic nation?

No, it is not. (You can check here on the Arctic Countries webpage of The Arctic Institute website.)

Nonetheless, and leaving aside that the Arctic and the Antarctic are literally polar opposites, I gather that the British Government, in the form of the British Antarctic Survey (BAS), is quite interested in the Arctic, viz., the Walrus from Space project.

If you keep digging you’ll find a chain of UK government agencies, starting from the BAS About page (at the bottom; Note: Links have been removed),

British Antarctic Survey (BAS) is a component of the Natural Environment Research Council (NERC).

NERC is part of UK Research and Innovation.

Keep digging (from the UK Research and Innovation entry on Wikipedia), Note: Links have been removed,

UK Research and Innovation (UKRI) is a non-departmental public body of the Government of the United Kingdom that directs research and innovation funding, funded through the science budget of the Department for Business, Energy and Industrial Strategy [emphases mine].

Interesting, non?

There doesn’t have to be a sinister connection between a government agency devoted to supporting business and industry and a climate change project. If we are to grapple with climate change in a significant way, we will need cooperation from many groups and countries (some of which may have been adversaries in the past).

Of course, the problem with the business community is that efforts aimed at the public good are often publicity stunts.

For anyone curious about the businesses mentioned in the press release, Maxar Technologies can be found here, Maxar’s DigitalGlobe here, and RBC (Royal Bank of Canada) Tech for Nature here.

BTW, I love that walrus picture at the beginning of this posting.

A gas, gas, gas for creating semiconducting nanomaterials?

A June 14, 2021 news item on phys.org highlights some new research from Rice University (Texas, US),

Scientific studies describing the most basic processes often have the greatest impact in the long run. A new work by Rice University engineers could be one such, and it’s a gas, gas, gas for nanomaterials.

Yes, I ‘stole’ the phrase from the news item/release for my headline. For anyone unfamiliar with the word ‘gas’ used as slang, it means something is good or wonderful (see Urban Dictionary).

Getting back to science, gas, and nanomaterials, a June 11, 2021 Rice University news release (also on EurekAlert), which originated the news item, answers some questions about how a nanomaterial used in electronics could be manufactured more easily,

Rice materials theorist Boris Yakobson, graduate student Jincheng Lei and alumnus Yu Xie of Rice’s Brown School of Engineering have unveiled how a popular 2D material, molybdenum disulfide (MoS2), flashes into existence during chemical vapor deposition (CVD).

Knowing how the process works will give scientists and engineers a way to optimize the bulk manufacture of MoS2 and other valuable materials classed as transition metal dichalcogenides (TMDs), semiconducting crystals that are good bets to find a home in next-generation electronics.

Their study in the American Chemical Society journal ACS Nano focuses on MoS2’s “pre-history,” specifically what happens in a CVD furnace once all the solid ingredients are in place. CVD, often associated with graphene and carbon nanotubes, has been exploited to make a variety of 2D materials by providing solid precursors and catalysts that sublimate into gas and react. The chemistry dictates which molecules fall out of the gas and settle on a substrate, like copper or silicon, and assemble into a 2D crystal.

The problem has been that once the furnace cranks up, it’s impossible to see or measure the complicated chain of reactions in the chemical stew in real time.

“Hundreds of labs are cooking these TMDs, quite oblivious to the intricate transformations occurring in the dark oven,” said Yakobson, the Karl F. Hasselmann Professor of Materials Science and NanoEngineering and a professor of chemistry. “Here, we’re using quantum-chemical simulations and analysis to reveal what’s there, in the dark, that leads to synthesis.”

Yakobson’s theories often lead experimentalists to make his predictions come true. (For example, boron buckyballs.) This time, the Rice lab determined the path molybdenum oxide (MoO3) and sulfur powder take to deposit an atomically thin lattice onto a surface.

The short answer is that it takes three steps. First, the solids are sublimated through heating to change them from solid to gas, including what Yakobson called a “beautiful” ring-molecule, trimolybdenum nonaoxide (Mo3O9). Second, the molybdenum-containing gases react with sulfur atoms under high heat, up to 4,040 degrees Fahrenheit. Third, molybdenum and sulfur molecules fall to the surface, where they crystallize into the jacks-like lattice that is characteristic of TMDs.

What happens in the middle step was of the most interest to the researchers. The lab’s simulations showed a trio of main gas phase reactants are the prime suspects in making MoS2: sulfur, the ring-like Mo3O9 molecules that form in sulfur’s presence and the subsequent hybrid of MoS6 that forms the crystal, releasing excess sulfur atoms in the process.

Lei said the molecular dynamics simulations showed the activation barriers that must be overcome to move the process along, usually in picoseconds.

“In our molecular dynamics simulation, we find that this ring is opened by its interaction with sulfur, which attacks oxygen connected to the molybdenum atoms,” he said. “The ring becomes a chain, and further interactions with the sulfur molecules separate this chain into molybdenum sulfide monomers. The most important part is the chain breaking, which overcomes the highest energy barrier.”

That realization could help labs streamline the process, Lei said. “If we can find precursor molecules with only one molybdenum atom, we would not need to overcome the high barrier of breaking the chain,” he said.
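To get a rough sense of why that chain-breaking barrier matters so much, here’s a back-of-the-envelope sketch of my own (the barrier heights below are hypothetical; only the temperature comes from the release, where 4,040 degrees Fahrenheit works out to roughly 2,500 kelvin). The Arrhenius factor exp(−Ea/kBT) sets how often a thermally activated step succeeds,

import math

# Illustrative only: compare two hypothetical activation barriers at a CVD
# furnace temperature using the Arrhenius factor exp(-Ea / (kB * T)).
KB_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def arrhenius_factor(ea_ev, temp_k):
    """Relative rate factor exp(-Ea / (kB * T)) for a barrier Ea (eV) at T (K)."""
    return math.exp(-ea_ev / (KB_EV * temp_k))

temp = 2500.0  # ~4,040 degrees Fahrenheit, expressed in kelvin
high_barrier, low_barrier = 3.0, 2.0  # hypothetical barriers, in eV
speedup = arrhenius_factor(low_barrier, temp) / arrhenius_factor(high_barrier, temp)
print(f"Lowering the barrier by 1 eV at {temp:.0f} K speeds that step up ~{speedup:.0f}x")

On these made-up numbers, shaving one electron volt off the rate-limiting barrier speeds the step up roughly a hundredfold, which is the intuition behind Lei’s suggestion of single-molybdenum precursors.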

Yakobson said the study could apply to other TMDs.

“The findings raise oftentimes empirical nanoengineering to become a basic science-guided endeavor, where processes can be predicted and optimized,” he said, noting that while the chemistry has been generally known since the discovery of TMD fullerenes in the early ’90s, understanding the specifics will further the development of 2D synthesis.

“Only now can we ‘sequence’ the step-by-step chemistry involved,” Yakobson said. “That will allow us to improve the quality of 2D material, and also see which gas side-products might be useful and captured on the way, opening opportunities for chemical engineering.”

Here’s a link to and a citation for the paper,

Gas-Phase “Prehistory” and Molecular Precursors in Monolayer Metal Dichalcogenides Synthesis: The Case of MoS2 by Jincheng Lei, Yu Xie, and Boris I. Yakobson. ACS Nano 2021, 15, 6, 10525–10531 DOI: https://doi.org/10.1021/acsnano.1c03103 Publication Date: June 9, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Autonopia will pilot automated window cleaning in Vancouver (Canada) in 2022

Construction worker working outdoors with the project. Courtesy: Autonopia

Kenneth Chan in a June 10, 2021 article for the Daily Hive describes a startup company in Vancouver (Canada), which hopes to run a pilot project in 2022 for its “HŌMĀN, a highly capable, fast and efficient autonomous machine, designed specifically for cleaning the glasses [windows] perfectly and quickly.” (The description is from Autonopia’s homepage.)

Chan’s June 10, 2021 article describes the new automated window washer as a Roomba-like robot,

The business of washing windows on a tower with human labour is a dangerous, inefficient, and costly practice, but a Vancouver innovator’s robotic solution could potentially disrupt this service globally.

Researchers with robotic systems startup Autonopia have come up with a robot that can mimic the behaviour of human window washers, including getting into the nooks and crannies of all types of complicated building facades — any surface structure.

It is also far more efficient than humans, cleaning windows three to four times faster, and can withstand wind and cold temperatures. According to a [news?] release, the robot is described as a modular device with a plug-and-play design [emphasis mine] that allows it to work on any building without requiring any additional infrastructure to be installed.

While artificial intelligence and the robotic device replaces manual work, it still requires a skilled operator to oversee the cleaning.

“It’s intimidating, hard work that most workers don’t want to do, [emphasis mine]” said Autonopia co-founder Mohammad Dabiri, who came up with the idea after witnessing an accident in Southeast Asia [emphasis mine].

“There’s high overhead to manage the hiring, allocation and training of workers, and sometimes they quit as soon as it comes time to go on a high rise.”

“We realized this problem has existed for a while, and yet none of the available solutions has managed to scale,” said Kamali Hossein, the co-founder and CTO of Autonopia, and a Mitacs postdoctoral research [sic] in mechatronic systems engineering at Simon Fraser University.

To clarify, the company is Autonopia and the product the company is promoting is HŌMĀN, an automated or robotic window washer for tall buildings (towers).

HŌMĀN (as it’s written in the Encyclopedia Iranica) or Houmān, as it’s written in Wikipedia, seems to be a literary hero or, perhaps, superhero,

… is one of the most famous Turanian heroes in Shahnameh, the national epic of Greater Iran. Houmān is famous for his bravery, loyalty, and chivalry, such that even Iranians who are longtime enemies of Turanians admire his personality. He is a descendant of Tur, a son of Viseh and brother of Piran. Houmān is the highest ranking Turanian commander and after Piran, he is the second leading member of Viseh clan. Houman first appears in the story of Rostam and Sohrab, …

Autonopia’s website is very attractive and weirdly uninformative. I looked for a more in-depth description of ‘plug and play’ and found this,

Modular and Maintainable

The design of simple, but highly capable and modular components, along with the overall simplicity of the robot structure allows for a shorter build time and maintenance turnover. …

Cleans any tower

The flexible and capable design of the robot allows it to adjust to the complexities of the structures and it can maneuver uneven surfaces of different buildings very quickly and safely. No tower is off-limits for HŌMĀN. It is designed to cater to the specific requirements of each high-rise.

I wish there were more details about the hardware and the software; for example, the website makes no mention of the artificial intelligence cited in Chan’s article.

As for whether or not this is “intimidating, hard work that most workers don’t want to do,” I wonder how Mohammad Dabiri can be so certain. If this product is successful, it will have an impact on people who rely on this work for their livelihoods. Possibly adding some insult to injury, Dabiri and Hossein claim their product is better at the job than humans are.

Nobody can argue about making work safer but it would be nice if some of these eager, entrepreneurial types put some thought into the impact, both positive and negative, that their bright ideas can have on other people.

As for whether HŌMĀN can work on any tower, photographs like the one at the beginning of this posting feature modern office buildings which look like glass sheets held together with steel and concrete. So, it doesn’t look likely to work (and it’s probably not feasible from a business perspective) on older buildings with fewer stories, stone ornamentation, and even more nooks and crannies. As for some of the newer buildings, which feature odd shapes and are reintroducing ornamentation, I’d imagine that will be problematic. But perhaps the market is overseas, where tall buildings can range from 65 stories to over 100 stories (Wikipedia ‘List of tallest buildings‘). After all, the genesis for this project was an incident in Southeast Asia. Vancouver doesn’t have 65-story buildings—yet. But, I’m sure there’s a developer or two out there with some plans.

A graphene ‘camera’ and your beating heart: say cheese

Comparing it to a ‘camera’, even with the quotes, is a bit of a stretch for my taste but I can’t come up with a better comparison. Here’s a video so you can judge for yourself,

Caption: This video repeats three times the graphene camera images of a single beat of an embryonic chicken heart. The images, separated by 5 milliseconds, were measured by a laser bouncing off a graphene sheet lying beneath the heart. The images are about 2 millimeters on a side. Credit: UC Berkeley images by Halleh Balch, Alister McGuire and Jason Horng

A June 16, 2021 news item on ScienceDaily announces the research,

Bay Area [San Francisco, California] scientists have captured the real-time electrical activity of a beating heart, using a sheet of graphene to record an optical image — almost like a video camera — of the faint electric fields generated by the rhythmic firing of the heart’s muscle cells.

A University of California at Berkeley (UC Berkeley) June 16, 2021 news release (also on EurekAlert) by Robert Sanders, which originated the news item, provides more detail,

The graphene camera represents a new type of sensor useful for studying cells and tissues that generate electrical voltages, including groups of neurons or cardiac muscle cells. To date, electrodes or chemical dyes have been used to measure electrical firing in these cells. But electrodes and dyes measure the voltage at one point only; a graphene sheet measures the voltage continuously over all the tissue it touches.

The development, published online last week in the journal Nano Letters, comes from a collaboration between two teams of quantum physicists at the University of California, Berkeley, and physical chemists at Stanford University.

“Because we are imaging all cells simultaneously onto a camera, we don’t have to scan, and we don’t have just a point measurement. We can image the entire network of cells at the same time,” said Halleh Balch, one of three first authors of the paper and a recent Ph.D. recipient in UC Berkeley’s Department of Physics.

While the graphene sensor works without having to label cells with dyes or tracers, it can easily be combined with standard microscopy to image fluorescently labeled nerve or muscle tissue while simultaneously recording the electrical signals the cells use to communicate.

“The ease with which you can image an entire region of a sample could be especially useful in the study of neural networks that have all sorts of cell types involved,” said another first author of the study, Allister McGuire, who recently received a Ph.D. from Stanford. “If you have a fluorescently labeled cell system, you might only be targeting a certain type of neuron. Our system would allow you to capture electrical activity in all neurons and their support cells with very high integrity, which could really impact the way that people do these network level studies.”

Graphene is a one-atom thick sheet of carbon atoms arranged in a two-dimensional hexagonal pattern reminiscent of honeycomb. The 2D structure has captured the interest of physicists for several decades because of its unique electrical properties and robustness and its interesting optical and optoelectronic properties.

“This is maybe the first example where you can use an optical readout of 2D materials to measure biological electrical fields,” said senior author Feng Wang, UC Berkeley professor of physics. “People have used 2D materials to do some sensing with pure electrical readout before, but this is unique in that it works with microscopy so that you can do parallel detection.”

The team calls the tool a critically coupled waveguide-amplified graphene electric field sensor, or CAGE sensor.

“This study is just a preliminary one; we want to showcase to biologists that there is such a tool you can use, and you can do great imaging. It has fast time resolution and great electric field sensitivity,” said the third first author, Jason Horng, a UC Berkeley Ph.D. recipient who is now a postdoctoral fellow at the National Institute of Standards and Technology. “Right now, it is just a prototype, but in the future, I think we can improve the device.”

Graphene is sensitive to electric fields

Ten years ago, Wang discovered that an electric field affects how graphene reflects or absorbs light. Balch and Horng exploited this discovery in designing the graphene camera. They obtained a sheet of graphene about 1 centimeter on a side produced by chemical vapor deposition in the lab of UC Berkeley physics professor Michael Crommie and placed on it a live heart from a chicken embryo, freshly extracted from a fertilized egg. These experiments were performed in the Stanford lab of Bianxiao Cui, who develops nanoscale tools to study electrical signaling in neurons and cardiac cells.

The team showed that when the graphene was tuned properly, the electrical signals that flowed along the surface of the heart during a beat were sufficient to change the reflectance of the graphene sheet.

“When cells contract, they fire action potentials that generate a small electric field outside of the cell,” Balch said. “The absorption of graphene right under that cell is modified, so we will see a change in the amount of light that comes back from that position on the large area of graphene.”

In initial studies, however, Horng found that the change in reflectance was too small to detect easily. An electric field reduces the reflectance of graphene by at most 2%; the changes in electric field produced when the heart muscle cells fired an action potential had a much smaller effect.

Together, Balch, Horng and Wang found a way to amplify this signal by adding a thin waveguide below graphene, forcing the reflected laser light to bounce internally about 100 times before escaping. This made the change in reflectance detectable by a normal optical video camera.

“One way of thinking about it is that the more times that light bounces off of graphene as it propagates through this little cavity, the more effects that light feels from graphene’s response, and that allows us to obtain very, very high sensitivity to electric fields and voltages down to microvolts,” Balch said.
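To see why bouncing the light roughly 100 times helps, here’s a quick toy calculation of my own (the single-pass figure below is hypothetical; the paper reports the device’s actual response). If each pass across the graphene changes the reflected intensity by a small fraction, the passes compound,

# Toy model: a fractional intensity change eps per pass compounds over
# n internal bounces roughly as 1 - (1 - eps)**n.

def compounded_change(eps, n_bounces):
    """Total fractional intensity change after n_bounces passes."""
    return 1.0 - (1.0 - eps) ** n_bounces

single_pass = 0.0005  # hypothetical 0.05% change per pass
for n in (1, 10, 100):
    print(f"{n:>3} bounces -> {compounded_change(single_pass, n):.2%} total change")

For small changes the total grows almost linearly with the number of bounces (here, 0.05% becomes roughly 5% after 100 passes), which is why the waveguide acts as a signal amplifier and brings the heart’s tiny fields within reach of an ordinary video camera.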

The increased amplification necessarily lowers the resolution of the image, but at 10 microns, it is more than enough to study cardiac cells that are several tens of microns across, she said.

Another application, McGuire said, is to test the effect of drug candidates on heart muscle before these drugs go into clinical trials to see whether, for example, they induce an unwanted arrhythmia. To demonstrate this, he and his colleagues observed the beating chicken heart with CAGE and an optical microscope while infusing it with a drug, blebbistatin, that inhibits the muscle protein myosin. They observed the heart stop beating, but CAGE showed that the electrical signals were unaffected.

Because graphene sheets are mechanically tough, they could also be placed directly on the surface of the brain to get a continuous measure of electrical activity — for example, to monitor neuron firing in the brains of those with epilepsy or to study fundamental brain activity. Today’s electrode arrays measure activity at a few hundred points, not continuously over the brain surface.

“One of the things that is amazing to me about this project is that electric fields mediate chemical interactions, mediate biophysical interactions — they mediate all sorts of processes in the natural world — but we never measure them. We measure current, and we measure voltage,” Balch said. “The ability to actually image electric fields gives you a look at a modality that you previously had little insight into.”

Here’s a link to and a citation for the paper,

Graphene Electric Field Sensor Enables Single Shot Label-Free Imaging of Bioelectric Potentials by Halleh B. Balch, Allister F. McGuire, Jason Horng, Hsin-Zon Tsai, Kevin K. Qi, Yi-Shiou Duh, Patrick R. Forrester, Michael F. Crommie, Bianxiao Cui, and Feng Wang. Nano Lett. 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.nanolett.1c00543 Publication Date: June 8, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

Tough colour and the flower beetle

The flower beetle Torynorrhina flammea. [downloaded from https://www.nanowerk.com/nanotechnology-news2/newsid=58269.php]

That is one gorgeous beetle and a June 17, 2021 news item on Nanowerk reveals that it features in a structural colour story (i.e., how structures rather than pigments create colour),

The unique mechanical and optical properties found in the exoskeleton of a humble Asian beetle have the potential to offer a fascinating new insight into how to develop new, effective bio-inspired technologies.

Pioneering new research by a team of international scientists, including Professor Pete Vukusic from the University of Exeter, has revealed a distinctive, and previously unknown property within the carapace of the flower beetle – a member of the scarab beetle family.

The study showed that the beetle has small micropillars within the carapace – or the upper section of the exoskeleton – that give the insect both strength and flexibility to withstand damage very effectively.

Crucially, these micropillars are incorporated into highly regular layering in the exoskeleton that concurrently give the beetle an intensely bright metallic colour appearance.
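As an aside, for anyone wondering how regular layering alone can produce a bright metallic colour, here’s a toy structural-colour calculation of my own (illustrative values, not the beetle’s measured layer dimensions): an idealized two-material multilayer, sometimes called a Bragg stack, most strongly reflects the wavelength λ = 2(n1·d1 + n2·d2) at normal incidence,

# Toy structural-colour example: first-order reflection peak of an ideal
# two-material multilayer (Bragg stack) at normal incidence.

def bragg_peak_nm(n1, d1_nm, n2, d2_nm):
    """Peak reflected wavelength in nm: lambda = 2 * (n1*d1 + n2*d2)."""
    return 2.0 * (n1 * d1_nm + n2 * d2_nm)

# Hypothetical chitin-like layers tuned to reflect red light.
print(f"Peak reflection near {bragg_peak_nm(1.56, 110, 1.40, 100):.0f} nm")  # ~623 nm

Change the layer thicknesses by a few tens of nanometres and that peak slides across the visible spectrum; that tunability is what makes layered microstructure such an effective source of colour, no pigment required.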

A June 18, 2021 University of Exeter press release (also on EurekAlert but published June 17, 2021), delves further into the researchers’ new insights,

For this new study, the scientists used sophisticated modelling techniques to determine which of the two functions – very high mechanical strength or conspicuously bright colour – were more important to the survival of the beetle.

They found that although these micropillars do create a highly enhanced toughness of the beetle shell, they were most beneficial for optimising the scattering of coloured light that generates its conspicuous appearance.

The research is published this week in the leading journal, Proceedings of the National Academy of Sciences, PNAS.

Professor Vukusic, one of three leads of the research along with Professor Li at Virginia Tech and Professor Kolle at MIT [Massachusetts Institute of Technology], said: “The astonishing insights generated by this research have only been possible through close collaborative work between Virginia Tech, MIT, Harvard and Exeter, in labs that trailblaze the fields of materials, mechanics and optics. Our follow-up venture to make use of these bio-inspired principles will be an even more exciting journey.”

The seeds of the pioneering research were sown more than 16 years ago as part of a short project created by Professor Vukusic in the Exeter undergraduate Physics labs. Those early tests and measurements, made by enthusiastic undergraduate students, revealed the possibility of intriguing multifunctionality.

The original students examined the form and structure of beetles’ carapace to try to understand the simple origin of their colour. They noticed for the first time, however, the presence of strength-inducing micropillars.

Professor Vukusic ultimately carried these initial findings to collaborators Professor Ling Li at Virginia Tech and Professor Mathias Kolle at Harvard and then MIT, who specialise in the materials sciences and applied optics. Using much more sophisticated measurement and modelling techniques, the combined research team was also able to confirm the unique role played by the micropillars in enhancing the beetles’ strength and toughness without compromising their intense metallic colour.

The results from the study could also help inspire a new generation of bio-inspired materials, as well as inform more traditional evolutionary research.

By understanding which of the functions provides the greater benefit to these beetles, scientists can develop new techniques to replicate and reproduce the exoskeleton structure, while ensuring that it has brilliant colour appearance with highly effective strength and toughness.

Professor Vukusic added: “Such natural systems as these never fail to impress with the way in which they perform, be it optical, mechanical or in another area of function. The way in which their optical or mechanical properties appear highly tolerant of all manner of imperfections too, continues to offer lessons to us about scientific and technological avenues we absolutely should explore. There is exciting science ahead of us on this journey.”

Here’s a link to and a citation for the paper,

Microstructural design for mechanical–optical multifunctionality in the exoskeleton of the flower beetle Torynorrhina flammea by Zian Jia, Matheus C. Fernandes, Zhifei Deng, Ting Yang, Qiuting Zhang, Alfie Lethbridge, Jie Yin, Jae-Hwang Lee, Lin Han, James C. Weaver, Katia Bertoldi, Joanna Aizenberg, Mathias Kolle, Pete Vukusic, and Ling Li. PNAS June 22, 2021 118 (25) e2101017118; DOI: https://doi.org/10.1073/pnas.2101017118

This paper is behind a paywall.