Tag Archives: Neal Stephenson

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is heavily hyped despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Lab) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight how confusing I find the use of ‘metaverses’ vs. ‘worlds’; the words are sometimes used as synonyms and sometimes as distinct terms. We all shift terms like this in all sorts of conversations, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle doesn’t offer any track record for his predictions, but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
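For readers who like to see ideas made concrete, here’s a toy sketch of my own (not The Verge’s) of the ‘permanent receipt’ idea in Python. Everything in it is hypothetical and deliberately simplified; a real NFT lives on a blockchain (e.g., as an ERC-721 token, whose standard does include an ownerOf lookup), not in an in-memory dictionary,

```python
# A toy, in-memory illustration of the "NFT as permanent receipt" idea
# described above. The names here (VirtualGood, NFTLedger) are hypothetical;
# a real NFT is recorded on a blockchain, not in a Python dict.

from dataclasses import dataclass


@dataclass(frozen=True)
class VirtualGood:
    """Metadata describing the item the token points at."""
    name: str        # e.g. "virtual shirt"
    platform: str    # platform where the item was first issued
    asset_uri: str   # where the 3D asset / image actually lives


class NFTLedger:
    """Maps token IDs to owners, loosely mimicking ERC-721's ownerOf/transfer."""

    def __init__(self):
        self._owners: dict[int, str] = {}
        self._goods: dict[int, VirtualGood] = {}
        self._next_id = 0

    def mint(self, owner: str, good: VirtualGood) -> int:
        token_id = self._next_id
        self._next_id += 1
        self._owners[token_id] = owner
        self._goods[token_id] = good
        return token_id

    def owner_of(self, token_id: int) -> str:
        return self._owners[token_id]

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        if self._owners[token_id] != sender:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = recipient


# Buy a shirt on "Platform A"; any other platform that trusts the same
# ledger can check ownership and render its own copy of the shirt.
ledger = NFTLedger()
shirt = VirtualGood("virtual shirt", "Metaverse Platform A",
                    "https://example.com/shirt.glb")
token = ledger.mint("alice", shirt)
assert ledger.owner_of(token) == "alice"
ledger.transfer(token, "alice", "bob")
print(ledger.owner_of(token))  # "bob"
```

The point of the sketch is the separation of concerns: the ledger records only who owns which token, while each platform decides for itself how to render the underlying virtual good.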

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.

Facebook’s multiverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned, Facebook is once again having some credibility issues, according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement, one not mentioned in the other two articles about the rebranding (Note: A link has been removed),

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email, David Troya-Alvarez of Facebook’s Corporate Communications Canada told Daily Hive, “We don’t comment on rumour or speculation,” in regard to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international law firm Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page, including a look at Social Media Regulation and Intellectual Property Rights.

One other thing: according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest law firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summary description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content that can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment; XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description, from the TransforMR page on YouTube, by one of the researchers, Mohamed Kari, of the video (above) and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2021),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
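For the curious, here’s a heavily simplified sketch, my own illustration and not the TransforMR team’s actual pipeline, of the general idea behind pose-aware object substitution: once some detector has estimated a real object’s pose (rotation and translation) and rough size, the system builds a transform that drops a virtual stand-in into the scene at the same position, orientation, and scale,

```python
# A heavily simplified sketch of the *general* idea behind pose-aware object
# substitution -- not the TransforMR team's method. Assume an upstream
# detector has already given us a real object's 6-DoF pose (rotation matrix R,
# translation t) and rough size; we place a virtual stand-in at the same pose.

import numpy as np


def substitution_transform(R: np.ndarray, t: np.ndarray,
                           real_size: float, virtual_size: float) -> np.ndarray:
    """Build a 4x4 model matrix that places the virtual object at the
    detected object's position/orientation, scaled to match its size."""
    s = real_size / virtual_size   # make the stand-in occupy the same volume
    M = np.eye(4)
    M[:3, :3] = R * s              # rotation plus uniform scale
    M[:3, 3] = t                   # translation in camera/world space
    return M


# Example: a detected car 5 m in front of the camera, 4 m long, replaced by
# a virtual model whose native length is 2 units.
R = np.eye(3)                      # identity rotation, for simplicity
t = np.array([0.0, 0.0, 5.0])
M = substitution_transform(R, t, real_size=4.0, virtual_size=2.0)
print(M)                           # hand this to a renderer as the model matrix
```

A real system like TransforMR has to do far more (object detection, occlusion, lighting, video see-through rendering on a mobile device), but this kind of geometric bookkeeping is the part that keeps the substitute ‘pose-aware’.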

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration involving Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here.

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, in which various people and companies are attempting to create either a multiplicity of metaverses or a single metaverse that effectively replaces the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Can tattoos warn you of health dangers?

I think I can safely say that Carson J. Bruns, a professor at the University of Colorado Boulder, is an electronic tattoo enthusiast. His Sept. 24, 2020 essay on electronic tattoos for The Conversation (also found on Fast Company) outlines a very rosy view of a future where health monitoring is constant and visible on your skin (Note: Links have been removed),

In the sci-fi novel “The Diamond Age” by Neal Stephenson, body art has evolved into “constantly shifting mediatronic tattoos” – in-skin displays powered by nanotech robopigments. In the 25 years since the novel was published, nanotechnology has had time to catch up, and the sci-fi vision of dynamic tattoos is starting to become a reality.

The first examples of color-changing nanotech tattoos have been developed over the past few years, and they’re not just for body art. They have a biomedical purpose. Imagine a tattoo that alerts you to a health problem signaled by a change in your biochemistry, or to radiation exposure that could be dangerous to your health.

You can’t walk into a doctor’s office and get a dynamic tattoo yet, but they are on the way. …

In 2017, researchers tattooed pigskin, which had been removed from the pig, with molecular biosensors that use color to indicate sodium, glucose or pH levels in the skin’s fluids.

In 2019, a team of researchers expanded on that study to include protein sensing and developed smartphone readouts for the tattoos. This year, they also showed that electrolyte levels could be detected with fluorescent tattoo sensors.

In 2018, a team of biologists developed a tattoo made of engineered skin cells that darken when they sense an imbalance of calcium caused by certain cancers. They demonstrated the cancer-detecting tattoo in living mice.

My lab is looking at tech tattoos from a different angle. We are interested in sensing external harms, such as ultraviolet radiation. UV exposure in sunlight and tanning beds is the main risk factor for all types of skin cancer. Nonmelanoma skin cancers are the most common malignancies in the U.S., Australia and Europe.

I served as the first human test subject for these tattoos. I created “solar freckles” on my forearm – invisible spots that turned blue under UV exposure and reminded me when to wear sunscreen. My lab is also working on invisible UV-protective tattoos that would absorb UV light penetrating through the skin, like a long-lasting sunscreen just below the surface. We’re also working on “thermometer” tattoos using temperature-sensitive inks. Ultimately, we believe tattoo inks could be used to prevent and diagnose disease.

Temporary transfer tattoos are also undergoing a high-tech revolution. Wearable electronic tattoos that can sense electrophysiological signals like heart rate and brain activity or monitor hydration and glucose levels from sweat are under development. They can even be used for controlling mobile devices, for example shuffling a music playlist at the touch of a tattoo, or for luminescent body art that lights up the skin.

The advantage of these wearable tattoos is that they can use battery-powered electronics. The disadvantage is that they are much less permanent and comfortable than traditional tattoos. Likewise, electronic devices that go underneath the skin are being developed by scientists, designers and biohackers alike, but they require invasive surgical procedures for implantation.

Tattoos injected into the skin offer the best of both worlds: minimally invasive, yet permanent and comfortable. [emphasis mine] New needle-free tattooing methods that fire microscopic ink droplets into the skin are now in development. Once perfected they will make tattooing quicker and less painful.

The color-changing tattoos in development are also going to open the door to a new kind of dynamic body art. Now that tattoo colors can be changed by an electromagnetic signal, you’ll soon be able to “program” your tattoo’s design, or switch it on and off. You can proudly display your neck tattoo at the motorcycle rally and still have clear skin in the courtroom.

As researchers develop dynamic tattoos, they’ll need to study the safety [emphasis mine] of the high-tech inks. As it is, little is known about the safety of the more than 100 different pigments used in normal tattoo inks [emphasis mine]. The U.S. Food and Drug Administration has not exercised regulatory authority over tattoo pigments, citing other competing public health priorities and a lack of evidence of safety problems with the pigments. So U.S. manufacturers can put whatever they want in tattoo inks [emphasis mine] and sell them without FDA approval.

A wave of high-tech tattoos is slowly upwelling, and it will probably keep rising for the foreseeable future. When it arrives, you can decide to surf or watch from the beach. If you do climb on board, you’ll be able to check your body temperature or UV exposure by simply glancing at one of your tattoos.

There are definitely some interesting possibilities, artistic, health, and medical, offered by electronic tattoos. As you may have guessed, I’m not quite the enthusiast that Dr. Bruns seems to be, but I could be persuaded, assuming there’s evidence to support the claims.
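Bruns mentions smartphone readouts for the colour-changing tattoos. As a thought experiment, here’s a toy sketch of my own of what the software side of such a readout might do: sample the tattoo’s colour from a photo and match it against a calibration table for a pH-sensitive ink. The calibration numbers below are invented for illustration; a real sensor would be calibrated per ink formulation,

```python
# A toy sketch of a smartphone "readout" for a colour-changing tattoo:
# sample the tattoo's colour from a photo and find the closest entry in a
# calibration table. The colours and pH values below are invented for
# illustration only.

calibration = [
    # (pH, (R, G, B)) -- hypothetical colours for a pH-sensitive ink
    (5.0, (220, 60, 60)),    # acidic: reddish
    (7.0, (150, 150, 70)),   # neutral: olive
    (9.0, (60, 90, 200)),    # basic: bluish
]


def estimate_ph(pixel: tuple[int, int, int]) -> float:
    """Return the pH whose calibration colour is nearest the sampled pixel."""
    def dist2(colour):
        return sum((a - b) ** 2 for a, b in zip(pixel, colour))
    ph, _ = min(calibration, key=lambda entry: dist2(entry[1]))
    return ph


print(estimate_ph((148, 155, 75)))  # 7.0: closest to the neutral entry
```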

Canadian military & a 2nd military futures book from Karl Schroeder (2 of 2)

Part 1 of this two-part series featured some information about Schroeder’s first book written for the Canadian military, the nanotechnology-featuring ‘Crisis in Zefra’, along with a lengthy excerpt from Schroeder’s second military scenario book, ‘Crisis in Urlia’. In searching for information about this second book, I found a guest editorial for THE CANADIAN ARMY JOURNAL 14.3 2012 by then-Colonel R.N.H. Dickson, CD,

Beyond those activities, the CALWC [Canadian Army Land Warfare Centre] continues its foundational research and publication activities, including the ongoing serial publication of The Canadian Army Journal, the JADEX Papers, as well as other special studies on subjects such as the comprehensive approach to operations, cyber warfare, the future network, S&T trends, and Army operations in the Arctic. The upcoming publication of a novel entitled Crisis in Urlia, a design fiction tool examining alternate future operations, will assist the Army in probing new ideas creatively while highlighting the possible risks and opportunities in an ever-changing security environment. [emphasis mine]

Of course, the future of the Army does not exclusively belong to the capability development community, be that the CALWC, the extended virtual warfare centre, or our broader joint and allied partners. Rather, the future of the Army belongs to each of its members, and no one organization has a monopoly on innovative thought. I encourage you to learn more about the CALWC and the Army’s capability development initiatives, and then be prepared to contribute to the conversation. The Canadian Army Journal offers a great forum to do both.

You can download ‘Crisis in Urlia’ from this webpage for Government of Canada publications or you can try this PDF of the novel, which has a publication date of 2014. I gather the book took longer to write than was initially anticipated.

As for Karl Schroeder, his website homepage notes that he’s back from an early October 2014 visit to the US White House,

The White House Office of Science and Technology Policy invited some of the Hieroglyph authors to present on future possibilities on October 2, 2014.  There I am on the end of the line.  (More details soon.)

For anyone not familiar with the Hieroglyph project, here are a few details from my May 7, 2013 posting (scroll down about 75% of the way),

The item which moved me to publish today (May 7, 2013), Can Science Fiction Writers Inspire The World To Save Itself?, by Ariel Schwartz concerns the Hieroglyph project at Arizona State University,

Humanity’s lack of a positive vision for the future can be blamed in part on an engineering culture that’s more focused on incrementalism (and VC funding) than big ideas. But maybe science fiction writers should share some of the blame. That’s the idea that came out of a conversation in 2011 between science fiction author Neal Stephenson and Michael Crow, the president of Arizona State University.

If science fiction inspires scientists and engineers to create new things–Stephenson believes it can–then more visionary, realistic sci-fi stories can help create a better future. Hence the Hieroglyph experiment, launched this month as a collaborative website for researchers and writers. Many of the stories created on the platform will go into a HarperCollins anthology of fiction and non-fiction, set to be published in 2014.

Here’s more about the Hieroglyph project from the About page,

Inspiration is a small but essential part of innovation, and science fiction stories have been a seminal source of inspiration for innovators over many decades. In his article entitled “Innovation Starvation,” Neal Stephenson calls for a return to inspiration in contemporary science fiction. That call resonated with so many and so deeply that Project Hieroglyph was born shortly thereafter.

The name of Project Hieroglyph comes from the notion that certain iconic inventions in science fiction stories serve as modern “hieroglyphs” – Arthur Clarke’s communications satellite, Robert Heinlein’s rocket ship that lands on its fins, Isaac Asimov’s robot, and so on. Jim Karkanias of Microsoft Research described hieroglyphs as simple, recognizable symbols on whose significance everyone agrees.

The Hieroglyph project was mentioned here most recently in a Sept. 1, 2014 posting (scroll down about 25% of the way) on the occasion of its book publication, where Schroeder’s ‘Degrees of Freedom’ is listed in the table of contents.

The book is one of a series of projects and events organized by Arizona State University’s Center for Science and the Imagination. You can find information about projects and videos of recent events on the homepage.

As for Karl Schroeder, there’s this from the About page on his kschroeder.com website,

I’m one of Canada’s most popular science fiction and fantasy authors. I divide my time between writing fiction and analyzing, conducting workshops and speaking on the future impact of science and technology on society.  As the author of nine novels I’ve been translated into French, German, Spanish, Russian and Japanese.  In addition to my more traditional fiction, I’ve pioneered a new mode of writing that blends fiction and rigorous futures research—my influential short novels Crisis in Zefra (2005) and Crisis in Urlia (2011) are innovative ‘scenario fictions’ commissioned by the Canadian army as study and research tools.  While doing all of this I’m also working to complete a Master’s degree in Strategic Foresight and Innovation at OCAD [Ontario College of Art and Design] University in Toronto.

I married Janice Beitel in April 2001–we tied the knot in a tropical bird sanctuary on the shore of the Indian Ocean, Kalbarri Western Australia.  Our daughter Paige was born in May 2003.  We live in East Toronto where I’m writing about the evolution of post-bureaucratic governance in the 2025-2035 period.

Happy Reading!

Science for your imagination

David Bruggeman, over on his Pasco Phronesis blog, has two postings which highlight different approaches to communicating about science. His Aug. 31, 2014 posting features audio plays (Note: Links have been removed),

L.A. Theatre Works makes a large number of their works available via audio. Its Relativity series (H/T Scirens) is a collection of (at this writing) 25 plays with science and technology either as themes and/or as forces driving the action of the play. You’re certainly familiar with War of the Worlds, and you may have heard of the plays Arcadia and Copenhagen. The science covered in these plays is from a number of different fields, and some works will try to engage the audience on the social implications of how science is conducted. The casts have many familiar faces as well. …

You can find the Relativity Series website here where the home page features these (amongst others),

COMPLETENESS

Jason Ritter and Mandy Siegfried star in a new play about love between gun-shy young scientists.

BREAKING THE CODE

The story of Alan Turing, an early pioneer in computer science, and his struggle to live authentically while serving his country.

THE DOCTOR’S DILEMMA

A respected physician must choose between the lives of two terminally ill men in George Bernard Shaw’s sharp-tongued satire of the medical profession.

THE EXPLORERS CLUB

It’s London, 1879, and the members of the Explorers Club must confront their most lethal threat yet: the admission of a woman into their scientific ranks.

THE GREAT TENNESSEE MONKEY TRIAL

The Scopes Monkey Trial of 1925 comes to life as William Jennings Bryan and Clarence Darrow square off over human evolution and the divide between faith and science.

PHOTOGRAPH 51

Miriam Margolyes stars as Rosalind Franklin, whose work led directly to the discovery of the DNA “double helix.”

DOCTOR CERBERUS

A teenage misfit is coming of age in the comforting glow of late-night horror movies. But when reality begins to intrude on his fantasy world, he realizes that hiding in the closet is no longer an option.

David’s Aug. 26, 2014 posting features Hieroglyph, a project from Arizona State University’s (ASU) Center for Science and the Imagination (Note: A link has been removed),

Next month [Sept. 2014] William Morrow will release Hieroglyph, a collection of science fiction short stories edited by the Director of the Center for Science and the Imagination at Arizona State University.  The name of the collection is taken from a theory advanced by science fiction writer Neil [Neal] Stephenson, and a larger writing project of which this book is a part.  The Hieroglyph Theory describes the kind of science fiction that can motivate scientists and engineers to create a future.  A Hieroglyph story provides a complete picture of the future, with a compelling innovation as part of that future.  An example would be the Asimov model of robotics.

Hieroglyph was first mentioned here in a May 7, 2013 posting,

The item which moved me to publish today (May 7, 2013), Can Science Fiction Writers Inspire The World To Save Itself?, by Ariel Schwartz concerns the Hieroglyph project at Arizona State University,

Humanity’s lack of a positive vision for the future can be blamed in part on an engineering culture that’s more focused on incrementalism (and VC funding) than big ideas. But maybe science fiction writers should share some of the blame. That’s the idea that came out of a conversation in 2011 between science fiction author Neal Stephenson and Michael Crow, the president of Arizona State University.

If science fiction inspires scientists and engineers to create new things–Stephenson believes it can–then more visionary, realistic sci-fi stories can help create a better future. Hence the Hieroglyph experiment, launched this month as a collaborative website for researchers and writers. Many of the stories created on the platform will go into a HarperCollins anthology of fiction and non-fiction, set to be published in 2014.

As it turns out, William Morrow Books is a HarperCollins imprint. You can read a bit more about the book and preview some of the contents from the Scribd.com Hieroglyph webpage, which includes this table of contents (much better looking in the Scribd version),

CONTENTS
FOREWORD—LAWRENCE M. KRAUSS
PREFACE: INNOVATION STARVATION—NEAL STEPHENSON
ACKNOWLEDGMENTS
INTRODUCTION: A BLUEPRINT FOR BETTER DREAMS—ED FINN AND KATHRYN CRAMER
ATMOSPHÆRA INCOGNITA—NEAL STEPHENSON
GIRL IN WAVE : WAVE IN GIRL—KATHLEEN ANN GOONAN
BY THE TIME WE GET TO ARIZONA—MADELINE ASHBY
THE MAN WHO SOLD THE MOON—CORY DOCTOROW
JOHNNY APPLEDRONE VS. THE FAA—LEE KONSTANTINOU
DEGREES OF FREEDOM—KARL SCHROEDER
TWO SCENARIOS FOR THE FUTURE OF SOLAR ENERGY—ANNALEE NEWITZ
A HOTEL IN ANTARCTICA—GEOFFREY A. LANDIS
PERIAPSIS—JAMES L. CAMBIAS
THE MAN WHO SOLD THE STARS—GREGORY BENFORD
ENTANGLEMENT—VANDANA SINGH
ELEPHANT ANGELS—BRENDA COOPER
COVENANT—ELIZABETH BEAR
QUANTUM TELEPATHY—RUDY RUCKER
TRANSITION GENERATION—DAVID BRIN
THE DAY IT ALL ENDED—CHARLIE JANE ANDERS
TALL TOWER—BRUCE STERLING
SCIENCE AND SCIENCE FICTION: AN INTERVIEW WITH PAUL DAVIES
ABOUT THE EDITORS
ABOUT THE CONTRIBUTORS

Good on the organizers for being able to follow through on their promise to have something published by HarperCollins in 2014.

This book is not the only activity at ASU’s Center for Science and the Imagination. In November 2014, Margaret Atwood, an internationally known Canadian novelist, will visit the center (from the center’s home page),

Internationally renowned novelist and environmental activist Margaret Atwood will visit Arizona State University this November to discuss the relationship between art and science, and the importance of creative writing and imagination for addressing social and environmental challenges.

Atwood’s visit will mark the launch of the Imagination and Climate Futures Initiative, a new collaborative venture at ASU among the Rob and Melani Walton Sustainability Solutions Initiatives, the Center for Science and the Imagination and the Virginia G. Piper Center for Creative Writing. Atwood, author of the MaddAddam trilogy of novels that have become central to the emerging literary genre of climate fiction, or “CliFi,” will offer the inaugural lecture for the initiative on Nov. 5.

“We are proud to welcome Margaret Atwood, one of the world’s most celebrated living writers, to ASU and engage her in these discussions around climate, science and creative writing,” said Jewell Parker Rhodes, founding artistic director for the Virginia G. Piper Center for Creative Writing and the Piper Endowed Chair at Arizona State University. “A poet, novelist, literary critic and essayist, Ms. Atwood epitomizes the creative and professional excellence our students aspire to achieve.”

Focusing in particular on CliFi, the Imagination and Climate Futures Initiative will explore how imaginative skills can be harnessed to create solutions to climate challenges, and question whether and how creative writing can affect political decisions and behavior by influencing our social, political and scientific imagination.

“ASU is a leader in exploring how creativity and the imagination drive the arts, sciences, engineering and humanities,” said Ed Finn, director of the Center for Science and the Imagination. “The Imagination and Climate Futures Initiative will use the thriving CliFi genre to ask the hard questions about our cultural relationship to climate change and offer compelling visions for sustainable futures.”

The multidisciplinary Initiative will bring together researchers, artists, writers, decision-makers and the public to engage in research projects, teaching activities and events at ASU and beyond. The three ASU programs behind the Imagination and Climate Futures Initiative have a track record for academic and public engagement around innovative programs, including the Sustainability Solutions Festival; Emerge; and the Desert Nights, Rising Stars Writers Conference.

“Imagining how the future could unfold in a climatically changing world is key to making good policy and governance decisions today,” said Manjana Milkoreit, a postdoctoral fellow with the Walton Sustainability Solutions Initiatives. “We need to know more about the nature of imagination, its relationship to scientific knowledge and the effect of cultural phenomena such as CliFi on our imaginative capabilities and, ultimately, our collective ability to create a safe and prosperous future.”

Kind of odd they don’t mention Atwood’s Canadian, eh?

There’s lots more on the page, which features news bits and articles, as well as event information. Coincidentally, another Canuck (assuming he retains his citizenship after several years in the US) visited the center on June 7, 2014 to participate in an event billed as ‘An evening with Nathan Fillion and friends; serenity [Joss Whedon’s TV series and movie], softwire, and science of science fiction’. A June 21, 2014 piece (on the center home page) by Joey Eschrich describes the night in some detail,

Nathan Fillion may very well be the friendliest, most unpretentious spaceship captain, mystery-solving author and science fiction heartthrob in the known universe. The “ruggedly handsome” star of TV’s “Castle” was the delight of fans as he headlined a fundraiser on the Arizona State University campus in Tempe, June 7 [2014].

The “Serenity, Softwire, and the Science of Science Fiction” event, benefiting the ASU Department of English and advertised as an “intimate evening for a small group of 50 people,” included considerable face-time with Fillion, who in-person proved surprisingly similar to the witty, charming and compassionate characters he plays on television and in film.

Starring with Fillion in the ASU evening’s festivities were science fiction author PJ Haarsma (a close friend of Fillion’s) along with ASU professors Ed Finn, director of the Center for Science and the Imagination; Peter Goggin, a literacy expert in the Department of English and senior scholar with the Global Institute of Sustainability; and School of Earth and Space Exploration faculty Jim Bell, an astronomer, and Sara Imari Walker, an astrobiologist. In addition to the Department of English, sponsors included ASU’s College of Liberal Arts and Sciences and Center for Science and the Imagination.

The event began with each panelist explaining how he or she arrived at his or her respective careers, and whether science or science fiction played a role in that journey. All panelists pointed to reading and imagining as formational to their senses of themselves and their places in life.

A number of big questions were posed to the panelists: “What is the likelihood of life on other planets?” and “What is the physical practicality of traveling to other planets?” ASU scientists Bell and Walker deftly fielded these complex planetary inquiries, while Goggin and Finn explained how the intersection of science and humanities – embodied in science fiction books and film – encouraged children and scholars alike to think creatively about the future. Attendees reported that they found the conversation “intellectually stimulating and thought-provoking as well as fun and entertaining.”

During the ensuing discussion, Haarsma and Fillion bantered back and forth comically, as we are told they often do in real life, at one point raising the group’s awareness of the mission they have shared for many years: promoting reading in the lives of young people. The two founded the Kids Need to Read Foundation, which provides books to underserved schools and libraries. Fillion, the son of retired English teachers, attended Concordia University of Alberta*, where he was a member of the Kappa Alpha Society, an organization that emphasizes literature and debate. His brother, Jeff, is a highly respected school principal. Fillion’s story about the importance of books and reading in his childhood home was a rare moment of seriousness for the actor.

The most delightful aspect of the evening, according to guests, was the good nature of Fillion himself, who arrived with Haarsma earlier than expected and stayed later than scheduled. Fillion spent several minutes with each individual or group of friends, laughing with them, using their phone cameras to snap group “selfies” and showing a genuine interest in getting to know them.

Audience members each received copies of science fiction books: Haarsma’s teen novel, “Softwire: Virus on Orbis I,” and the Tomorrow Project science fiction anthology “Cautions, Dreams & Curiosities,” which was co-produced by the Center for Science and the Imagination with Intel and the Society for Science & the Public. Guests presented their new books and assorted other items to Fillion and Haarsma for autographing and a bit more conversation before the evening came to a close. It was then time for Fillion to head back downtown to his hotel, but not before one cadre of friends “asked him to take one last group shot of us at the end of the night, to which he replied with a smile, ‘I thought you’d never ask.’”

*Corrected on February 4, 2020: I originally stated that “Concordia University is in the province of Québec not Alberta which is home to the University of Calgary and the University of Alberta.” That is not entirely correct. There is a Concordia University in Alberta as well as in Québec. However, the Concordia in Alberta is properly referred to as Concordia University of Edmonton (its Wikipedia entry proudly lists Nathan Fillion as one of its notable alumni).

The evening with Nathan Fillion and friends was a fundraiser; participants were charged $250 each for one of 50 seats at the event, which means they raised $12,500 minus any expenses incurred. Good for them!

For anyone unfamiliar with P.J. Haarsma’s oeuvre, there’s this Wikipedia entry for The Softwire.

The age of the ‘nano-pixel’

As mentioned here before, ‘The Diamond Age: Or, A Young Lady’s Illustrated Primer’, a 1995 novel by Neal Stephenson, featured in its opening chapter a flexible, bendable, rollable newspaper screen. It’s one of those devices promised by ‘nano evangelists’ that never quite seems to come into existence. However, ‘hope springs eternal’, as they say, and a team from the University of Oxford claims to be bringing us one step closer.

From a July 10, 2014 University of Oxford press release (also on EurekAlert but dated July 9, 2014 and on Azonano as a July 10, 2014 news item),

A new discovery will make it possible to create pixels just a few hundred nanometres across that could pave the way for extremely high-resolution and low-energy thin, flexible displays for applications such as ‘smart’ glasses, synthetic retinas, and foldable screens.

A team led by Oxford University scientists explored the link between the electrical and optical properties of phase change materials (materials that can change from an amorphous to a crystalline state). They found that by sandwiching a seven nanometre thick layer of a phase change material (GST) between two layers of a transparent electrode they could use a tiny current to ‘draw’ images within the sandwich ‘stack’.

Here’s a series of images the researchers have created using this technology,

Still images drawn with the technology: at around 70 micrometres across, each image is smaller than the width of a human hair. Courtesy University of Oxford

The press release offers a technical description,

Initially still images were created using an atomic force microscope but the team went on to demonstrate that such tiny ‘stacks’ can be turned into prototype pixel-like devices. These ‘nano-pixels’ – just 300 by 300 nanometres in size – can be electrically switched ‘on and off’ at will, creating the coloured dots that would form the building blocks of an extremely high-resolution display technology.

‘We didn’t set out to invent a new kind of display,’ said Professor Harish Bhaskaran of Oxford University’s Department of Materials, who led the research. ‘We were exploring the relationship between the electrical and optical properties of phase change materials and then had the idea of creating this GST ‘sandwich’ made up of layers just a few nanometres thick. We found that not only were we able to create images in the stack but, to our surprise, thinner layers of GST actually gave us better contrast. We also discovered that altering the size of the bottom electrode layer enabled us to change the colour of the image.’

The layers of the GST sandwich are created using a sputtering technique where a target is bombarded with high energy particles so that atoms from the target are deposited onto another material as a thin film.

‘Because the layers that make up our devices can be deposited as thin films they can be incorporated into very thin flexible materials – we have already demonstrated that the technique works on flexible Mylar sheets around 200 nanometres thick,’ said Professor Bhaskaran. ‘This makes them potentially useful for ‘smart’ glasses, foldable screens, windshield displays, and even synthetic retinas that mimic the abilities of photoreceptor cells in the human eye.’

Peiman Hosseini of Oxford University’s Department of Materials, first author of the paper, said: ‘Our models are so good at predicting the experiment that we can tune our prototype ‘pixels’ to create any colour we want – including the primary colours needed for a display. One of the advantages of our design is that, unlike most conventional LCD screens, there would be no need to constantly refresh all pixels, you would only have to refresh those pixels that actually change (static pixels remain as they were). This means that any display based on this technology would have extremely low energy consumption.’

The research suggests that flexible paper-thin displays based on the technology could have the capacity to switch between a power-saving ‘colour e-reader mode’, and a backlit display capable of showing video. Such displays could be created using cheap materials and, because they would be solid-state, promise to be reliable and easy to manufacture. The tiny ‘nano-pixels’ make it ideal for applications, such as smart glasses, where an image would be projected at a larger size as, even enlarged, they would offer very high-resolution.

Professor David Wright of the Department of Engineering at the University of Exeter, co-author of the paper, said: ‘Along with many other researchers around the world we have been looking into the use of these GST materials for memory applications for many years, but no one before thought of combining their electrical and optical functionality to provide entirely new kinds of non-volatile, high-resolution, electronic colour displays – so our work is a real breakthrough.’

The phase change material used was the alloy Ge2Sb2Te5 (Germanium-Antimony-Tellurium or GST) sandwiched between electrode layers made of indium tin oxide (ITO).
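The energy argument in that quoted material (refresh only the pixels that actually change) is easy to make concrete. Here’s a toy Python sketch of my own; the array size and the ‘pulse’ accounting are invented for illustration and have nothing to do with the Oxford group’s actual devices,

```python
import numpy as np

# Toy model of a phase-change pixel array (hypothetical, for illustration only):
# each pixel holds a binary state -- 0 = amorphous, 1 = crystalline -- and a
# state change costs one "write pulse". Real GST pixels involve thin-film
# optics and precise current pulses; none of that is modelled here.

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 2, size=(8, 8))   # current frame
frame_b = frame_a.copy()
frame_b[2:4, 2:4] ^= 1                      # a small region changes

# A conventional refresh rewrites every pixel; a phase-change display
# only needs pulses where the state actually flips (non-volatile pixels
# hold their state for free).
full_refresh_pulses = frame_b.size
selective_pulses = int(np.count_nonzero(frame_a != frame_b))

print(f"full refresh: {full_refresh_pulses} pulses")
print(f"selective refresh: {selective_pulses} pulses")  # 4 in this example
```

Because the phase states are non-volatile, unchanged pixels cost nothing to hold, which is where the extremely-low-energy claim comes from.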

I gather the researchers are looking for investors (from the press release),

Whilst the work is still in its early stages, realising its potential, the Oxford team has filed a patent on the discovery with the help of Isis Innovation, Oxford University’s technology commercialisation company. Isis is now discussing the displays with companies who are interested in assessing the technology, and with investors.

Here’s a link to and a citation for the paper,

An optoelectronic framework enabled by low-dimensional phase-change films by Peiman Hosseini, C. David Wright & Harish Bhaskaran. Nature 511, 206–211 (10 July 2014). doi:10.1038/nature13487. Published online 9 July 2014.

This paper is behind a paywall.

The importance of science fiction for the future

I started this post in March 2013 but haven’t had time till now (May 7, 2013) to flesh it out. It was a Mar. 28, 2013 posting by Jessica Bland and Lydia Nicholas for the UK Guardian science blogs that inspired me (Note: Links have been removed),

Science fiction and real-world innovation have always fed off each other. The history of the electronic book shows us things are more complicated than fiction predicting fact [.]

Imagine a new future. No, not that tired old vision of hoverboards and robot butlers: something really new and truly strange. It’s hard. It’s harder still to invent the new things that will fill this entirely new world. New ideas that do not fit or that come from unfamiliar places are often ignored. Hedy Lamarr [a major movie sex symbol in her day] and George Antheil’s [musician] frequency-hopping patent was ignored for 20 years because the US Navy could not believe that Hollywood artists could invent a method of secure communication. Many of Nikola Tesla’s inventions and his passionate belief in the importance of renewable energy were ignored by a world that could not imagine a need for them.

Stories open our eyes to the opportunities and hazards of new technologies. By articulating our fears and desires for the future, stories help shape what is to come – informing public debate, influencing regulation and inspiring inventors. And this makes it important that we do not just listen to the loudest voices.

Of course it isn’t as simple as mining mountains of pulp sci-fi for the schematics of the next rocket or the algorithms of the next Google. Arthur C. Clarke, often attributed with the invention of the communication satellite, firmly believed that these satellites would require crews. The pervasive connectivity that defines our world today would never have existed if every satellite needed to be manned.
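As an aside, the Lamarr/Antheil invention mentioned above is easy to sketch: transmitter and receiver derive the same pseudo-random hop schedule from a shared secret, so they stay in sync while an eavesdropper sees apparently random frequency changes. Here’s a toy Python illustration of mine (the seed is invented; the 88 channels nod to the piano-roll mechanism in their 1942 patent),

```python
import random

# Toy frequency-hopping sketch (illustrative only): transmitter and receiver
# derive the same pseudo-random channel schedule from a shared secret seed,
# so they stay in sync while the sequence looks random to an outside listener.
CHANNELS = list(range(88))   # the 1942 patent famously used 88 frequencies
SHARED_SEED = 1942           # hypothetical shared secret

def hop_schedule(seed, length):
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(length)]

tx = hop_schedule(SHARED_SEED, 10)
rx = hop_schedule(SHARED_SEED, 10)
assert tx == rx              # both ends agree on every hop
print(tx)
```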

The Guardian posting was occasioned by the publication of two research papers produced for NESTA, an organization unlike any in Canada or the US (as far as I know). Here’s a little more about NESTA from their FAQs page,

Nesta is an independent charity with a mission to help people and organisations bring great ideas to life. We do this by providing investments and grants and mobilising research, networks and skills.

Nesta receives funds from The Nesta Trust, which received the National Lottery endowment from the National Endowment for Science, Technology and the Arts.

The interest from this endowment is used to fund our activities. These activities must be used to promote the charitable objects of both the Nesta Trust and the Nesta charity. We also use the returns from Nesta investments, and income from working in partnership with others, to fund our work.

We don’t receive any ongoing general government funds to support our work.

On 1st April 2012 Nesta ceased being a Non-Departmental Public Body (NDPB) and became a charity (charity number 1144091).

We maintain our mission to carry out research into innovation and to further education, science, technology, the arts, public services, the voluntary sector and enterprise in various areas by encouraging and supporting innovation.

Nesta’s objectives are now set out in our ‘charitable objects’ which can be viewed here.

Nesta continues to operate at no cost to the Government or the taxpayer using return from the Nesta Trust.

In any event, NESTA commissioned two papers:

Imagining technology
Jon Turney
Nesta Working Paper 13/06
Issued: March 2013

Better Made Up: The Mutual Influence of Science Fiction and Innovation
Caroline Bassett, Ed Steinmueller, Georgina Voss
Nesta Working Paper 13/07
Issued: March 2013

For anyone who does not have time to read the NESTA papers, the Guardian’s post by Bland and Nicholas provides a good overview of the thinking which links science fiction with real innovation.

Around the same time I stumbled across the Bland/Nicholas post I also stumbled on a science fiction conference that is regularly held at the University of California Riverside.

The Eaton Science Fiction Conference was held Apr. 11 – 14, 2013 and the theme was “Science Fiction Media.” It’s a little late for this year but perhaps you want to start planning for next year. Here’s the Eaton Science Fiction Conference website. For those who’d like to get a feel for this conference, here’s a little more from the Mar. 27, 2013 news release by Bettye Miller,

… the 2013 conference will be largest in the 34-year history of the conference, said Melissa Conway, head of Special Collections and Archives of the UCR Libraries and conference co-organizer. It also is the first time the UCR Libraries and College of Humanities, Arts and Social Sciences have partnered with the Science Fiction Research Association, the largest and most prestigious scholarly organization in the field, to present the event.

Among the science fiction writers who will be presenting on different panels are: Larry Niven, author of “Ringworld” and a five-time winner of the Hugo Award and a Nebula; Gregory Benford, astrophysicist and winner of a Nebula Award and a United Nations Medal in Literature; David Brin, astrophysicist and two-time winner of the Hugo Award; André Bormanis, writer/producer for “Star Trek: Enterprise,” “Threshold,” “Eleventh Hour,” “Legend of the Seeker” and “Tron: Uprising”; Kevin Grazier, science adviser for “Battlestar Galactica,” “Defiance,” “Eureka” and “Falling Skies”; and James Gunn, winner of a Hugo Award and the 2007 Damon Knight Memorial Grand Master, presented for lifetime achievement as a writer of science fiction and/or fantasy by the Science Fiction and Fantasy Writers of America.

As for the impetus for this conference in Riverside, California, from the news release,

UCR is the home of the Eaton Collection of Science Fiction and Fantasy, the largest publicly accessible collection of its kind in the world. The collection embraces every branch of science fiction, fantasy, horror and utopian/dystopian fiction.

The collection, which attracts scholars from around the world, holds more than 300,000 items including English-language science fiction, fantasy and horror published in the 20th century and a wide range of works in Spanish, French, Russian, Chinese, Japanese, German, and a dozen other languages; fanzines; comic books; anime; manga; science fiction films and television series; shooting scripts; archives of science fiction writers; and science fiction collectibles and memorabilia.

In one of those odd coincidences we all experience from time to time, Ray Harryhausen, creator of the stop-motion model animation technique known as Dynamation, well loved for his work in special effects, and recipient of a lifetime achievement award at the 2013 conference, died today (May 7, 2013; Wikipedia entry).

The item which moved me to publish today (May 7, 2013), ‘Can Science Fiction Writers Inspire The World To Save Itself?’ by Ariel Schwartz, concerns the Hieroglyph project at Arizona State University,

Humanity’s lack of a positive vision for the future can be blamed in part on an engineering culture that’s more focused on incrementalism (and VC funding) than big ideas. But maybe science fiction writers should share some of the blame. That’s the idea that came out of a conversation in 2011 between science fiction author Neal Stephenson and Michael Crow, the president of Arizona State University.

If science fiction inspires scientists and engineers to create new things–Stephenson believes it can–then more visionary, realistic sci-fi stories can help create a better future. Hence the Hieroglyph experiment, launched this month as a collaborative website for researchers and writers. Many of the stories created on the platform will go into a HarperCollins anthology of fiction and non-fiction, set to be published in 2014.

Here’s more about the Hieroglyph project from the About page,

Inspiration is a small but essential part of innovation, and science fiction stories have been a seminal source of inspiration for innovators over many decades. In his article entitled “Innovation Starvation,” Neal Stephenson calls for a return to inspiration in contemporary science fiction. That call resonated with so many and so deeply that Project Hieroglyph was born shortly thereafter.

The name of Project Hieroglyph comes from the notion that certain iconic inventions in science fiction stories serve as modern “hieroglyphs” – Arthur Clarke’s communications satellite, Robert Heinlein’s rocket ship that lands on its fins, Isaac Asimov’s robot, and so on. Jim Karkanias of Microsoft Research described hieroglyphs as simple, recognizable symbols on whose significance everyone agrees.

While the mission of Project Hieroglyph begins with creative inspiration, our hope is that many of us will be genuinely inspired towards realization.

This project is an initiative of Arizona State University’s Center for Science and the Imagination.

It’s great seeing this confluence of thinking about science fiction, innovation, and science. I’m pretty sure we knew this in the 19th century (and probably before that too) and I just hope we don’t forget it again.

What is a diamond worth?

A couple of diamond-related news items have crossed my path lately, causing me to consider diamonds and their social implications. I’ll start with the first of them: according to an April 4, 2012 news item on physorg.com, a quantum computer has been built inside a diamond (from the news item),

Diamonds are forever – or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against “decoherence” – noise that prevents the computer from functioning properly.

I last mentioned decoherence in my July 21, 2011 posting about a joint (University of British Columbia, University of California at Santa Barbara and the University of Southern California) project on quantum computing.

According to the April 5, 2012 news item by Robert Perkins for the University of Southern California (USC),

The multinational team included USC professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. The findings were published today in Nature.

The team’s diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, some day will allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities – things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry because it makes the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the “spin” of each of these subatomic particles was used as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but they also fall victim more quickly to decoherence. A qubit based on a nucleus, which is large, is much more stable but slower.

“A nucleus has a long decoherence time – in the milliseconds. You can think of it as very sluggish,” said Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection – using microwave pulses to continually switch the direction of the electron spin rotation.

“It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
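For anyone who wants a feel for how reversing the spin rotation ‘time-reverses’ the inconsistencies, here’s a toy Python simulation of the spin-echo idea underlying that trick. It’s my own illustration with arbitrary numbers, not the researchers’ code,

```python
import numpy as np

# Toy spin-echo sketch (not the actual experiment): many spins precess at
# slightly different rates, so their phases spread out (dephasing). A
# refocusing pulse halfway through reverses the accumulated phase, and the
# same inhomogeneity that spread the spins out now brings them back together.
rng = np.random.default_rng(1)
detunings = rng.normal(0.0, 1.0, size=1000)  # random precession offsets
t = 5.0

# Without refocusing: phases fan out and the net signal decays.
phase_free = detunings * t
signal_free = abs(np.mean(np.exp(1j * phase_free)))

# With a flip at t/2: phase accumulated in the second half cancels the first.
phase_echo = detunings * (t / 2) - detunings * (t / 2)
signal_echo = abs(np.mean(np.exp(1j * phase_echo)))

print(f"no echo:   {signal_free:.3f}")   # close to 0 (decohered)
print(f"with echo: {signal_echo:.3f}")   # 1.000 (phase refocused)
```

The same static spread in precession rates that fans the spins out is exactly what brings them back together once the rotation is reversed, hence Lidar’s time-travel analogy.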

Here’s an image I downloaded from the USC webpage hosting Perkins’s news item,

The diamond in the center measures 1 mm x 1 mm. Photo/Courtesy of Delft University of Technology/UC Santa Barbara

I’m not sure what they were trying to illustrate with the image, but I thought it would provide an interesting contrast to the video that follows, about the world’s first purely diamond ring,

I first came across this ring in Laura Hibberd’s March 22, 2012 piece for Huffington Post. For anyone who feels compelled to find out more about it, here’s the jeweller’s (Shawish) website.

What with the posting about Neal Stephenson and Diamond Age (aka, The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future and ubiquitous diamonds), a quantum computer in a diamond, and this ring, I’ve started to wonder about the role diamonds will have in society. Will they be integrated into everyday objects or will they remain objects of desire? My guess is that the diamonds we create by manipulating carbon atoms will be considered everyday items while the ones which have been formed in the bowels of the earth will retain their status.

Get your question to Neal Stephenson asked at April 17, 2012 event at MIT

After reading Diamond Age (aka, The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future), I have never been able to steel myself to read another Neal Stephenson book. In the last 1/3 of the book, the plot fell to pieces, so none of the previously established narrative threads were addressed and the character development, such as it was, ceased to make sense. However, it seems I am in the minority as Stephenson and his work are widely and critically lauded.

On April 17, 2012, Stephenson will be appearing at an event featuring a live interview by Technology Review editor-in-chief Jason Pontin at the Massachusetts Institute of Technology (MIT). From Stephen Cass’s April 3, 2012 article for Technology Review,

With assistance from the MIT Graduate Program in Science Writing, if you’re in the Boston area, you can see Neal Stephenson in person at MIT on April 17. Technology Review’s editor-in-chief, Jason Pontin, will publicly interview Stephenson for the 2012 issue of TRSF, our annual science fiction anthology. Topics on the table include the state and future of hard science fiction, and how digital publishing is affecting novels.

The event is free and you can get a ticket here. For anyone who can’t get to Boston for the event, you can ask your question here in the comments section.

Nanodiamond research – a quick mention

Nanotechnology and diamonds go together like a horse and carriage … I don’t often resist song references, and this was not one of those times. (For anyone who doesn’t recognize it: “Love and marriage go together like …”.)

Given how strongly diamonds are associated with nanotechnology, it’s good to see that a team from the A. J. Drexel Nanotechnology Institute has published a review of nanodiamond research. From the Jan. 11, 2012 news item on Nanowerk,

Nearly 50 years ago scientists discovered that detonating powerful explosives had the ability to create, not just destroy. Nanodiamonds, diamond-structured particles measuring less than 10 nanometers in diameter, which are the resultant residue from a TNT or Hexogen explosion in a contained space, are now being studied in a variety of science, technology and health applications. A team of researchers who specialize in nanotechnology, led by Dr. Yury Gogotsi, director of the A.J. Drexel Nanotechnology Institute, offered a review of nanodiamond research, in the December 18 edition of Nature Nanotechnology (“The properties and applications of nanodiamonds”) to sift through new ways scientists are using these tiny treasures.

Courtesy of reading Neal Stephenson’s science fiction novel, Diamond Age, I tend to think of materials made from nanodiamonds as being construction materials but this team is suggesting some other applications (from the news item),

According to the piece, nanodiamonds possess a unique combination of qualities, such as accessible surface area, versatile chemistry, chemical stability and biocompatibility. These traits, and the fact that nanodiamonds are non-toxic, make the particles ideal candidates for a variety of tasks including drug delivery, cancer diagnostics, and mimicking proteins.

For anyone who’s interested in the Drexel Nanotechnology Institute and their Nanomaterials Group, here’s a link to their webpage.

Human-Computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles – as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
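To give a flavour of that ‘everything talks to the Infinity Simulator’ hub architecture, here’s a hypothetical Python sketch. The real system is a set of C programs talking to theatrical control consoles; every function name and number below is my invention,

```python
# Hypothetical sketch of the intermediary pattern described above: one hub
# reads a normalized "calm" score from the headset and fans it out to the
# rigging, lighting, and sound subsystems. Function and device names here
# are invented for illustration; the real Infinity Simulator is a set of
# C programs talking to theatrical control consoles.

MAX_HEIGHT_FT = 30.0  # the ride lifts riders up to thirty feet

def calm_to_height(calm: float) -> float:
    """Map a 0..1 calm score to a target harness height, clamped to range."""
    calm = min(max(calm, 0.0), 1.0)
    return calm * MAX_HEIGHT_FT

def dispatch(calm: float) -> dict:
    """Fan one headset reading out to every theater subsystem."""
    return {
        "rigging_height_ft": calm_to_height(calm),
        "light_intensity": calm,          # lights respond to brain activity
        "sound_level": 1.0 - calm,        # the "storm of stimuli" recedes
    }

print(dispatch(0.8))  # a calm rider rises toward "transcendence"
```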

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kids’ toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.
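SPEARS itself is proprietary and unpublished, so the best I can offer is a generic sketch of the kind of number-crunching an EEG pipeline performs, not anything NeuroVigil actually does. A common first step is computing signal power in the classic frequency bands; here’s a toy Python version (the sampling rate and synthetic signal are my assumptions),

```python
import numpy as np

# SPEARS itself is proprietary, so this is only a generic sketch of the kind
# of feature an EEG sleep-analysis pipeline extracts: power in the classic
# frequency bands (delta, theta, alpha, beta) from a single channel.
fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)               # one 30-second epoch
# Synthetic signal: strong 2 Hz delta (deep sleep) plus a little 10 Hz alpha.
eeg = 50 * np.sin(2 * np.pi * 2 * t) + 10 * np.sin(2 * np.pi * 10 * t)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}

dominant = max(power, key=power.get)
print(dominant)   # "delta" -- consistent with deep sleep in this toy epoch
```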

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items, I’m going to profile two closely related technologies that allow for more traditional human/computer interactions, one of which I’ve posted about previously: the Nokia Morph (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. User will never have to worry about the battery life. It is a device that will help us in our everyday life, to keep our self connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. new structures enabled by the novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy efficient way.

Graphene will enable evolution of the current technology e.g. continuation of the ever increasing computing power when the performance of the computing would require sub nanometer scale transistors by using conventional materials.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab at the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)
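Bend gestures are fun to imagine in code. Here’s a hypothetical Python sketch of how flex-sensor readings might map to the gestures Vertegaal describes; the sensor layout and thresholds are mine, not the PaperPhone team’s,

```python
# Hypothetical sketch of bend-gesture recognition for a flexible display:
# thresholding readings from corner-mounted bend sensors into the gestures
# described above. Sensor names and thresholds are invented for illustration;
# the actual PaperPhone work uses its own sensor layout and classifier.

THRESHOLD = 0.3  # normalized bend beyond which a flex counts as a gesture

def classify(top_corner: float, whole_body: float) -> str:
    """Map two bend-sensor readings (-1..1) to a gesture."""
    if whole_body > THRESHOLD:
        return "answer call"       # bending the whole sheet into a phone
    if top_corner > THRESHOLD:
        return "page forward"      # flipping the corner, like a paperback
    if top_corner < -THRESHOLD:
        return "page back"
    return "idle"

print(classify(top_corner=0.5, whole_body=0.0))   # "page forward"
print(classify(top_corner=0.0, whole_body=0.6))   # "answer call"
```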

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both are more intimate relationships than we’ve had with computers till now, the latter far more so than the former. If you have any thoughts on this, please do leave a comment as I would be delighted to engage in some discussion.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin off company from the Massachusetts Institute of Technology (MIT).