Tag Archives: Google

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and Canadian content was substantive. A number of tricky topics were covered, and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

Programme host David Suzuki’s script was well written, and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts, who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: in one chat, her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over whom you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information on Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI specialists, and musicians collaborated to finish the symphony.)

The one listener in the hall during a performance (Felix Mayer, music professor at the Technical University of Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities, where an algorithm chooses which note comes next based on how probable it is given what came before.
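To make that concrete, here’s a minimal sketch of probability-driven note selection in Python: a toy illustration with invented transition probabilities, not the actual Beethoven X system.

```python
import random

# Toy transition table: probability of the next note given the current note.
# These numbers are invented for illustration; the real project used a far
# richer model trained on Beethoven's scores and sketches.
transitions = {
    "C": {"D": 0.4, "E": 0.35, "G": 0.25},
    "D": {"E": 0.5, "C": 0.3, "F": 0.2},
    "E": {"F": 0.4, "G": 0.4, "C": 0.2},
    "F": {"G": 0.6, "E": 0.4},
    "G": {"C": 0.5, "E": 0.3, "A": 0.2},
    "A": {"G": 0.7, "F": 0.3},
}

def continue_melody(start_note, length):
    """Generate a melody by repeatedly sampling the next note."""
    melody = [start_note]
    for _ in range(length - 1):
        options = transitions.get(melody[-1], {"C": 1.0})
        notes, probs = zip(*options.items())
        melody.append(random.choices(notes, weights=probs)[0])
    return melody

print(continue_melody("C", 12))
```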

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s writings and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
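The mechanics of that kind of completion are easy to sketch with an off-the-shelf model. Here is a hypothetical stand-in using the publicly available GPT-2 model through the Hugging Face transformers library; the actual project used Google’s own model tuned on Coupland’s writing, which isn’t public.

```python
from transformers import pipeline

# Load a small, publicly available text-generation model.
# (The actual project tuned a Google language model on more than a million
# words of Coupland's writing; GPT-2 is only a stand-in here.)
generator = pipeline("text-generation", model="gpt2")

# Start a sentence and let the model propose completions.
prompt = "The class of 2030 will remember"
completions = generator(
    prompt, max_length=30, num_return_sequences=3, do_sample=True
)

# A human (the artist) would then comb through these outputs for 'gems'.
for c in completions:
    print(c["generated_text"])
```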

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
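For comparison, the mechanical core of the cut-up method takes only a few lines of code. This is a toy illustration of randomly rearranged text fragments, not Burroughs’ scissors-and-paste process or Coupland’s actual pipeline.

```python
import random

def cut_up(texts, fragment_length=4):
    """Cut passages into short word fragments and paste them back at random."""
    fragments = []
    for text in texts:
        words = text.split()
        # Slice each passage into fragments of a few words each.
        fragments.extend(
            " ".join(words[i:i + fragment_length])
            for i in range(0, len(words), fragment_length)
        )
    random.shuffle(fragments)
    return " / ".join(fragments)

passages = [
    "All writing is in fact cut-ups, a collage of words read heard overheard.",
    "Images shift sense under the scissors, smell images to sound, sight to sound.",
]
print(cut_up(passages))
```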

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation: did science have any morality? (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, a large language model developed by OpenAI. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, as mentioned in the programme, constitutes an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
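The computer-science sense of the term is easy to illustrate. Here is a hypothetical sketch of a game bot deliberately ‘dumbed down’ so that it sometimes ignores the best move:

```python
import random

def best_move(moves):
    """Pick the highest-scoring move (the 'intelligent' choice)."""
    return max(moves, key=moves.get)

def artificially_stupid_move(moves, blunder_rate=0.3):
    """With some probability, deliberately play a random (weaker) move."""
    if random.random() < blunder_rate:
        return random.choice(list(moves))
    return best_move(moves)

# Hypothetical move scores for a game position.
scores = {"advance": 0.9, "defend": 0.6, "retreat": 0.2}
print(artificially_stupid_move(scores))
```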

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
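ELIZA’s trick was simple pattern-matching and substitution. Here is a minimal sketch of the idea in Python (a toy, not Weizenbaum’s original implementation):

```python
import re

# A tiny set of DOCTOR-style rules: match a pattern, reflect it back as a question.
rules = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your mother."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def eliza_reply(statement):
    for pattern, response in rules:
        match = re.match(pattern, statement, re.IGNORECASE)
        if match:
            return response.format(*match.groups())

print(eliza_reply("I am feeling lonely"))   # -> How long have you been feeling lonely?
print(eliza_reply("I need a real friend"))  # -> Why do you need a real friend?
```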

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1993 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing as the words are sometimes used as synonyms and sometimes as distinctions. We do it all the time in all sorts of conversations but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of locations/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.
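To make the ‘permanent receipt’ idea concrete, here’s a deliberately simplified ownership ledger in Python. Real NFTs live on a blockchain (for example, the ERC-721 standard on Ethereum), so this in-memory sketch only illustrates the record-keeping concept.

```python
class ToyNFTLedger:
    """A toy ownership registry: token_id -> current owner."""

    def __init__(self):
        self.owners = {}

    def mint(self, token_id, owner):
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id, seller, buyer):
        if self.owners.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        self.owners[token_id] = buyer

# A virtual shirt bought on 'Platform A' stays provably yours elsewhere,
# because any participating platform can check the same ledger entry.
ledger = ToyNFTLedger()
ledger.mint("virtual-shirt-001", "alice")
ledger.transfer("virtual-shirt-001", "alice", "bob")
print(ledger.owners["virtual-shirt-001"])  # -> bob
```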

Facebook’s multiverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned Facebook is once again having some credibility issues according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international legal firm, Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is vision for what the future will be like where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.  

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities, by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration among Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibitions (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology: people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse, effectively replacing the internet. This metaverse can include any or all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more are at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.

If you have time and the inclination, I encourage you to read King’s piece.
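For readers who would like a concrete picture of what ‘geometrically frustrated’ means in these papers, here is a purely classical toy sketch of my own (an illustration of the concept only, not D-Wave’s quantum simulation): three antiferromagnetically coupled Ising spins on a triangle can never satisfy all three bonds at once, so the lowest-energy state is degenerate,

```python
# Toy illustration of geometric frustration (classical, my own example,
# not D-Wave's quantum simulation): three Ising spins on a triangle with
# antiferromagnetic bonds cannot all be pairwise anti-aligned.
from itertools import product

J = 1.0                           # J > 0 penalizes aligned neighbours
edges = [(0, 1), (1, 2), (0, 2)]  # the three bonds of the triangle

def energy(spins):
    """Classical Ising energy E = J * sum over bonds of s_i * s_j."""
    return J * sum(spins[i] * spins[j] for i, j in edges)

states = list(product([-1, +1], repeat=3))
e_min = min(energy(s) for s in states)
ground_states = [s for s in states if energy(s) == e_min]

print(f"ground-state energy: {e_min}")                      # -1.0
print(f"degenerate ground states: {len(ground_states)}/8")  # 6/8
```

Scale that kind of frustration up to thousands of coupled spins and you get the rugged energy landscape the D-Wave and Google teams were probing.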

Quantum supremacy

This ‘supremacy’ refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces that the milestone has been reached,

Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.

An October 23, 2019 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, delves further into the work,

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of the both awe-inspiring and bizarre properties of quantum mechanics.

“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition.” The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.

‘A complex superposition state’

“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”

“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.

“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.

“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”

Looking ahead

With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.

“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”

Here’s a link to and a citation for the paper,

Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages 505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date 24 October 2019

This paper appears to be open access.
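Two quick back-of-envelope sketches (mine, not from the paper) help put the claim in perspective: the memory a brute-force classical simulation of 53 qubits would need, and the linear variant of the cross-entropy benchmarking fidelity described in the paper for checking that the processor samples from the intended distribution,

```python
# 1) Why brute-force classical simulation is hard: a 53-qubit state vector has
#    2**53 complex amplitudes; at 16 bytes per complex128 amplitude that is
#    roughly 144 petabytes just to hold the state in memory.
n_qubits = 53
state_vector_bytes = (2 ** n_qubits) * 16
print(f"full state vector: ~{state_vector_bytes / 1e15:.0f} PB")

# 2) Linear cross-entropy benchmarking (XEB) fidelity, F = 2**n * <P(x_i)> - 1,
#    where P(x_i) is the ideal (classically simulated) probability of each
#    measured bitstring x_i; F near 1 means the device samples the intended
#    distribution, F near 0 means pure noise.
def linear_xeb_fidelity(ideal_probs, n_qubits):
    mean_p = sum(ideal_probs) / len(ideal_probs)
    return (2 ** n_qubits) * mean_p - 1

# Sanity check: a uniformly random (fully noisy) sampler gives F = 0.
print(linear_xeb_fidelity([2 ** -n_qubits] * 1000, n_qubits))  # 0.0
```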

MXene-coated yarn for wearable electronics

There’s been a lot of talk about wearable electronics, specifically e-textiles, but nothing seems to have entered the marketplace. Scaling up your lab discoveries for industrial production can be quite problematic. From an October 10, 2019 news item on ScienceDaily,

Producing functional fabrics that perform all the functions we want, while retaining the characteristics of fabric we’re accustomed to is no easy task.

Two groups of researchers at Drexel University — one, who is leading the development of industrial functional fabric production techniques, and the other, a pioneer in the study and application of one of the strongest, most electrically conductive super materials in use today — believe they have a solution.

They’ve improved a basic element of textiles: yarn. By adding technical capabilities to the fibers that give textiles their character, fit and feel, the team has shown that it can knit new functionality into fabrics without limiting their wearability.

An October 10, 2019 Drexel University news release (also on EurekAlert), which originated the news item, details the proposed solution (pun! as you’ll see in the video following this excerpt),

In a paper recently published in the journal Advanced Functional Materials, the researchers, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, and Genevieve Dion, an associate professor in Westphal College of Media Arts & Design and director of Drexel’s Center for Functional Fabrics, showed that they can create a highly conductive, durable yarn by coating standard cellulose-based yarns with a type of conductive two-dimensional material called MXene.

Hitting snags

“Current wearables utilize conventional batteries, which are bulky and uncomfortable, and can impose design limitations to the final product,” they write. “Therefore, the development of flexible, electrochemically and electromechanically active yarns, which can be engineered and knitted into full fabrics provide new and practical insights for the scalable production of textile-based devices.”

The team reported that its conductive yarn packs more conductive material into the fibers and can be knitted by a standard industrial knitting machine to produce a textile with top-notch electrical performance capabilities. This combination of ability and durability stands apart from the rest of the functional fabric field today.

Most attempts to turn textiles into wearable technology use stiff metallic fibers that alter the texture and physical behavior of the fabric. Other attempts to make conductive textiles using silver nanoparticles and graphene and other carbon materials raise environmental concerns and come up short on performance requirements. And the coating methods that are successfully able to apply enough material to a textile substrate to make it highly conductive also tend to make the yarns and fabrics too brittle to withstand normal wear and tear.

“Some of the biggest challenges in our field are developing innovative functional yarns at scale that are robust enough to be integrated into the textile manufacturing process and withstand washing,” Dion said. “We believe that demonstrating the manufacturability of any new conductive yarn during experimental stages is crucial. High electrical conductivity and electrochemical performance are important, but so are conductive yarns that can be produced by a simple and scalable process with suitable mechanical properties for textile integration. All must be taken into consideration for the successful development of the next-generation devices that can be worn like everyday garments.”

The winning combination

Dion has been a pioneer in the field of wearable technology, drawing on her background in fashion and industrial design to produce new processes for creating fabrics with new technological capabilities. Her work has been recognized by the Department of Defense, which included Drexel, and Dion, in its Advanced Functional Fabrics of America effort to make the country a leader in the field.

She teamed with Gogotsi, who is a leading researcher in the area of two-dimensional conductive materials, to approach the challenge of making a conductive yarn that would hold up to knitting, wearing and washing.

Gogotsi’s group was part of the Drexel team that discovered highly conductive two-dimensional materials, called MXenes, in 2011 and have been exploring their exceptional properties and applications for them ever since. His group has shown that it can synthesize MXenes that mix with water to create inks and spray coatings without any additives or surfactants – a revelation that made them a natural candidate for making conductive yarn that could be used in functional fabrics. [Gogotsi’s work was featured here in a May 6, 2019 posting]

“Researchers have explored adding graphene and carbon nanotube coatings to yarn, our group has also looked at a number of carbon coatings in the past,” Gogotsi said. “But achieving the level of conductivity that we demonstrate with MXenes has not been possible until now. It is approaching the conductivity of silver nanowire-coated yarns, but the use of silver in the textile industry is severely limited due to its dissolution and harmful effect on the environment. Moreover, MXenes could be used to add electrical energy storage capability, sensing, electromagnetic interference shielding and many other useful properties to textiles.”

In its basic form, titanium carbide MXene looks like a black powder. But it is actually composed of flakes that are just a few atoms thick, which can be produced at various sizes. Larger flakes mean more surface area and greater conductivity, so the team found that it was possible to boost the performance of the yarn by infiltrating the individual fibers with smaller flakes and then coating the yarn itself with a layer of larger-flake MXene.

Putting it to the test

The team created the conductive yarns from three common, cellulose-based yarns: cotton, bamboo and linen. They applied the MXene material via dip-coating, which is a standard dyeing method, before testing them by knitting full fabrics on an industrial knitting machine – the kind used to make most of the sweaters and scarves you’ll see this fall.

Each type of yarn was knit into three different fabric swatches using three different stitch patterns – single jersey, half gauge and interlock – to ensure that they are durable enough to hold up in any textile from a tightly knit sweater to a loose-knit scarf.

“The ability to knit MXene-coated cellulose-based yarns with different stitch patterns allowed us to control the fabric properties, such as porosity and thickness for various applications,” the researchers write.

To put the new threads to the test in a technological application, the team knitted some touch-sensitive textiles – the sort that are being explored by Levi’s and Yves Saint Laurent as part of Google’s Project Jacquard.

Not only did the MXene-based conductive yarns hold up against the wear and tear of the industrial knitting machines, but the fabrics produced survived a battery of tests to prove their durability. Tugging, twisting, bending and – most importantly – washing did not diminish the touch-sensing abilities of the yarn, the team reported – even after dozens of trips through the spin cycle.

Pushing forward

But the researchers suggest that the ultimate advantage of using MXene-coated conductive yarns to produce these special textiles is that all of the functionality can be seamlessly integrated into the textiles. So instead of having to add an external battery to power the wearable device, or wirelessly connect it to your smartphone, these energy storage devices and antennas would be made of fabric as well – an integration that, though literally seamed, is a much smoother way to incorporate the technology.

“Electrically conducting yarns are quintessential for wearable applications because they can be engineered to perform specific functions in a wide array of technologies,” they write.

Using conductive yarns also means that a wider variety of technological customization and innovations are possible via the knitting process. For example, “the performance of the knitted pressure sensor can be further improved in the future by changing the yarn type, stitch pattern, active material loading and the dielectric layer to result in higher capacitance changes,” according to the authors.
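As a rough illustration of why capacitance change is the figure of merit for a knitted pressure sensor, here is a generic parallel-plate sketch (my own simplification with made-up dimensions, not the geometry reported in the paper): pressing the textile compresses the dielectric layer between two conductive layers, which raises the capacitance,

```python
# Generic parallel-plate approximation of a textile pressure sensor
# (my own simplification, illustrative dimensions only): C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    """Capacitance of a parallel-plate stack with dielectric constant eps_r."""
    return EPS0 * eps_r * area_m2 / gap_m

area = 1e-4                                   # 1 cm^2 sensing patch (assumed)
c_rest = capacitance(area, gap_m=1.0e-3)      # relaxed: 1 mm dielectric gap
c_pressed = capacitance(area, gap_m=0.5e-3)   # pressed: gap halved to 0.5 mm

print(f"relative capacitance change: {(c_pressed - c_rest) / c_rest:.0%}")  # 100%
```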

Dion’s team at the Center for Functional Fabrics is already putting this development to the test in a number of projects, including a collaboration with textile manufacturer Apex Mills – one of the leading producers of material for car seats and interiors. And Gogotsi suggests the next step for this work will be tuning the coating process to add just the right amount of conductive MXene material to the yarn for specific uses.

“With this MXene yarn, so many applications are possible,” Gogotsi said. “You can think about making car seats with it so the car knows the size and weight of the passenger to optimize safety settings; textile pressure sensors could be in sports apparel to monitor performance, or woven into carpets to help connected houses discern how many people are home – your imagination is the limit.”

Researchers have produced a video about their work,

Here’s a link to and a citation for the paper,

Knittable and Washable Multifunctional MXene‐Coated Cellulose Yarns by Simge Uzun, Shayan Seyedin, Amy L. Stoltzfus, Ariana S. Levitt, Mohamed Alhabeb, Mark Anayee, Christina J. Strobel, Joselito M. Razal, Genevieve Dion, Yury Gogotsi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201905015 First published: 05 September 2019

This paper is behind a paywall.

Toronto, Sidewalk Labs, smart cities, and timber

The ‘smart city’ initiatives continue to fascinate. During the summer, Toronto’s efforts were described in a June 24, 2019 article by Katharine Schwab for Fast Company (Note: Links have been removed),

Today, Google sister company Sidewalk Labs released a draft of its master plan to transform 12 acres on the Toronto waterfront into a smart city. The document details the neighborhood’s buildings, street design, transportation, and digital infrastructure—as well as how the company plans to construct it.

When a leaked copy of the plan popped up online earlier this year, we learned that Sidewalk Labs plans to build the entire development, called Quayside, out of mass timber. But today’s release of the official plan reveals the key to doing so: Sidewalk proposes investing $80 million to build a timber factory and supply chain that would support its fully timber neighborhood. The company says the factory, which would be focused on manufacturing prefabricated building pieces that could then be assembled into fully modular buildings on site, could reduce building time by 35% compared to more traditional building methods.

“We would fund the creation of [a factory] somewhere in the greater Toronto area that we think could play a role in catalyzing a new industry around mass timber,” says Sidewalk Labs CEO and chairman Dan Doctoroff.

However, the funding of the factory is dependent on Sidewalk Labs being able to expand its development plan to the entire riverfront district. … [emphasis mine].

Here’s where I think it gets very interesting,

Sidewalk proposes sourcing spruce and fir trees from the forests in Ontario, Quebec, and British Columbia. While Canada has 40% of the world’s sustainable forests, Sidewalk claims, the country has few factories that can turn these trees into the building material. That’s why the company proposes starting a factory to process two kinds of mass timber: Cross-laminated timber (CLT) and glulam beams. The latter is meant specifically to bear the weight of the 30-story buildings Sidewalk hopes to build. While Sidewalk says that 84% of the larger district would be handed over for development by local companies, the plan requires that these companies uphold the same sustainability standards when it comes to performance.

Sidewalk says companies wouldn’t be required to build with CLT and glulam, but since the company’s reason for building the mass timber factory is that there aren’t many existing manufacturers to meet the needs for a full-scale development, the company’s plan might ultimately push any third-party developers toward using its [Google] factory to source materials. … [emphasis mine]

If I understand this rightly, Google wants to expand its plan to Toronto’s entire waterfront so that building a factory to produce the type of wood products Google wants to use in its Quayside development becomes financially feasible (profitable). And somehow, local developers will not be forced to build the same kinds of structures although Google will be managing the entire waterfront development. Hmmm.

Let’s take a look at one of Google’s other ‘city ventures’.

Louisville, Kentucky

First, Alphabet is the name of Google’s parent company and it was Alphabet that offered the city of Louisville an opportunity for cheap, abundant internet service known as Google Fiber. From a May 6, 2019 article by Alex Correa for The Edge (Note: Links have been removed),

In 2015, Alphabet chose several cities in Kentucky to host its Google Fiber project. Google Fiber is a service providing broadband internet and IPTV directly to a number of locations, and the initiative in Kentucky … . The tech giant dug up city streets to bury fibre optic cables of their own, touting a new technique that would only require the cables to be a few inches beneath the surface. However, after two years of delays and negotiations after the announcement, Google abandoned the project in Louisville, Kentucky.

Like an unwanted pest in a garden, signs of Google’s presence can be seen and felt in the city streets. Metro Councilman Brandon Coan criticized the state of the city’s infrastructure, pointing out that strands of errant, tar-like sealant, used to cover up the cables, are “everywhere.” Speaking outside of a Louisville coffee shop that ran Google Fiber lines before the departure, he said, “I’m confident that Google and the city are going to negotiate a deal… to restore the roads to as good a condition as they were when they got here. Frankly, I think they owe us more than that.”

Google’s disappearance did more than just damage roads [emphasis mine] in Louisville. Plans for promising projects were abandoned, including transformative economic development that could have provided the population with new jobs and vastly different career opportunities than what was available. Add to that the fact that media coverage of the aborted initiative cast Louisville as the site of a failed experiment, creating an impression of the city as an embarrassment. (Google has since announced plans to reimburse the city $3.84 million over 20 months to help repair the damage to the city’s streets and infrastructure.)

A February 22, 2019 article on CBC (Canadian Broadcasting Corporation) Radio news online offers images of the damaged roadways and a partial transcript of a Day 6 radio show hosted by Brent Bambury,

Shortly after it was installed, the sealant on the trenches Google Fiber cut into Louisville roads popped out. (WDRB Louisville) Courtesy: CBC Radio Day 6

Google’s Sidewalk Labs is facing increased pushback to its proposal to build a futuristic neighbourhood in Toronto, after leaked documents revealed the company’s plans are more ambitious than the public had realized.

One particular proposal — which would see Sidewalk Labs taking a cut of property taxes in exchange for building a light rail transit line along Toronto’s waterfront — is especially controversial.

The company has developed an impressive list of promises for its proposed neighbourhood, including mobile pre-built buildings and office towers that tailor themselves to occupants’ behaviour.

But Louisville, Kentucky-based business reporter Chris Otts says that when Google companies come to town, it doesn’t always end well.

What was the promise Google Fiber made to Louisville back in 2015?

Well, it was just to be included as one of their Fiber cities, which was a pretty serious deal for Louisville at the time. A big coup for the mayor, and his administration had been working for years to get Google to consider adding Louisville to that list.

So if the city was eager, what sorts of accommodations were made for Google to entice them to come to Louisville?

Basically, the city did everything it could from a streamlining red tape perspective to get Google here … in terms of, you know, awarding them a franchise, and allowing them to be in the rights of way with this innovative technique they had for burying their cables here.
And then also, they [the city] passed a policy, which, to be sure, they say is just good policy regardless of Google’s support for it. But it had to do with how new Internet companies like Google can access utility poles to install their networks.

And Louisville ended up spending hundreds of thousands of dollars to defend that new policy in court in lawsuits by AT&T and by the traditional cable company here.

When Google Fiber starts doing business, they’re offering cheaper high speed Internet access, and they start burying these cables in the ground.

When did things start to go sideways for this project?

I don’t know if I would say ‘almost immediately,’ but certainly the problems were evident fairly quickly.

So they started their work in 2017. If you picture it, [in] the streets you can see on either side there are these seams. They look like little strings … near the end of the streets on both sides. And there are cuts in the street where they buried the cable and they topped it off with this sealant

And fairly early on — within months, I would say, of them doing that — you could see the sealant popping out. The conduit in there [was] visible or exposed. And so it was fairly evident that there were problems with it pretty quickly

Was this the first time that they had used this system and the sealant that you’re describing?

It was the first time, according to them, that they had used such shallow trenches in the streets.

So these are as shallow as two inches below the pavement surface that they’d bury these cables. It’s the ultra-shallow version of this technique.

And what explanation did Google Fiber offer for their decision to leave Louisville?

That it was basically a business decision; that they were trying this construction method to see if it was sustainable and they just had too many problems with it.

And as they said directly in their … written statement about this, they decided that instead of doing things right and starting over, which they would have to do essentially to keep providing service in Louisville, that it was the better business decision for them to just pick up and leave.

Toronto’s Sidewalk Labs isn’t Google Fiber — but they’re both owned by Google’s parent company, Alphabet.

If Louisville could give Toronto a piece of advice about welcoming a Google infrastructure project to town, what do you think that advice would be?

The biggest lesson from this is that one day they can be next to you at the press conference saying what a great city you are and how happy they are to … provide new service in your market, and then the next day, with almost no notice, they can say, “You know what? This doesn’t make sense for us anymore. And by the way, see ya. Thanks for having us. Sorry it didn’t work out.”

Google’s promises to Toronto

Getting back to Katharine Schwab’s June 24, 2019 Fast Company article,

The factory is also key to another of Sidewalk’s promises: Jobs. According to Sidewalk, the factory itself would create 2,500 jobs [emphasis mine] along the entire supply chain over a 20-year period. But even if the Canadian government approves Sidewalk’s plan and commits to building out the entire waterfront district to take advantage of the mass timber factory’s economies of scale, there are other regulatory hurdles to overcome. Right now, the building code in Toronto doesn’t allow for timber buildings over six stories tall. All of Sidewalk’s proposed buildings are over six stories, and many of them go up to 30 stories. Doctoroff said he was optimistic that the company will be able to get regulations changed if the city decides to adopt the plan. There are several examples of timber buildings that are already under construction, with a planned skyscraper in Japan that will be 70 stories.

Sidewalk’s proposal is the result of 18 months of planning, which involved getting feedback from community members and prototyping elements like a building raincoat that the company hopes to include in the final development. It has come under fire from privacy advocates in particular, and the Canadian government is currently facing a lawsuit from a civil liberties group over its decision to allow a corporation to propose public privacy governance standards.

Now that the company has released the plan, it will be up to the Canadian government to decide whether to move forward. And the mass timber factory, in particular, will be dependent on the government adopting Sidewalk’s plan wholesale, far beyond the Quayside development—a reminder that Sidewalk is a corporation that’s here to make money, dangling investment dollars in front of the government to incentivize it to embrace Sidewalk as the developer for the entire area.

A few thoughts

Those folks in Louisville made a lot of accommodations for Google only to have the company abandon them. They will get some money in compensation, finally, but it doesn’t make up for the lost jobs and the national, if not international, loss of face.

I would think that should things go wrong, Google would do exactly the same thing to Toronto. As for the $80M promise, here’s exactly how it’s phrased in the June 24, 2019 Sidewalk Labs news release,

… Together with local partners, Sidewalk proposes to invest up to $80 million in a mass timber factory in Ontario to jumpstart this emerging industry.

So, Alphabet/Google/Sidewalk has proposed up to an $80M investment—with local partners. I wonder how much this factory is supposed to cost and what kinds of accommodations Alphabet/Google/Sidewalk will demand. Possibilities include policy changes, changes in municipal bylaws, and government money. In other words, Canadian taxpayers could end up footing part of the bill and/or local developers could be required to cover an outsize percentage of the costs for the factory as they jockey for the opportunity to develop part of Toronto’s waterfront.

Other than Louisville, what’s the company’s track record with regard to its partnerships with cities and municipalities? I haven’t found any success stories in my admittedly brief search. Unusually, the company doesn’t seem to be promoting any of its successful city partnerships.

Smart city

While my focus has been on the company’s failure with Louisville and the possible dangers inherent to Toronto in a partnership with this company, it shouldn’t be forgotten that all of this development is in the name of a ‘smart’ city and that means data-driven. My March 28, 2018 posting features some of the issues with the technology, 5G, that will be needed to make cities ‘smart’. There’s also my March 20, 2018 posting (scroll down about 30% of the way) which looks at ‘smart’ cities in Canada with a special emphasis on Vancouver.

You may want to check out David Skok’s February 15, 2019 Maclean’s article (Cracks in the Sidewalk) for a Torontonian’s perspective.

Should you wish to do some delving yourself, there’s the Sidewalk Labs website here and a June 24, 2019 article by Matt McFarland for CNN detailing some of the latest news about the backlash in Toronto concerning Sidewalk Labs.

A September 2019 update

Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) submitted a report to Google in August 2019, which was subsequently published on September 10, 2019. To sum it up, the panel was not impressed with Google’s June 2019 draft master plan. From a September 11, 2019 news item on the Guardian (Note: Links have been removed),

A controversial smart city development in Canada has hit another roadblock after an oversight panel called key aspects of the proposal “irrelevant”, “unnecessary” and “frustratingly abstract” in a new report.

The project on Toronto’s waterfront, dubbed Quayside, is a partnership between the city and Google’s sister company Sidewalk Labs. It promises “raincoats” for buildings, autonomous vehicles and cutting-edge wood-frame towers, but has faced numerous criticisms in recent months.

A September 11, 2019 article by Ian Bick of Canadian Press published on the CBC (Canadian Broadcasting Corporation) website offers more detail,

Preliminary commentary from Waterfront Toronto’s digital strategy advisory panel (DSAP) released Tuesday said the plan from Google’s sister company Sidewalk is “frustratingly abstract” and that some of the innovations proposed were “irrelevant or unnecessary.”

“The document is somewhat unwieldy and repetitive, spreads discussions of topics across multiple volumes, and is overly focused on the ‘what’ rather than the ‘how,’ ” said the report on the panel’s comments.

Some on the 15-member panel, an arm’s-length body that gives expert advice to Waterfront Toronto, have also found the scope of the proposal to be unclear or “concerning.”

The report says that some members also felt the official Sidewalk plan did not appear to put the citizen at the centre of the design process for digital innovations, and raised issues with the way Sidewalk has proposed to manage data that is generated from the neighbourhood.

The panel’s early report is not official commentary from Waterfront Toronto, the multi-government body that is overseeing the Quayside development, but is meant to indicate areas that need improvement.

The panel, chaired by University of Ottawa law professor Michael Geist, includes executives, professors, and other experts on technology, privacy, and innovation.

Sidewalk Labs spokeswoman Keerthana Rang said the company appreciates the feedback and already intends to release more details in October on the digital innovations it hopes to implement at Quayside.

I haven’t been able to find the response to DSAP’s September 2019 critique but I did find this Toronto Sidewalk Labs report, Responsible Data Use Assessment Summary: Overview of Collab, dated October 16, 2019. Of course, there are still another 10 days before October 2019 is past.

The wonder of movement in 3D

Shades of Eadweard Muybridge (English photographer who pioneered photographic motion studies)! A September 19, 2018 news item on ScienceDaily describes the latest efforts to ‘capture motion’,

Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponent’s movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out what angle to throw a ball at, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.

That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster, or make sense of the precision needed to perfect our athletic form.

Recently, though, researchers from MIT’s [Massachusetts Institute of Technology] Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion.

There isn’t a single reference to Muybridge; still, this September 18, 2018 Massachusetts Institute of Technology news release (also on EurekAlert but published September 19, 2018), which originated the news item, delves further into the research,

The new system uses an algorithm that can take 2-D videos and turn them into 3-D printed “motion sculptures” that show how a human body moves through space. In addition to being an intriguing aesthetic visualization of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.

“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”

Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.

Zhang wrote the paper alongside MIT professors William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as U.C. Berkeley postdoc and former CSAIL PhD Andrew Owens.

How it works

Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.

Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.

What’s more, these photographs also require laborious pre-shoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.

Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”

After stitching these skeletons together, the system generates a motion sculpture that can be 3-D printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.

In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.

“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”

The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.

Currently, the system only uses single-person scenarios, but the team soon hopes to expand to multiple people. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.

This work will be presented at the User Interface Software and Technology (UIST) symposium in Berlin, Germany, in October 2018, and the team’s paper will be published as part of the proceedings.

As for anyone wondering about the Muybridge comment, here’s an image the MIT researchers have made available,

A new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space. Image courtesy of MIT CSAIL

Contrast that MIT image with some of the images in this video capturing parts of a theatre production, Studies in Motion: The Hauntings of Eadweard Muybridge,

Getting back to MIT, here’s their MoSculp video,

There are some startling similarities, eh? I suppose there are only so many ways one can capture movement, be it in the studies of Eadweard Muybridge, a theatre production about his work, or an MIT video showcasing the latest in motion capture technology.

Media registration for United Nations 3rd AI (artificial intelligence) for Good Global Summit

This is strictly for folks who have media accreditation. First, the news about the summit, and then some detail about how you might obtain accreditation should you be interested in going to Switzerland. Warning: The International Telecommunication Union, which is holding this summit, is a United Nations agency, and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.

From the March 21, 2019 International Telecommunication Union (ITU) media advisory (Note: There have been some changes to the formatting),

Geneva, 21 March 2019
Artificial Intelligence (AI) h​as taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet.
The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.

Organized by the International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technology (ICT) – in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM) and close to 30 sister United Nations agencies, the 3rd annual AI for Good Global Summit in Geneva, 28-31 May, is the leading United Nations platform for inclusive dialogue on AI. The goal of the summit is to identify practical applications of AI to accelerate progress towards the United Nations Sustainable Development Goals.

►►► MEDIA REGISTRATION IS NOW OPEN ◄◄◄

Media are recommended to register in advance to receive key announcements in the run-up to the summit.

WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.

The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.

A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes, and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.

WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland

WHO: Over 100 speakers have been confirmed to date, including:

Jim Hagemann Snabe – Chairman, Siemens​​
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace

Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburgh, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.

Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.

The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.

The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.​

More information available at aiforgood.itu.int
​Join the conversat​ion on social media ​using the hashtag #AIforGood

As promised, here are the media accreditation details from the ITU Media Registration and Accreditation webpage,

To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status. ​
Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations; 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at pressreg@itu.int.

Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. ​​​Media accreditation decisions rest with ITU and all decisions are final.

​Accreditation eligibility & credentials 
​1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int along with the required supporting credentials, based on the type of media organization you work for:

​​​​​Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principle business activity is publishing and which generally carries paid advertising;
o please submit 2 copies or links to recent byline articles published within the last 4 months.

News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;
o please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.

Broadcast media should provide news and information programmes to the general public. Inde​pendent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;
o please submit broadcasting material published within the last 4 months.

Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division.
o if possible, please submit a valid assignment letter from the news organization or publication.

2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:

Special guidelines for bloggers and community ​media accreditation

ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising ​​and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:

​​ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.​

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg​@itu.int.

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.

UN-accredited media

Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to pressreg@itu.int to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.

You can register and get accreditation here (scroll past the guidelines). Good luck!

Why not monetize your DNA for 2019?

I’m not a big fan of DNA (deoxyribonucleic acid) companies that promise to tell you about your ancestors and, depending on the kit, predisposition to certain health issues as per their reports about your genetic code. (I regularly pray no one in my family has decided to pay one of these companies to analyze their spit.)

During Christmas season 2018, the DNA companies (23andMe and Ancestry) advertised special prices so you could gift someone in your family with a kit. All this corporate largesse may not be wholly in service of the Christmas spirit. After all, there’s money to be made once they’ve gotten your sample.

Monetizing your DNA in 2016

I don’t know when 23andMe started selling DNA information or if any similar company predated their efforts, but this June 21, 2016 article by Antonio Regalado for the MIT (Massachusetts Institute of Technology) Technology Review offers the earliest information I found,

“Welcome to You.” So says the genetic test kit that 23andMe will send to your home. Pay $199, spit in a tube, and several weeks later you’ll get a peek into your DNA. Have you got the gene for blond hair? Which of 36 disease risks could you pass to a child?

Run by entrepreneur Anne Wojcicki, the ex-wife of Google founder Sergey Brin, and until last year housed alongside the Googleplex, the company created a test that has been attacked by regulators and embraced by a curious public. It remains, nine years after its introduction, the only one of its kind sold directly to consumers. 23andMe has managed to amass a collection of DNA information about 1.2 million people, which last year began to prove its value when the company revealed it had sold access to the data to more than 13 drug companies. One, Genentech, anted up $10 million for a look at the genes of people with Parkinson’s disease.

That means 23andMe is monetizing DNA rather the way Facebook makes money from our “likes.” What’s more, it gets its customers to pay for the privilege. That idea so appeals to investors that they have valued the still-unprofitable company at over $1 billion. “Money follows data,” says Barbara Evans, a legal scholar at the University of Houston, who studies personal genetics. “It takes a lot of labor and capital to get that information in a form that is useful.”

Monetizing your DNA in 2018 and privacy concerns

Starting with Adele Peters’ December 13, 2018 article for Fast Company (Note: A link has been removed),

When 23andMe made a $300 million deal with GlaxoSmithKline [GSK] in July [2018] – so the pharmaceutical giant could access a vast store of genetic data as it works on new drugs – the consumers who actually provided that data didn’t get a cut of the proceeds. A new health platform is taking a different approach: If you choose to share your own DNA data or other health records, you’ll get company shares that will later pay you dividends if that data is sold.

Before getting to the start-up that would allow you, rather than a company, to profit from, or at least somewhat monetize, your DNA, I’m including a general overview of the July 2018 GSK/23andMe deal in Jamie Ducharme’s July 26, 2018 article for TIME (Note: Links have been removed),

Consumer genetic testing company 23andMe announced on Wednesday [July 25, 2018] that GlaxoSmithKline purchased a $300 million stake in the company, allowing the pharmaceutical giant to use 23andMe’s trove of genetic data to develop new drugs — and raising new privacy concerns for consumers

The “collaboration” is a way to make “novel treatments and cures a reality,” 23andMe CEO Anne Wojcicki said in a company blog post. But, though it isn’t 23andMe’s first foray into drug discovery, the deal doesn’t seem quite so simple to some medical experts — or some of the roughly 5 million 23andMe customers who have sent off tubes of their spit in exchange for ancestry and health insights

Perhaps the most obvious issue is privacy, says Peter Pitts, president of the Center for Medicine in the Public Interest, a non-partisan non-profit that aims to promote patient-centered health care.

“If people are concerned about their social security numbers being stolen, they should be concerned about their genetic information being misused,” Pitts says. “This information is never 100% safe. The risk is magnified when one organization shares it with a second organization. When information moves from one place to another, there’s always a chance for it to be intercepted by unintended third parties.”

That risk is real, agrees Dr. Arthur Caplan, head of the division of medical ethics at the New York University School of Medicine. Caplan says that any genetic privacy concerns also extend to your blood relatives, who likely did not consent to having their DNA tested — echoing some of the questions that arose after law enforcement officials used a genealogy website to find and arrest the suspected Golden State Killer in April [2018].

“A lot of people paid money to 23andMe to get their ancestry determined — fun, recreational stuff,” Caplan says. “Even though they may have signed a thing saying, ‘I’m okay if you use this information for medical research,’ I’m not sure they understood what that really meant. I’m not sure they understood that it meant, ‘Yes, we’ll go to Glaxo, and that’s where we’re really going to make a lot of money off of you.’”

A 23andMe spokesperson told TIME that data privacy is a “top priority” for the company, emphasizing that customer data isn’t used in research without consent, and that GlaxoSmithKline will only receive “summary statistics from analyses 23andMe conducts so that no single individual can be identified.”

Yes, the data is supposed to be stripped of identifying information but, given how many times similar claims about geolocation data have been disproved, I am skeptical. DJ Pangburn’s September 26, 2017 article (Even This Data Guru Is Creeped Out By What Anonymous Location Data Reveals About Us) for Fast Company illustrates the fragility of ‘anonymized data’,

… as a number of studies have shown, even when it’s “anonymous,” stripped of so-called personally identifiable information, geographic data can help create a detailed portrait of a person and, with enough ancillary data, identify them by name

Curious to see this kind of data mining in action, I emailed Gilad Lotan, now vice president of BuzzFeed’s data science team. He agreed to look at a month’s worth of two different users’ anonymized location data, and to come up with individual profiles that were as accurate as possible

The results, produced in just a few days’ time, range from the expected to the surprisingly revealing, and demonstrate just how “anonymous” data can identify individuals.

Last fall Lotan taught a class at New York University on surveillance that kicked off with an assignment like the one I’d given him: link anonymous location data with other data sets–from LinkedIn, Facebook, home registration and mortgage records, and other online data.

“It’s not hard to figure out who this [unnamed] person is,” says Lotan. In class, students found that tracking location data around holidays proved to be the easiest way to determine who, exactly, the data belonged to. “Basically,” he says, “visits to private homes that are owned and publicly registered.”

In 2013, researchers at MIT and the Université Catholique de Louvain in Belgium published a paper reporting on 15 months of study of human mobility data for over 1.5 million individuals. What they found is that only four spatio-temporal points are required to “uniquely identify 95% of the individuals.” The researchers concluded that there was very little privacy even in raw location data. Four years later, their calls for policies rectifying concerns about location tracking have fallen largely on deaf ears.
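
For a sense of how such a uniqueness result is measured, here is a toy Python sketch of the idea: sample a few (place, time) points from one person’s trace and count how many traces in the dataset contain all of them. The data, spacing, and point count are invented for illustration; this is not the researchers’ code or dataset.

```python
import random

# traces: pseudonym -> set of (location_id, hour_of_week) observations (toy data)
traces = {
    "user_a": {(101, 8), (101, 9), (207, 18), (330, 20), (101, 32)},
    "user_b": {(101, 8), (150, 9), (207, 18), (411, 21)},
    "user_c": {(512, 7), (512, 8), (207, 18), (330, 20)},
}

def is_unique(user, k):
    # Pick k random points from this user's trace and check whether any other
    # trace in the dataset also contains all of them.
    sample = set(random.sample(sorted(traces[user]), k))
    matches = [u for u, points in traces.items() if sample <= points]
    return matches == [user]

k = 2
unique = sum(is_unique(u, k) for u in traces)
print(f"{unique} of {len(traces)} pseudonymous traces pinned down by {k} random points")
```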

Getting back to DNA, there was also some concern at Fox News,

Other than warnings, I haven’t seen much about any possible legislation regarding DNA and privacy in either Canada or the US.

Now, let’s get to how you can monetize yourself.

Me making money off me

I’ve found two possibilities for an individual who wants to consider monetizing their own DNA.

Health shares

Adele Peters’ December 13, 2018 article describes a start-up company and the model they’re proposing to allow you to profit from your own DNA (Note: Links have been removed),

“You can’t say data is valuable and then take that data away from everybody,” says Dawn Barry, president and cofounder of LunaPBC, the public benefit corporation that manages the community-owned platform, called LunaDNA, which recently got SEC approval to recognize health data as currency. “What we’re finding is that [our early adopters are] very excited about the transparency of this model–that when we all come together and create value, that value flows down to the individuals who shared their data.”

The platform shares some anonymized data with nonprofits, such as foundations that study rare diseases. In that case, money wouldn’t initially change hands, but “there could be intellectual property that at some point in time is monetized, and the community would share in that,” says Bob Kain, CEO and cofounder of LunaPBC. “When we have enough data in the near future, then we’ll work with pharmaceutical companies, for instance, to drive discovery for those companies. And they will pay market rates.”

The company doesn’t offer DNA analysis itself, but chose to focus on data management. If you’ve sent a tube of spit to 23andMe, AncestryDNA, MyHeritage, or FamilyTree DNA, you can contribute that data to LunaDNA and get shares. (If you’d rather not let the original testing company keep your data, you can also separately take the steps to delete it.)

“We looked at a number of different models to enable people to have ownership, including cryptocurrency, which is a proxy for ownership, too,” says Kain. “Cryptocurrency is hard to understand for most people, and right now, the regulatory landscape is blurry. So we thought, to move forward, we’d go with something much more traditional and easy to understand, and that is stock shares, basically.”

For sharing targeted genes, you get 10 shares. For sharing your whole genome, you get 300 shares. At the moment, that’s not worth very much–the valuation takes into account the risk that the data might not be monetized, and the fact that the startup isn’t the exclusive owner of your data. The SEC filing says that the estimated fair market value of a whole genome is only $21. Some other health information is worth far less; 20 days of data from a fitness tracker garners two shares, valued at 14¢. But as more people contribute data, the research value of the whole database (and dividends) will increase. If the shareholders ever decided to sell the company itself, they would also make money that way.
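
A quick back-of-envelope check on the figures quoted above (the share counts and the $21 and 14¢ valuations come from the article and the SEC filing it cites; the arithmetic is mine):

```python
# Share counts and values as reported above.
whole_genome_shares = 300
whole_genome_value_usd = 21.00        # estimated fair market value per the SEC filing

per_share_usd = whole_genome_value_usd / whole_genome_shares
print(f"implied value per share: ${per_share_usd:.2f}")                 # $0.07

print(f"targeted genes (10 shares): ${10 * per_share_usd:.2f}")         # $0.70
print(f"20 days of fitness data (2 shares): ${2 * per_share_usd:.2f}")  # $0.14, i.e. 14 cents
```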

Luna’s is a very interesting approach and I encourage you to read the December 13, 2018 article in its entirety.

Blockchain and crypto me

At least one effort to introduce blockchain/cryptocurrency technology to the process for monetizing your DNA garnered a lot of attention in February 2018.

A February 8, 2018 article by Eric Rosenbaum for CNBC (a US cable tv channel) explores an effort by George Church (Note: Links have been removed),

It’s probably wise to be skeptical of anyone who says they have a new idea for a blockchain-based company, or worse still, a company changing its business model to focus on the crypto world. That ice tea company that shifted its model to the blockchain, or Kodak saying its road back to riches was managing photo rights using a blockchain system. Raise eyebrow, or move directly onto outright shake of head

However, when a world-renowned Harvard geneticist announces he’s launching a blockchain-based start-up, it merits some attention. And it’s not the crypto-angle itself that might make you do a double-take, but the assets that will be managed, and exchanged, using digital currency: your DNA

Harvard University genetics guru George Church — one of the scientists at the forefront of the CRISPR genetic engineering revolution — announced on Wednesday a start-up, Nebula Genomics, that will use the blockchain to not only allow individuals to share their personal genome for research purposes, but retain ownership and monetize their DNA through trading of a custom digital currency.

The genomics revolution has been exponentially advanced by drastic reductions in cost. As Nebula noted in a white paper explaining its business model, the first human genome was sequenced in 2001 at a cost of $3 billion. Today, human genome sequencing costs less than $1,000, and in a few years the price will drop below $100

In fact, some big Silicon Valley start-ups, led by 23andMe, have capitalized on this rapid advance and already offer personal DNA testing kits for around $100 (sometimes with discounts even less)

Nebula took direct aim at 23andMe in its white paper, and one reason why it can offer genetic testing for less

“Today, 23andMe (23andme.com) and Ancestry (ancestry.com) are the two leading personal genomics companies. Both use DNA microarray-based genotyping for their genetic tests. It is an outdated and significantly less powerful alternative to DNA sequencing. Instead of sequencing continuous stretches of DNA, genotyping identifies single letters spaced at approximately regular intervals across the genome. …
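
To make the distinction the white paper draws a little more concrete, here is a toy Python contrast between the two read-outs: microarray genotyping samples single letters at pre-chosen, roughly regularly spaced positions, while sequencing reads the continuous stretch. The sequence and the spacing are invented purely for illustration.

```python
genome = "ACGTTAGCCATGGATCCGATTACAGGCTA"        # pretend stretch of DNA

# Microarray-style genotyping: read single letters at pre-chosen, spaced positions.
snp_positions = range(0, len(genome), 7)
genotype = {pos: genome[pos] for pos in snp_positions}
print("genotyping sees: ", genotype)            # only a handful of letters

# Sequencing: read the whole continuous stretch.
print("sequencing sees:", genome)               # every letter
```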

Outdated genetic tests? Interesting, eh? Zoë Corbyn provides more information about Church’s plans in her February 18, 2018 article for the Guardian,

“Under the current system, personal genomics companies effectively own your personal genomics data, and you don’t see any benefit at all,” says Grishin [Dennis Grishin, Nebula co-founder]. “We want to eliminate the middleman.”

Although the aim isn’t to provide a get-rich-quick scheme, the company believes there is potential for substantial returns. Though speculative, its modelling suggests that someone in the US could earn up to 50 times the cost of sequencing their genome – about $50,000 at current rates – taking into account both what could be made from a lifetime of renting out their genetic data, and reductions in medical bills if the results throw up a potentially preventable disease

The startup also thinks it can solve the problem of the dearth of genetic data researchers have to draw on, due to individuals – put off by cost or privacy concerns – not getting sequenced.

Payouts when you grant access to your genome would come in the form of Nebula tokens, the company’s cryptocurrency, and companies would need to buy tokens from the startup to pay people whose data they wanted to access. Though the value of a token is yet to be set and the number of tokens defined, it might, for example, take one Nebula token to get your genome sequenced. An individual new to the system could begin to earn fractions of a token by taking part in surveys about their health posted by prospective data buyers. When someone had earned enough, they could get sequenced and begin renting out their data and amassing tokens. Alternatively, if an individual wasn’t yet sequenced they may find data buyers willing to pay for or subsidise their genome sequencing in exchange for access to it. “Potentially you wouldn’t have to pay out of pocket for the sequencing of your genome,” says Grishin.

In all cases, stress Grishin and Obbad [Kamal Obbad, Nebula co-founder], the sequence would belong to the individual, so they could rent it out over and over, including to multiple companies simultaneously. And the data buyer would never take ownership or possession of it – rather, it would be stored by the individual (for example in their computer or on their Dropbox account) with Nebula then providing a secure computation platform on which the data buyer could compute on the data. “You stay in control of your data and you can share it securely with who you want to,” explains Obbad. Nebula makes money not by taking any transaction fee but by being a participant providing computing and storage services. The cryptocurrency would be able to be cashed out for real money via existing cryptocurrency exchanges.
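
Since neither the token price nor the supply had been fixed, the flow described above can only be sketched in the abstract. Here is a toy ledger in Python using invented numbers (0.25 token per survey, 1 token for sequencing, 0.5 token per rental); none of these figures come from Nebula, and the class is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Member:
    balance: float = 0.0      # Nebula tokens held
    sequenced: bool = False

    def complete_survey(self, reward=0.25):
        self.balance += reward

    def get_sequenced(self, price=1.0):
        if self.balance >= price:
            self.balance -= price
            self.sequenced = True

    def rent_genome(self, fee=0.5):
        # A data buyer pays in tokens; the genome itself stays in the member's
        # own storage and is only computed on, per the description above.
        if self.sequenced:
            self.balance += fee

m = Member()
for _ in range(4):         # four surveys earn one token in total
    m.complete_survey()
m.get_sequenced()          # spend that token on sequencing
m.rent_genome()            # first rental to a data buyer
m.rent_genome()            # the same data can be rented again and again
print(f"balance: {m.balance:.1f} tokens; sequenced: {m.sequenced}")
```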

Hopefully, Luna and Nebula, as well as any competitors in this race to allow individuals to monetize their own DNA, will have excellent security.

For the curious, you can find Luna here and Nebula here.

Note: I am not endorsing either company or any others mentioned here. This posting is strictly informational.