
Overview of fusion energy scene

It’s funny how you think you know something and then realize you don’t. I’ve been hearing about cold fusion/fusion energy for years but never really understood what the term meant. So, this post includes an explanation, as well as an overview, and a Cold Fusion Rap to ‘wrap’ it all up. (Sometimes I cannot resist a pun.)

Fusion energy explanation (1)

The Massachusetts Institute of Technology (MIT) has a Climate Portal where fusion energy is explained,

Fusion energy is the source of energy at the center of stars, including our own sun. Stars, like most of the universe, are made up of hydrogen, the simplest and most abundant element in the universe, created during the big bang. The center of a star is so hot and so dense that the immense pressure forces hydrogen atoms together. These atoms are forced together so strongly that they create new atoms entirely—helium atoms—and release a staggering amount of energy in the process. This energy is called fusion energy.

More energy than chemical energy

Fusion energy, like fossil fuels, is a form of stored energy. But fusion can create 20 to 100 million times more energy than the chemical reaction of a fossil fuel. Most of the mass of an atom, 99.9 percent, is contained at an atom’s center—inside of its nucleus. The ratio of this matter to the empty space in an atom is almost exactly the same ratio of how much energy you release when you manipulate the nucleus. In contrast, a chemical reaction, such as burning coal, rearranges the atoms through heat, but doesn’t alter the atoms themselves, so we don’t get as much energy.

Making fusion energy

For scientists, making fusion energy means recreating the conditions of stars, starting with plasma. Plasma is the fourth state of matter, after solids, liquids and gases. Ice is an example of a solid. When heated up, it becomes a liquid. Place that liquid in a pot on the stove, and it becomes a gas (steam). If you take that gas and continue to make it hotter, at around 10,000 degrees Fahrenheit (~6,000 Kelvin), it will change from a gas to the next phase of matter: plasma. Ninety-nine percent of the mass in the universe is in the plasma state, since almost the entire mass of the universe is in super hot stars that exist as plasma.

To make fusion energy, scientists must first build a steel chamber and create a vacuum, like in outer space. The next step is to add hydrogen gas. The gas particles are charged to produce an electric current and then surrounded and contained with an electromagnetic force; the hydrogen is now a plasma. This plasma is then heated to about 100 million degrees and fusion energy is released.
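As a rough back-of-envelope check on that “20 to 100 million times” claim (my arithmetic, not MIT’s): a deuterium–tritium fusion reaction releases about 17.6 MeV from roughly 5 atomic mass units of fuel, while burning one carbon atom releases about 4 eV from roughly 44 atomic mass units of reactants (counting the oxygen),

\[
\frac{E_{\text{fusion}}/m}{E_{\text{chem}}/m} \approx \frac{17.6\times10^{6}\ \text{eV}\,/\,5\ \text{u}}{4.1\ \text{eV}\,/\,44\ \text{u}} \approx 4\times10^{7},
\]

that is, tens of millions of times more energy per unit mass, which sits comfortably inside the quoted range.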

Fusion energy explanation (2)

A Vancouver-based company, General Fusion, offers an explanation of how they have approached making fusion energy a reality,

How It Works: Plasma Injector Technology at General Fusion from General Fusion on Vimeo.

Following the announcement that a General Fusion demonstration plant would be built in the UK (see the June 17, 2021 General Fusion news release), there’s a more recent announcement of an agreement with the UK Atomic Energy Authority (UKAEA) to commercialize the technology, from an October 17, 2022 General Fusion news release,

Today [October 17, 2022], General Fusion and the UKAEA kick off projects to advance the commercialization of magnetized target fusion energy as part of an important collaborative agreement. With these unique projects, General Fusion will benefit from the vast experience of the UKAEA’s team. The results will hone the design of General Fusion’s demonstration machine being built at the Culham Campus, part of the thriving UK fusion cluster. Ultimately, the company expects the projects will support its efforts to provide low-cost and low-carbon energy to the electricity grid.

General Fusion’s approach to fusion maximizes the reapplication of existing industrialized technologies, bypassing the need for expensive superconducting magnets, significant new materials, or high-power lasers. The demonstration machine will create fusion conditions in a power-plant-relevant environment, confirming the performance and economics of the company’s technology.

“The leading-edge fusion researchers at UKAEA have proven experience building, commissioning, and successfully operating large fusion machines,” said Greg Twinney, Chief Executive Officer, General Fusion. “Partnering with UKAEA’s incredible team will fast-track work to advance our technology and achieve our mission of delivering affordable commercial fusion power to the world.”

“Fusion energy is one of the greatest scientific and engineering quests of our time,” said Ian Chapman, UKAEA CEO. “This collaboration will enable General Fusion to benefit from the ground-breaking research being done in the UK and supports our shared aims of making fusion part of the world’s future energy mix for generations to come.”

I last wrote about General Fusion in a November 3, 2021 posting about the company’s move (?) to Sea Island, Richmond,

I first wrote about General Fusion in a December 2, 2011 posting titled: Burnaby-based company (Canada) challenges fossil fuel consumption with nuclear fusion. (For those unfamiliar with the Vancouver area, there’s the city of Vancouver and there’s Metro Vancouver, which includes the city of Vancouver and other municipalities in the region. Burnaby is part of Metro Vancouver; General Fusion is moving to Sea Island (near Vancouver Airport), in Richmond, which is also in Metro Vancouver.) Kenneth Chan’s October 20, 2021 article for the Daily Hive gives more detail about General Fusion’s new facilities (Note: A link has been removed),

The new facility will span two buildings at 6020 and 6082 Russ Baker Way, near YVR’s [Vancouver Airport] South Terminal. This includes a larger building previously used for aircraft engine maintenance and repair.

The relocation process could start before the end of 2021, allowing the company to more than quadruple its workforce over the coming years. Currently, it employs about 140 people.

The Sea Island [in Richmond] facility will house its corporate offices, primary fusion technology development division, and many of its engineering laboratories. This new facility provides General Fusion with the ability to build a new demonstration prototype to support the commercialization of its magnetized target fusion technology.

As of the date of this posting, I have not been able to confirm the move. The company’s Contact webpage lists an address in Burnaby, BC for its headquarters.

The overview

Alex Pasternack, in an August 17, 2022 article for Fast Company (The frontrunners in the trillion-dollar race for limitless fusion power), provides an overview of the international race with a very, very strong emphasis on the US scene (Note: Links have been removed),

With energy prices on the rise, along with demands for energy independence and an urgent need for carbon-free power, plans to walk away from nuclear energy are now being revised in Japan, South Korea, and even Germany. Last month, Europe announced green bonds for nuclear, and the U.S., thanks to the Inflation Reduction Act, will soon devote millions to new nuclear designs, incentives for nuclear production and domestic uranium mining, and, after years of paucity in funding, cash for fusion.

The new investment comes as fusion—long considered a pipe dream—has attracted real money from big venture capital and big companies, who are increasingly betting that abundant, cheap, clean nuclear will be a multi-trillion dollar industry. Last year, investors like Bill Gates and Jeff Bezos injected a record $3.4 billion into firms working on the technology, according to Pitchbook. One fusion firm, Seattle-based Helion, raised a record $500 million from Sam Altman and Peter Thiel. That money has certainly supercharged the nuclear sector: The Fusion Industry Association says that at least 33 different companies were now pursuing nuclear fusion, and predicted that fusion would be connected to the energy grid sometime in the 2030s.

… What’s not a joke is that we have about zero years to stop powering our civilization with earth-warming energy. The challenge with fusion is to achieve net energy gain, where the energy produced by a fusion reaction exceeds the energy used to make it. One milestone came quietly this month, when a team of researchers at the National Ignition Facility at Lawrence Livermore National Lab in California announced that an experiment last year had yielded over 1.3 megajoules (MJ) of energy, setting a new world record for energy yield for a nuclear fusion experiment. The experiment also achieved scientific ignition for the first time in history: after applying enough heat using an arsenal of lasers, the plasma became self-heating. (Researchers have since been trying to replicate the result, so far without success.)

On a growing campus an hour outside of Boston, the MIT spinoff Commonwealth Fusion Systems is building their first machine, SPARC, with a goal of producing power by 2025. “You’ll push a button,” CEO and cofounder Bob Mumgaard told the Khosla Ventures CEO Summit this summer, “and for the first time on earth you will make more power out than in from a fusion plasma. That’s about 200 million degrees—you know, cooling towers will have a bunch of steam go out of them—and you let your finger off the button and it will stop, and you push the button again and it will go.” With an explosion in funding from investors including Khosla, Bill Gates, George Soros, Emerson Collective and Google to name a few—they raised $1.8 billion last year alone—CFS hopes to start operating a prototype in 2025.

Like the three-decade-old ITER project in France, set for operation in 2025, Commonwealth and many other companies will try to reach net energy gain using a machine called a tokamak, a bagel-shaped device filled with super-hot plasma, heated to about 150 million degrees, within which hydrogen atoms can fuse and release energy. To control that hot plasma, you need to build a very powerful magnetic field. Commonwealth’s breakthrough was tape—specifically, a high-temperature-superconducting steel tape coated with a compound called yttrium-barium-copper oxide. When a prototype was first made commercially available in 2009, Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, ordered as much as he could. With Mumgaard and a team of students, his lab used coils of the stuff to build a new kind of superconducting magnet, and a prototype reactor named ARC, after Tony Stark’s energy source. Commonwealth was born in 2015.

Southern California-based TAE Technologies has raised a whopping $1.2 billion since it was founded in 1998, and $250 million in its latest round. The round, announced in July, was led by Chevron’s venture arm, Google, and Sumitomo, a Tokyo-based holding company that aims to deploy fusion power in the Asia-Pacific market. TAE’s approach, which involves creating a fusion reaction at incredibly high heat, has a key advantage. Whereas ITER uses the hydrogen isotopes deuterium and tritium, an extremely rare element that must be specially created from lithium—and that produces as a byproduct radioactive-free neutrons—TAE’s linear reactor is completely non-radioactive, because it relies on hydrogen and boron, two abundant, naturally-occurring elements that react to produce only helium.

General Atomics, of San Diego, California, has the largest tokamak in the U.S. Its powerful magnetic chamber, called the DIII-D National Fusion Facility, or just “D-three-D,” now features a Toroidal Field Reversing Switch, which allows for the redirection of 120,000 amps of the current that power the primary magnetic field. It’s the only tokamak in the world that allows researchers to switch directions of the magnetic fields in minutes rather than hours. Another new upgrade, a traveling-wave antenna, allows physicists to inject high-powered “helicon” radio waves into DIII-D plasmas so fusion reactions occur much more powerfully and efficiently.

“We’ve got new tools for flexibility and new tools to help us figure out how to make that fusion plasma just keep going,” Richard Buttery, director of the project, told the San Diego Union-Tribune in January. The company is also behind eight of the magnet modules at the heart of the ITER facility, including its wild Central Solenoid — the world’s most powerful magnet — in a kind of scaled up version of the California machine.

But like an awful lot in fusion, ITER has been hampered by cost overruns and delays, with “first plasma” not expected to occur in 2025 as previously expected due to global pandemic-related disruptions. Some have complained that the money going to ITER has distracted from other more practical energy projects—the latest price tag is $22 billion—and others doubt if the project can ever produce net energy gain.

Based in Canada, General Fusion is backed by Jeff Bezos and building on technology originally developed by the U.S. Navy and explored by Russian scientists for potential use in weapons. Inside the machine, molten metal is spun to create a cavity, and pumped with pistons that push the metal inward to form a sphere. Hydrogen, heated to super-hot temperatures and held in place by a magnetic field, fills the sphere to create the reaction. Heat transferred to the metal can be turned into steam to drive a turbine and generate electricity. As former CEO Christofer Mowry told Fast Company last year, “to re-create a piece of the sun on Earth, as you can imagine, is very, very challenging.” Like many fusion companies, GF depends on modern supercomputers and advanced modeling and computational techniques to understand the science of plasma physics, as well as modern manufacturing technologies and materials.

“That’s really opened the door not just to being able to make fusion work but to make it work in a practical way,” Mowry said. This has been difficult to make work, but with a demonstration center it announced last year in Culham, England, GF isn’t aiming to generate electricity but to gather the data needed to later build a commercial pilot plant that could—and to generate more interest in fusion.

Magneto-Inertial Fusion Technologies, or MIFTI, of Tustin, Calif., founded by researchers from the University of California, Irvine, is developing a reactor that uses what’s known as a Staged Z-Pinch approach. A Z-Pinch design heats, confines, and compresses plasma using an intense, pulsed electrical current to generate a magnetic field that could reduce instabilities in the plasma, allowing fusion to persist for longer periods of time. But only recently have MIFTI’s scientists been able to overcome the instability problems, the company says, thanks to software made available to them at UC-Irvine by the U.S. Air Force. …

Princeton Fusion Systems of Plainsboro, New Jersey, is a small business focused on developing small, clean fusion reactors for both terrestrial and space applications. A spinoff of Princeton Satellite Systems, which specializes in spacecraft control, the company’s Princeton FRC reactor is built upon 15 years of research at the Princeton Plasma Physics Laboratory, funded primarily by the U.S. DOE and NASA, and is designed to eventually provide between 1 and 10 megawatts of power in off-grid locations and in modular power plants, “from remote industrial applications to emergency power after natural disasters to off-world bases on the moon or Mars.” The concept uses radio-frequency electromagnetic fields to generate and sustain a plasma formation called a Field-Reversed Configuration (FRC) inside a strong magnetic bottle. …

Tokamak Energy, a U.K.-based company named after the popular fusion device, announced in July that its ST-40 tokamak reactor had reached the 100 million Celsius threshold for commercially viable nuclear fusion. The achievement was made possible by a proprietary design built on a spherical, rather than donut, shape. This means that the magnets are closer to the plasma stream, allowing for smaller and cheaper magnets to create even stronger magnetic fields. …

Based in Pasadena, California, Helicity Space is developing a propulsion and power technology based on a specialized magneto inertial fusion concept. The system, a spin on what fellow fusion engineer, Seattle-based Helion is doing, appears to use twisted compression coils, like a braided rope, to achieve a known phenomenon called the Magnetic Helicity. … According to ZoomInfo and Linkedin, Helicity has over $4 million in funding and up to 10 employees, all aimed, the company says, at “enabling humanity’s access to the solar system, with a Helicity Drive-powered flight to Mars expected to take two months, without planetary alignment.”
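Since “net energy gain” anchors so much of the article, it’s worth pinning down the figure of merit: the gain Q, the ratio of fusion energy out to heating energy in. NIF’s lasers delivered roughly 1.9 MJ to the target in the shot Pasternack mentions, so (my arithmetic, not the article’s)

\[
Q = \frac{E_{\text{fusion}}}{E_{\text{input}}} \approx \frac{1.3\ \text{MJ}}{1.9\ \text{MJ}} \approx 0.7,
\]

a world record at the time, but still short of scientific breakeven (Q = 1), let alone the Q ≫ 1 a power plant would need.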

ITER (International Thermonuclear Experimental Reactor), meaning “the way” or “the path” in Latin and mentioned in Pasternack’s article, dates its history with fusion back to about 1978 when cold fusion was the ‘hot’ topic. (You can read more here in the ITER Wikipedia entry.)

For more about the various approaches to fusion energy, Pasternack’s August 17, 2022 article (The frontrunners in the trillion-dollar race for limitless fusion power) provides details. I wish there had been a little more about efforts in Japan, South Korea, and other parts of the world. Pasternack’s singular focus on the US, with a little of Canada and the UK seemingly thrown into the mix to provide an international flavour, seems a little myopic.

Fusion rap

In an August 30, 2022 Baba Brinkman announcement (received via email), which gave an extensive update on Brinkman’s activities, there was this,

And the other new topic, which was surprisingly fun to explore, is cold fusion also known as “Low Energy Nuclear Reactions” which you may or may not have a strong opinion about, but if you do I imagine you probably think the technology is either bunk or destined to save the world.

That makes for an interesting topic to explore in rap songs! And fortunately last month I had the pleasure of performing for the cream of the LENR crop at the 24th International Conference on Cold Fusion, including rap ups and two new songs about the field, one very celebratory (for the insiders), and one cautiously optimistic (as an outreach tool).

You can watch “Cold Fusion Renaissance” and “You Must LENR” [Low Energy Nuclear Reactions, sometimes also Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)] for yourself to determine which video is which, and also enjoy this article in Infinite Energy Magazine which chronicles my whole cold fusion rap saga.

Here’s one of the rap videos mentioned in Brinkman’s email,

Enjoy!



STEM (science, technology, engineering and math) brings life to the global hit television series “The Walking Dead” and a Canadian AI initiative for women and diversity

I stumbled across this June 8, 2022 AMC Networks news release in the last place I expected to see a STEM (science, technology, engineering, and mathematics) announcement, i.e., a self-described global entertainment company’s website,

AMC NETWORKS CONTENT ROOM TEAMS WITH THE AD COUNCIL TO EMPOWER GIRLS IN STEM, FEATURING “THE WALKING DEAD”

AMC Networks Content Room and the Ad Council, a non-profit and leading producer of social impact campaigns for 80 years, announced today a series of new public service advertisements (PSAs) that will highlight the power of girls in STEM (science, technology, engineering and math) against the backdrop of the global hit series “The Walking Dead.”  In the spots, behind-the-scenes talent of the popular franchise, including Director Aisha Tyler, Costume Designer Vera Chow and Art Director Jasmine Garnet, showcase how STEM is used to bring the post-apocalyptic world of “The Walking Dead” to life on screen.  Created by AMC Networks Content Room, the PSAs are part of the Ad Council’s national She Can STEM campaign, which encourages girls, trans youth and non-binary youth around the country to get excited about and interested in STEM.

The new creative consists of TV spots and custom videos created specifically for TikTok and Instagram.  The spots also feature Gitanjali Rao, a 16-year-old scientist, inventor and activist, interviewing Tyler, Chow and Garnet discussing how they and their teams use STEM in the production of “The Walking Dead.”  Using before and after visuals, each piece highlights the unique and unexpected uses of STEM in the making of the series.  In addition to being part of the larger Ad Council campaign, the spots will be available on “The Walking Dead’s” social media platforms, including Facebook, Instagram, Twitter and YouTube pages, and across AMC Networks linear channels and digital platforms.

PSA:   https://youtu.be/V20HO-tUO18

Social: https://youtu.be/LnDwmZrx6lI

Said Kim Granito, EVP of AMC Networks Content Room: “We are thrilled to partner with the Ad Council to inspire young girls in STEM through the unexpected backdrop of ‘The Walking Dead.’  Over the last 11 years, this universe has been created by an array of insanely talented women that utilize STEM every day in their roles.  This campaign will broaden perceptions of STEM beyond the stereotypes of lab coats and beakers, and hopefully inspire the next generation of talented women in STEM.  Aisha Tyler, Vera Chow and Jasmine Garnet were a dream to work with and their shared enthusiasm for this mission is inspiring.”

“Careers in STEM are varied and can touch all aspects of our lives. We are proud to partner with AMC Networks Content Room on this latest work for the She Can STEM campaign. With it, we hope to inspire young girls, non-binary youth, and trans youth to recognize that their passion for STEM can impact countless industries – including the entertainment industry,” said Michelle Hillman, Chief Campaign Development Officer, Ad Council.

Women make up nearly half of the total college-educated workforce in the U.S., but they only constitute 27% of the STEM workforce, according to the U.S. Census Bureau. Research shows that many girls lose interest in STEM as early as middle school, and this path continues through high school and college, ultimately leading to an underrepresentation of women in STEM careers.  She Can STEM aims to dismantle the intimidating perceived barrier of STEM fields by showing girls, non-binary youth, and trans youth how fun, messy, diverse and accessible STEM can be, encouraging them to dive in, no matter where they are in their STEM journey.

Since the launch of She Can STEM in September 2018, the campaign has been supported by a variety of corporate, non-profit and media partners. The current funder of the campaign is IF/THEN, an initiative of Lyda Hill Philanthropies.  Non-profit partners include Black Girls Code, ChickTech, Girl Scouts of the USA, Girls Inc., Girls Who Code, National Center for Women & Information Technology, The New York Academy of Sciences and Society of Women Engineers.

About AMC Networks Inc.

AMC Networks (Nasdaq: AMCX) is a global entertainment company known for its popular and critically-acclaimed content. Its brands include targeted streaming services AMC+, Acorn TV, Shudder, Sundance Now, ALLBLK, and the newest addition to its targeted streaming portfolio, the anime-focused HIDIVE streaming service, in addition to AMC, BBC AMERICA (operated through a joint venture with BBC Studios), IFC, SundanceTV, WE tv and IFC Films. AMC Studios, the Company’s in-house studio, production and distribution operation, is behind some of the biggest titles and brands known to a global audience, including The Walking Dead, the Anne Rice catalog and the Agatha Christie library.  The Company also operates AMC Networks International, its international programming business, and 25/7 Media, its production services business.

About Content Room

Content Room is AMC Networks’ award-winning branded entertainment studio that collaborates with advertising partners to build brand stories and create bespoke experiences across an expanding range of digital, social, and linear platforms. Content Room enables brands to fully tap into the company’s premium programming, distinct IP, deep talent roster and filmmaking roots through an array of creative partnership opportunities— from premium branded content and integrations— to franchise and gaming extensions.

Content Room is also home to the award-winning digital content studio which produces dozens of original series annually, which expands popular AMC Networks scripted programming for both fans and advertising partners by leveraging the built-in massive series and talent fandoms.

The Ad Council
The Ad Council is where creativity and causes converge. The non-profit organization brings together the most creative minds in advertising, media, technology and marketing to address many of the nation’s most important causes. The Ad Council has created many of the most iconic campaigns in advertising history. Friends Don’t Let Friends Drive Drunk. Smokey Bear. Love Has No Labels.

The Ad Council’s innovative social good campaigns raise awareness, inspire action and save lives. To learn more, visit AdCouncil.org, follow the Ad Council’s communities on Facebook and Twitter, and view the creative on YouTube.

You can find the ‘She Can STEM’ Ad Council initiative here.

Canadian women and the AI4Good Lab

A June 9, 2022 posting on the Borealis AI website describes an artificial intelligence (AI) initiative designed to encourage women to enter the field,

The AI4Good Lab is one of those programs that creates exponential opportunities. As the leading Canadian AI-training initiative for women-identified STEM students, the lab helps encourage diversity in the field of AI. Participants work together to use AI to solve a social problem, delivering untold benefits to their local communities. And they work shoulder-to-shoulder with other leaders in the field of AI, building their networks and expanding the ecosystem.

At this year’s [2022] AI4Good Lab Industry Night, program partners – like Borealis AI, RBC [Royal Bank of Canada], DeepMind, Ivado and Google – had an opportunity to (virtually) meet the nearly 90  participants of this year’s program. Many of the program’s alumni were also in attendance. So, too, were representatives from CIFAR [Canadian Institute for Advanced Research], one of Canada’s leading global research organizations.

Industry participants – including Dr. Eirene Seiradaki, Director of Research Partnerships at Borealis AI, Carey Mende-Gibson, RBC’s Location Intelligence ambassador, and Lucy Liu, Director of Data Science at RBC – talked with attendees about their experiences in the AI industry, discussed career opportunities and explored various career paths that the participants could take in the industry. For the entire two hours, our three tables  and our virtually cozy couches were filled to capacity. It was only after the end of the event that we had the chance to exchange visits to the tables of our partners from CIFAR and AMII [Alberta Machine Intelligence Institute]. Eirene did not miss the opportunity to catch up with our good friend, Warren Johnston, and hear first-hand the news from AMII’s recent AI Week 2022.

Borealis AI is funded by the Royal Bank of Canada. Somebody wrote this for the homepage (presumably tongue in cheek),

All you can bank on.

The AI4Good Lab can be found here,

The AI4Good Lab is a 7-week program that equips women and people of marginalized genders with the skills to build their own machine learning projects. We emphasize mentorship and curiosity-driven learning to prepare our participants for a career in AI.

The program is designed to open doors for those who have historically been underrepresented in the AI industry. Together, we are building a more inclusive and diverse tech culture in Canada while inspiring the next generation of leaders to use AI as a tool for social good.

The most recent programme ran May 3 – June 21, 2022 in Montréal, Toronto, and Edmonton.

There are a number of AI for Good initiatives, including this one from the International Telecommunication Union (a United Nations agency).

For the curious, I have a May 10, 2018 post “The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence” where I ‘examine’ RBC and its AI initiatives.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, and Gatebox, amongst others. It would have been interesting to find out which segments were produced by the CBC.

David Suzuki’s (programme host) script was well written, and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts, who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It’s an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: in one chat, her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted that Akihiko’s ‘wife’ is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over which you have control. But perhaps that’s alright, since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?”, we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post, ‘Finishing Beethoven’s unfinished 10th Symphony,’ for more information on the technical perspective of Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University. Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI researchers, and musicians collaborated to finish it.)

The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: a set of probabilities, with an algorithm choosing each next note according to how likely it is to follow the ones before.

Another artist was also represented in the programme: puzzlingly, the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than as a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
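For readers who want a concrete picture of what “tuning a machine learning algorithm” on a body of writing involves, here is a minimal sketch of the generic fine-tune-then-generate pattern. To be clear, this is not Google’s actual pipeline; the GPT-2 model, the corpus.txt file path, and the prompt are all stand-ins of my own.

```python
# A minimal sketch of the fine-tune-then-generate pattern (assumptions:
# "corpus.txt" stands in for the author's collected writing; GPT-2 stands
# in for whatever model the Google team actually used).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments, pipeline)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chop the corpus into fixed-length blocks for causal language modelling.
train_data = TextDataset(tokenizer=tokenizer, file_path="corpus.txt",
                         block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style_model", num_train_epochs=3),
    train_dataset=train_data,
    data_collator=collator,
)
trainer.train()  # nudges the model's weights toward the corpus's style

# Sample candidate short-form statements; a human curates the results.
generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
for candidate in generate("The class of 2030 will", max_length=30,
                          do_sample=True, num_return_sequences=5):
    print(candidate["generated_text"])
```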

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1903 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. (More about computers and chatbots at the end of this posting.) It’s hard to know whether this was an interview situation where he wasn’t asked the question or whether the editors decided against including his answer.
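For comparison, the cut-up itself is mechanical enough to sketch in a few lines of code. This is my own toy illustration (the four-word slice length is an arbitrary choice), not anything used in the Coupland project:

```python
import random

def cut_up(*texts, slice_len=4, seed=None):
    """Burroughs-style cut-up: slice texts into short word runs, shuffle, rejoin."""
    rng = random.Random(seed)
    words = [word for text in texts for word in text.split()]
    slices = [words[i:i + slice_len] for i in range(0, len(words), slice_len)]
    rng.shuffle(slices)
    return " ".join(word for s in slices for word in s)

print(cut_up("All writing is in fact cut-ups.",
             "A collage of words read heard overheard.", seed=1))
```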

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by SoftBank Robotics, part of SoftBank, a multinational Japanese conglomerate [see a June 28, 2021 article by Ian Carlos Campbell for The Verge], whose entire management team is male according to its About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). (Note: IVADO is not particularly relevant to what’s being discussed in this post.)

As for Mila, Google’s official Canada blog notes a $4.5M grant to the institution in a November 21, 2016 posting,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 2022.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, a large language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, as mentioned in the programme, constitutes an environmental issue. I also would have expected a geneticist like Suzuki to have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
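In its non-derogatory, computer-science sense, the technique is easy to picture: a game agent is deliberately made to blunder some fraction of the time so that human opponents can enjoy beating it. A minimal sketch (the 30% blunder rate and the move names are invented for illustration):

```python
import random

def dumbed_down_move(best_move, legal_moves, blunder_rate=0.3):
    """Artificial stupidity: deliberately discard the engine's best move sometimes."""
    if random.random() < blunder_rate:
        return random.choice(legal_moves)  # an intentional, beatable error
    return best_move

# e.g., a toy game engine that plays its best move only ~70% of the time
print(dumbed_down_move("centre", ["centre", "corner", "edge"]))
```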

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
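
For the technically curious, the DOCTOR script boiled down to pattern matching plus pronoun ‘reflection’. Here’s a minimal Python sketch of that idea; the rules and reflections below are my own illustrative inventions, not Weizenbaum’s actual script,

import re

# A few DOCTOR-style rules: (pattern, response template). "{0}" echoes
# the captured text with its pronouns reflected, as ELIZA's scripts did.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones ("my job" -> "your job")
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?

Trivial as it looks, this reflect-and-echo trick was enough to convince many early users they were understood, which is part of why the episode’s questions about machine feeling still resonate.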

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Lab) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing, as the words are sometimes used as synonyms and sometimes as distinct terms. We shift vocabulary like this all the time in all sorts of conversations, but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as, the importance of locations/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked earlier, I could find no track record for the accuracy of Pringle’s predictions, but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.
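
As an aside for readers who want to see how the ‘permanent receipt’ idea might work, here’s a toy Python sketch of an NFT-style ownership ledger. This is a drastic simplification (real NFTs live on a blockchain and follow standards such as Ethereum’s ERC-721), and the names and fields below are hypothetical, purely for illustration,

from dataclasses import dataclass, field

@dataclass
class Token:
    # Toy stand-in for an NFT: a unique ID bound to metadata and an owner
    token_id: int
    metadata: str  # e.g., a URI pointing at the virtual shirt's 3D assets
    owner: str

@dataclass
class Ledger:
    tokens: dict = field(default_factory=dict)

    def mint(self, token_id, metadata, owner):
        assert token_id not in self.tokens, "token IDs must be unique"
        self.tokens[token_id] = Token(token_id, metadata, owner)

    def transfer(self, token_id, seller, buyer):
        token = self.tokens[token_id]
        assert token.owner == seller, "only the current owner can transfer"
        token.owner = buyer

ledger = Ledger()
ledger.mint(1, "ipfs://virtual-shirt-assets", owner="alice")
ledger.transfer(1, seller="alice", buyer="bob")
print(ledger.tokens[1].owner)  # -> bob

The point of putting such a ledger on a blockchain rather than in one company’s database is that any platform (Metaverse Platforms B to Z, in The Verge’s example) can read the same record and render the shirt for its current owner.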

Facebook’s multiverse and other news

Since I started this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story with a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned Facebook is once again having some credibility issues according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international legal firm Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? 

Either way, many questions arise, including:

How should the different entities each display their own privacy notice to users? 
Or should this be done jointly? 
How and when should users’ consent be collected? 
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? 
What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as, VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content that can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration among Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create either a multiplicity of metaverses or a single metaverse that effectively replaces the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more are at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.

If you have time and the inclination, I encourage you to read King’s piece.
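
For anyone who’d like a hands-on feel for the classical side of the comparison, here’s a minimal Python sketch of simulated annealing on a geometrically frustrated magnet (an antiferromagnetic triangle, the simplest case). To be clear, this is the classical analogue, not D-Wave’s quantum annealing or the path-integral Monte Carlo used in the paper; temperature plays the role that quantum fluctuations play in the hardware, and the couplings are my own toy example,

import numpy as np

def simulated_annealing(J, h, steps=20_000, rng=np.random.default_rng(1)):
    # Minimize the Ising energy E = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
    # by cooling: accept uphill spin flips with probability exp(-dE/T).
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for t in range(steps):
        T = max(0.01, 3.0 * (1 - t / steps))  # linear cooling schedule
        i = rng.integers(n)
        dE = -2 * s[i] * (J[i] @ s + h[i])  # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

# Antiferromagnetic triangle: three spins that all want to disagree,
# the textbook example of geometric frustration
J = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
h = np.zeros(3)
print(simulated_annealing(J, h))  # one of the six degenerate ground states

In quantum annealing, the cooling schedule is replaced by a schedule that winds down quantum fluctuations, which is what King’s “reducing the quantumness” describes.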

Quantum supremacy

This supremacy refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces that the milestone has been reached,

Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.

An October 23, 2019 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, delves further into the work,

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of both the awe-inspiring and the bizarre properties of quantum mechanics.

“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”

‘A complex superposition state’

“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”

“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.

“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.

“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”

Looking ahead

With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.

“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”
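
For the curious, the cross-entropy benchmarking mentioned in the news release can be sketched in a few lines of Python. This toy version uses five qubits and a random state as a stand-in for the ideal output of a random circuit (the real experiment computed ideal probabilities for its 53 qubits on classical supercomputers); one common variant, the linear cross-entropy fidelity F = 2^n * mean(P(sample)) - 1, comes out near 1 for a faithful sampler and near 0 for uniform guessing,

import numpy as np

def linear_xeb(n_qubits, n_samples=100_000, rng=np.random.default_rng(0)):
    dim = 2 ** n_qubits
    # Stand-in for a random circuit's ideal output state: complex Gaussian
    # amplitudes give Porter-Thomas-like probabilities, as in the experiment
    amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    probs = np.abs(amps) ** 2
    probs /= probs.sum()
    # A faithful "quantum computer" samples bitstrings from the ideal
    # distribution, so its samples land on high-probability bitstrings
    samples = rng.choice(dim, size=n_samples, p=probs)
    f_quantum = dim * probs[samples].mean() - 1
    # A uniform random guesser has no idea where the probability mass is
    guesses = rng.integers(dim, size=n_samples)
    f_uniform = dim * probs[guesses].mean() - 1
    return f_quantum, f_uniform

print(linear_xeb(5))  # roughly (1.0, 0.0)

The “2^53 possible states” Foxen mentions is also why the classical comparison is so expensive: a full state vector for 53 qubits holds 2^53, or roughly 9 x 10^15, complex amplitudes, far too many for the trick above to work directly at that scale.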

Here’s a link to and a citation for the paper,

Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages 505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date: 24 October 2019

This paper appears to be open access.

MXene-coated yarn for wearable electronics

There’s been a lot of talk about wearable electronics, specifically e-textiles, but nothing seems to have entered the marketplace. Scaling up your lab discoveries for industrial production can be quite problematic. From an October 10, 2019 news item on ScienceDaily,

Producing functional fabrics that perform all the functions we want, while retaining the characteristics of fabric we’re accustomed to is no easy task.

Two groups of researchers at Drexel University — one, who is leading the development of industrial functional fabric production techniques, and the other, a pioneer in the study and application of one of the strongest, most electrically conductive super materials in use today — believe they have a solution.

They’ve improved a basic element of textiles: yarn. By adding technical capabilities to the fibers that give textiles their character, fit and feel, the team has shown that it can knit new functionality into fabrics without limiting their wearability.

An October 10, 2019 Drexel University news release (also on EurekAlert), which originated the news item, details the proposed solution (pun! as you’ll see in the video following this excerpt),

In a paper recently published in the journal Advanced Functional Materials, the researchers, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, and Genevieve Dion, an associate professor in Westphal College of Media Arts & Design and director of Drexel’s Center for Functional Fabrics, showed that they can create a highly conductive, durable yarn by coating standard cellulose-based yarns with a type of conductive two-dimensional material called MXene.

Hitting snags

“Current wearables utilize conventional batteries, which are bulky and uncomfortable, and can impose design limitations to the final product,” they write. “Therefore, the development of flexible, electrochemically and electromechanically active yarns, which can be engineered and knitted into full fabrics provide new and practical insights for the scalable production of textile-based devices.”

The team reported that its conductive yarn packs more conductive material into the fibers and can be knitted by a standard industrial knitting machine to produce a textile with top-notch electrical performance capabilities. This combination of ability and durability stands apart from the rest of the functional fabric field today.

Most attempts to turn textiles into wearable technology use stiff metallic fibers that alter the texture and physical behavior of the fabric. Other attempts to make conductive textiles using silver nanoparticles and graphene and other carbon materials raise environmental concerns and come up short on performance requirements. And the coating methods that are successfully able to apply enough material to a textile substrate to make it highly conductive also tend to make the yarns and fabrics too brittle to withstand normal wear and tear.

“Some of the biggest challenges in our field are developing innovative functional yarns at scale that are robust enough to be integrated into the textile manufacturing process and withstand washing,” Dion said. “We believe that demonstrating the manufacturability of any new conductive yarn during experimental stages is crucial. High electrical conductivity and electrochemical performance are important, but so are conductive yarns that can be produced by a simple and scalable process with suitable mechanical properties for textile integration. All must be taken into consideration for the successful development of the next-generation devices that can be worn like everyday garments.”

The winning combination

Dion has been a pioneer in the field of wearable technology, drawing on her background in fashion and industrial design to produce new processes for creating fabrics with new technological capabilities. Her work has been recognized by the Department of Defense, which included Drexel, and Dion, in its Advanced Functional Fabrics of America effort to make the country a leader in the field.

She teamed with Gogotsi, who is a leading researcher in the area of two-dimensional conductive materials, to approach the challenge of making a conductive yarn that would hold up to knitting, wearing and washing.

Gogotsi’s group was part of the Drexel team that discovered highly conductive two-dimensional materials, called MXenes, in 2011 and have been exploring their exceptional properties and applications for them ever since. His group has shown that it can synthesize MXenes that mix with water to create inks and spray coatings without any additives or surfactants – a revelation that made them a natural candidate for making conductive yarn that could be used in functional fabrics. [Gogotsi’s work was featured here in a May 6, 2019 posting]

“Researchers have explored adding graphene and carbon nanotube coatings to yarn, our group has also looked at a number of carbon coatings in the past,” Gogotsi said. “But achieving the level of conductivity that we demonstrate with MXenes has not been possible until now. It is approaching the conductivity of silver nanowire-coated yarns, but the use of silver in the textile industry is severely limited due to its dissolution and harmful effect on the environment. Moreover, MXenes could be used to add electrical energy storage capability, sensing, electromagnetic interference shielding and many other useful properties to textiles.”

In its basic form, titanium carbide MXene looks like a black powder. But it is actually composed of flakes that are just a few atoms thick, which can be produced at various sizes. Larger flakes mean more surface area and greater conductivity, so the team found that it was possible to boost the performance of the yarn by infiltrating the individual fibers with smaller flakes and then coating the yarn itself with a layer of larger-flake MXene.
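
To make that flake-size point concrete, here's a rough back-of-envelope model of my own (it is not from the paper): treat a coated fiber as a chain of overlapping flakes, where every flake-to-flake junction adds contact resistance, so larger flakes mean fewer junctions over the same length and lower overall resistance. Both resistance values below are hypothetical placeholders,

```python
# Hypothetical numbers for illustration only -- not measured values.
FLAKE_R = 5.0      # ohms contributed by conduction across one flake
JUNCTION_R = 50.0  # ohms contributed by each flake-to-flake contact

def fiber_resistance(length_um, flake_size_um):
    # Model the coated fiber as flakes chained end to end.
    n_flakes = length_um / flake_size_um
    return n_flakes * FLAKE_R + (n_flakes - 1) * JUNCTION_R

for size_um in (0.5, 1.0, 5.0):
    r = fiber_resistance(1000, size_um)  # a 1 mm stretch of fiber
    print(f"{size_um} um flakes: {r:,.0f} ohms")
```

With the same length of fiber, the 5-micrometre flakes come out roughly ten times less resistive than the 0.5-micrometre ones, purely because there are fewer junctions to cross, which is the intuition behind coating the yarn's exterior with the larger flakes.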

Putting it to the test

The team created the conductive yarns from three common, cellulose-based yarns: cotton, bamboo and linen. They applied the MXene material via dip-coating, which is a standard dyeing method, before testing them by knitting full fabrics on an industrial knitting machine – the kind used to make most of the sweaters and scarves you’ll see this fall.

Each type of yarn was knit into three different fabric swatches using three different stitch patterns – single jersey, half gauge and interlock – to ensure that they are durable enough to hold up in any textile from a tightly knit sweater to a loose-knit scarf.

“The ability to knit MXene-coated cellulose-based yarns with different stitch patterns allowed us to control the fabric properties, such as porosity and thickness for various applications,” the researchers write.

To put the new threads to the test in a technological application, the team knitted some touch-sensitive textiles – the sort that are being explored by Levi’s and Yves Saint Laurent as part of Google’s Project Jacquard.

Not only did the MXene-based conductive yarns hold up against the wear and tear of the industrial knitting machines, but the fabrics produced survived a battery of tests to prove their durability. Tugging, twisting, bending and – most importantly – washing did not diminish the touch-sensing abilities of the yarn, the team reported – even after dozens of trips through the spin cycle.

Pushing forward

But the researchers suggest that the ultimate advantage of using MXene-coated conductive yarns to produce these special textiles is that all of the functionality can be seamlessly integrated into the textiles. So instead of having to add an external battery to power the wearable device, or wirelessly connect it to your smartphone, these energy storage devices and antennas would be made of fabric as well – an integration that, though literally seamed, is a much smoother way to incorporate the technology.

“Electrically conducting yarns are quintessential for wearable applications because they can be engineered to perform specific functions in a wide array of technologies,” they write.

Using conductive yarns also means that a wider variety of technological customization and innovations are possible via the knitting process. For example, “the performance of the knitted pressure sensor can be further improved in the future by changing the yarn type, stitch pattern, active material loading and the dielectric layer to result in higher capacitance changes,” according to the authors.
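
Since the authors mention capacitance changes, here's a quick worked example (mine, using the textbook parallel-plate formula, not values from the paper) of why pressing on a knitted capacitive sensor registers as a signal: squeezing the dielectric layer between two conductive-yarn electrodes shrinks the gap, and the capacitance rises in proportion,

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d.
# All dimensions below are made-up but plausible placeholders.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    return EPS0 * eps_r * area_m2 / gap_m

rest = capacitance(1e-4, 1.0e-3)     # 1 cm^2 pad, 1 mm dielectric
pressed = capacitance(1e-4, 0.5e-3)  # dielectric squeezed to 0.5 mm
print(f"rest: {rest * 1e12:.2f} pF, pressed: {pressed * 1e12:.2f} pF")
```

Halving the gap doubles the capacitance, so the readout electronics only need to detect that relative change; the tweaks the authors suggest (yarn type, stitch pattern, dielectric layer) are all ways of making that change larger.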

Dion’s team at the Center for Functional Fabrics is already putting this development to the test in a number of projects, including a collaboration with textile manufacturer Apex Mills – one of the leading producers of material for car seats and interiors. And Gogotsi suggests the next step for this work will be tuning the coating process to add just the right amount of conductive MXene material to the yarn for specific uses.

“With this MXene yarn, so many applications are possible,” Gogotsi said. “You can think about making car seats with it so the car knows the size and weight of the passenger to optimize safety settings; textile pressure sensors could be in sports apparel to monitor performance, or woven into carpets to help connected houses discern how many people are home – your imagination is the limit.”

Researchers have produced a video about their work,

Here’s a link to and a citation for the paper,

Knittable and Washable Multifunctional MXene‐Coated Cellulose Yarns by Simge Uzun, Shayan Seyedin, Amy L. Stoltzfus, Ariana S. Levitt, Mohamed Alhabeb, Mark Anayee, Christina J. Strobel, Joselito M. Razal, Genevieve Dion, Yury Gogotsi. Advanced Functional Materials. DOI: https://doi.org/10.1002/adfm.201905015 First published: 05 September 2019

This paper is behind a paywall.

Toronto, Sidewalk Labs, smart cities, and timber

The ‘smart city’ initiatives continue to fascinate. During the summer, Toronto’s efforts were described in a June 24, 2019 article by Katharine Schwab for Fast Company (Note: Links have been removed),

Today, Google sister company Sidewalk Labs released a draft of its master plan to transform 12 acres on the Toronto waterfront into a smart city. The document details the neighborhood’s buildings, street design, transportation, and digital infrastructure—as well as how the company plans to construct it.

When a leaked copy of the plan popped up online earlier this year, we learned that Sidewalk Labs plans to build the entire development, called Quayside, out of mass timber. But today’s release of the official plan reveals the key to doing so: Sidewalk proposes investing $80 million to build a timber factory and supply chain that would support its fully timber neighborhood. The company says the factory, which would be focused on manufacturing prefabricated building pieces that could then be assembled into fully modular buildings on site, could reduce building time by 35% compared to more traditional building methods.

“We would fund the creation of [a factory] somewhere in the greater Toronto area that we think could play a role in catalyzing a new industry around mass timber,” says Sidewalk Labs CEO and chairman Dan Doctoroff.

However, the funding of the factory is dependent on Sidewalk Labs being able to expand its development plan to the entire riverfront district. … [emphasis mine]

Here’s where I think it gets very interesting,

Sidewalk proposes sourcing spruce and fir trees from the forests in Ontario, Quebec, and British Columbia. While Canada has 40% of the world’s sustainable forests, Sidewalk claims, the country has few factories that can turn these trees into the building material. That’s why the company proposes starting a factory to process two kinds of mass timber: Cross-laminated timber (CLT) and glulam beams. The latter is meant specifically to bear the weight of the 30-story buildings Sidewalk hopes to build. While Sidewalk says that 84% of the larger district would be handed over for development by local companies, the plan requires that these companies uphold the same sustainability standards when it comes to performance.

Sidewalk says companies wouldn’t be required to build with CLT and glulam, but since the company’s reason for building the mass timber factory is that there aren’t many existing manufacturers to meet the needs for a full-scale development, the company’s plan might ultimately push any third-party developers toward using its [Google] factory to source materials. … [emphasis mine]

If I understand this rightly, Google wants to expand its plan to Toronto’s entire waterfront to make building a factory to produce the type of wood products Google wants to use in its Quayside development financially feasible (profitable). And somehow, local developers will not be forced to build the same kinds of structures although Google will be managing the entire waterfront development. Hmmm.

Let’s take a look at one of Google’s other ‘city ventures’.

Louisville, Kentucky

First, Alphabet is the name of Google’s parent company and it was Alphabet that offered the city of Louisville an opportunity for cheap, abundant internet service known as Google Fiber. From a May 6, 2019 article by Alex Correa for The Edge (Note: Links have been removed),

In 2015, Alphabet chose several cities in Kentucky to host its Google Fiber project. Google Fiber is a service providing broadband internet and IPTV directly to a number of locations, and the initiative in Kentucky … . The tech giant dug up city streets to bury fibre optic cables of their own, touting a new technique that would only require the cables to be a few inches beneath the surface. However, after two years of delays and negotiations after the announcement, Google abandoned the project in Louisville, Kentucky.

Like an unwanted pest in a garden, signs of Google’s presence can be seen and felt in the city streets. Metro Councilman Brandon Coan criticized the state of the city’s infrastructure, pointing out that strands of errant, tar-like sealant, used to cover up the cables, are “everywhere.” Speaking outside of a Louisville coffee shop that ran Google Fiber lines before the departure, he said, “I’m confident that Google and the city are going to negotiate a deal… to restore the roads to as good a condition as they were when they got here. Frankly, I think they owe us more than that.”

Google’s disappearance did more than just damage roads [emphasis mine] in Louisville. Plans for promising projects were abandoned, including transformative economic development that could have provided the population with new jobs and vastly different career opportunities than what was available. Add to that the fact that media coverage of the aborted initiative cast Louisville as the site of a failed experiment, creating an impression of the city as an embarrassment. (Google has since announced plans to reimburse the city $3.84 million over 20 months to help repair the damage to the city’s streets and infrastructure.)

A February 22, 2019 article on CBC (Canadian Broadcasting Corporation) Radio news online offers images of the damaged roadways and a partial transcript of a Day 6 radio show hosted by Brent Bambury,

Shortly after it was installed, the sealant on the trenches Google Fiber cut into Louisville roads popped out. (WDRB Louisville) Courtesy: CBC Radio Day 6

Google’s Sidewalk Labs is facing increased pushback to its proposal to build a futuristic neighbourhood in Toronto, after leaked documents revealed the company’s plans are more ambitious than the public had realized.

One particular proposal — which would see Sidewalk Labs taking a cut of property taxes in exchange for building a light rail transit line along Toronto’s waterfront — is especially controversial.

The company has developed an impressive list of promises for its proposed neighbourhood, including mobile pre-built buildings and office towers that tailor themselves to occupants’ behaviour.

But Louisville, Kentucky-based business reporter Chris Otts says that when Google companies come to town, it doesn’t always end well.

What was the promise Google Fiber made to Louisville back in 2015?

Well, it was just to be included as one of their Fiber cities, which was a pretty serious deal for Louisville at the time. A big coup for the mayor, and his administration had been working for years to get Google to consider adding Louisville to that list.

So if the city was eager, what sorts of accommodations were made for Google to entice them to come to Louisville?

Basically, the city did everything it could from a streamlining red tape perspective to get Google here … in terms of, you know, awarding them a franchise, and allowing them to be in the rights of way with this innovative technique they had for burying their cables here.

And then also, they [the city] passed a policy, which, to be sure, they say is just good policy regardless of Google’s support for it. But it had to do with how new Internet companies like Google can access utility poles to install their networks.

And Louisville ended up spending hundreds of thousands of dollars to defend that new policy in court in lawsuits by AT&T and by the traditional cable company here.

When Google Fiber starts doing business, they’re offering cheaper high speed Internet access, and they start burying these cables in the ground.

When did things start to go sideways for this project?

I don’t know if I would say ‘almost immediately,’ but certainly the problems were evident fairly quickly.

So they started their work in 2017. If you picture it, [in] the streets you can see on either side there are these seams. They look like little strings … near the end of the streets on both sides. And there are cuts in the street where they buried the cable and they topped it off with this sealant.

And fairly early on — within months, I would say, of them doing that — you could see the sealant popping out. The conduit in there [was] visible or exposed. And so it was fairly evident that there were problems with it pretty quickly.

Was this the first time that they had used this system and the sealant that you’re describing?

It was the first time, according to them, that they had used such shallow trenches in the streets.

So these are as shallow as two inches below the pavement surface that they’d bury these cables. It’s the ultra-shallow version of this technique.

And what explanation did Google Fiber offer for their decision to leave Louisville?

That it was basically a business decision; that they were trying this construction method to see if it was sustainable and they just had too many problems with it.

And as they said directly in their … written statement about this, they decided that instead of doing things right and starting over, which they would have to do essentially to keep providing service in Louisville, that it was the better business decision for them to just pick up and leave.

Toronto’s Sidewalk Labs isn’t Google Fiber — but they’re both owned by Google’s parent company, Alphabet.

If Louisville could give Toronto a piece of advice about welcoming a Google infrastructure project to town, what do you think that advice would be?

The biggest lesson from this is that one day they can be next to you at the press conference saying what a great city you are and how happy they are to … provide new service in your market, and then the next day, with almost no notice, they can say, “You know what? This doesn’t make sense for us anymore. And by the way, see ya. Thanks for having us. Sorry it didn’t work out.”

Google’s promises to Toronto

Getting back to Katharine Schwab’s June 24, 2019 Fast Company article,

The factory is also key to another of Sidewalk’s promises: Jobs. According to Sidewalk, the factory itself would create 2,500 jobs [emphasis mine] along the entire supply chain over a 20-year period. But even if the Canadian government approves Sidewalk’s plan and commits to building out the entire waterfront district to take advantage of the mass timber factory’s economies of scale, there are other regulatory hurdles to overcome. Right now, the building code in Toronto doesn’t allow for timber buildings over six stories tall. All of Sidewalk’s proposed buildings are over six stories, and many of them go up to 30 stories. Doctoroff said he was optimistic that the company will be able to get regulations changed if the city decides to adopt the plan. There are several examples of timber buildings that are already under construction, with a planned skyscraper in Japan that will be 70 stories.

Sidewalk’s proposal is the result of 18 months of planning, which involved getting feedback from community members and prototyping elements like a building raincoat that the company hopes to include in the final development. It has come under fire from privacy advocates in particular, and the Canadian government is currently facing a lawsuit from a civil liberties group over its decision to allow a corporation to propose public privacy governance standards.

Now that the company has released the plan, it will be up to the Canadian government to decide whether to move forward. And the mass timber factory, in particular, will be dependent on the government adopting Sidewalk’s plan wholesale, far beyond the Quayside development—a reminder that Sidewalk is a corporation that’s here to make money, dangling investment dollars in front of the government to incentivize it to embrace Sidewalk as the developer for the entire area.

A few thoughts

Those folks in Louisville made a lot of accommodations for Google only to have the company abandon them. They will get some money in compensation, finally, but it doesn’t make up for the lost jobs and the national, if not international, loss of face.

I would think that should things go wrong, Google would do exactly the same thing to Toronto. As for the $80M promise, here’s exactly how it’s phrased in the June 24, 2019 Sidewalk Labs news release,

… Together with local partners, Sidewalk proposes to invest up to $80 million in a mass timber factory in Ontario to jumpstart this emerging industry.

So, Alphabet/Google/Sidewalk has proposed up to an $80M investment—with local partners. I wonder how much this factory is supposed to cost and what kinds of accommodations Alphabet/Google/Sidewalk will demand. Possibilities include policy changes, changes in municipal bylaws, and government money. In other words, Canadian taxpayers could end up footing part of the bill and/or local developers could be required to cover an outsized percentage of the costs for the factory as they jockey for the opportunity to develop part of Toronto’s waterfront.

Other than Louisville, what’s the company’s track record with regard to its partnerships with cities and municipalities? I haven’t found any success stories in my admittedly brief search. Unusually, the company doesn’t seem to be promoting any of its successful city partnerships.

Smart city

While my focus has been on the company’s failure with Louisville and the possible dangers inherent to Toronto in a partnership with this company, it shouldn’t be forgotten that all of this development is in the name of a ‘smart’ city and that means data-driven. My March 28, 2018 posting features some of the issues with the technology, 5G, that will be needed to make cities ‘smart’. There’s also my March 20, 2018 posting (scroll down about 30% of the way) which looks at ‘smart’ cities in Canada with a special emphasis on Vancouver.

You may want to check out David Skok’s February 15, 2019 Maclean’s article (Cracks in the Sidewalk) for a Torontonian’s perspective.

Should you wish to do some delving yourself, there’s the Sidewalk Labs website here and a June 24, 2019 article by Matt McFarland for CNN detailing some of the latest news about the backlash in Toronto concerning Sidewalk Labs.

A September 2019 update

Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) submitted a report to Google in August 2019 which was subsequently published as of September 10, 2019. To sum it up, the panel was not impressed with Google’s June 2019 draft master plan. From a September 11, 2019 news item on the Guardian (Note: Links have been removed),

A controversial smart city development in Canada has hit another roadblock after an oversight panel called key aspects of the proposal “irrelevant”, “unnecessary” and “frustratingly abstract” in a new report.

The project on Toronto’s waterfront, dubbed Quayside, is a partnership between the city and Google’s sister company Sidewalk Labs. It promises “raincoats” for buildings, autonomous vehicles and cutting-edge wood-frame towers, but has faced numerous criticisms in recent months.

A September 11, 2019 article by Ian Bick of Canadian Press published on the CBC (Canadian Broadcasting Corporation) website offers more detail,

Preliminary commentary from Waterfront Toronto’s digital strategy advisory panel (DSAP) released Tuesday said the plan from Google’s sister company Sidewalk is “frustratingly abstract” and that some of the innovations proposed were “irrelevant or unnecessary.”

“The document is somewhat unwieldy and repetitive, spreads discussions of topics across multiple volumes, and is overly focused on the ‘what’ rather than the ‘how,’ ” said the report on the panel’s comments.

Some on the 15-member panel, an arm’s-length body that gives expert advice to Waterfront Toronto, have also found the scope of the proposal to be unclear or “concerning.”

The report says that some members also felt the official Sidewalk plan did not appear to put the citizen at the centre of the design process for digital innovations, and raised issues with the way Sidewalk has proposed to manage data that is generated from the neighbourhood.

The panel’s early report is not official commentary from Waterfront Toronto, the multi-government body that is overseeing the Quayside development, but is meant to indicate areas that needs improvement.

The panel, chaired by University of Ottawa law professor Michael Geist, includes executives, professors, and other experts on technology, privacy, and innovation.

Sidewalk Labs spokeswoman Keerthana Rang said the company appreciates the feedback and already intends to release more details in October on the digital innovations it hopes to implement at Quayside.

I haven’t been able to find the response to DSAP’s September 2019 critique but I did find this Toronto Sidewalk Labs report, Responsible Data Use Assessment Summary :Overview of Collab dated October 16, 2019. Of course, there’s still another 10 days before October 2019 is past.

The wonder of movement in 3D

Shades of Eadweard Muybridge (English photographer who pioneered photographic motion studies)! A September 19, 2018 news item on ScienceDaily describes the latest efforts to ‘capture motion’,

Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponent’s movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out what angle to throw a ball at, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.

That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster, or make sense of the precision needed to perfect our athletic form.

Recently, though, researchers from MIT’s [Massachusetts Institute of Technology] Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion.

There isn’t a single reference to Muybridge, still, this September 18, 2018 Massachusetts Institute of Technology news release (also on EurekAlert but published September 19, 2018), which originated the news item, delves further into the research,

The new system uses an algorithm that can take 2-D videos and turn them into 3-D printed “motion sculptures” that show how a human body moves through space. In addition to being an intriguing aesthetic visualization of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.

“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”

Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.

Zhang wrote the paper alongside MIT professors William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as U.C. Berkeley postdoc and former CSAIL PhD Andrew Owens.

How it works

Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.

Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.

What’s more, these photographs also require laborious pre-shoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.

Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”

After stitching these skeletons together, the system generates a motion sculpture that can be 3-D printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.
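
Here’s a minimal Python sketch of that pipeline as the news release describes it. To be clear, this is my own illustration with synthetic stand-ins for the pose-estimation stages, not the researchers’ MoSculp code,

```python
import numpy as np

def detect_keypoints_2d(n_frames=60):
    # Stand-in for a 2-D pose estimator: a synthetic arc traced by
    # one keypoint (say, a wrist) across the frames of a video.
    t = np.linspace(0, np.pi, n_frames)
    return np.stack([np.cos(t), np.sin(t)], axis=1)   # (n_frames, 2)

def lift_to_3d(kp2d):
    # Stand-in for 3-D skeleton recovery: attach a depth estimate
    # to each frame's 2-D keypoint.
    depth = np.linspace(0.0, 1.0, len(kp2d))[:, None]
    return np.hstack([kp2d, depth])                   # (n_frames, 3)

def stitch_sculpture(kp3d, samples_per_gap=10):
    # Interpolate between consecutive poses so the swept trajectory
    # is dense and smooth enough to mesh and 3-D print.
    segments = []
    for a, b in zip(kp3d[:-1], kp3d[1:]):
        w = np.linspace(0, 1, samples_per_gap, endpoint=False)[:, None]
        segments.append(a + w * (b - a))
    return np.vstack(segments + [kp3d[-1:]])

path = stitch_sculpture(lift_to_3d(detect_keypoints_2d()))
print(path.shape)  # a dense 3-D polyline tracing the motion
```

A real system would run this for every joint and skin the resulting paths into a solid surface for printing, but the three stages (detect, lift, stitch) are the shape of the pipeline.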

In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.

“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”

The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.

Currently, the system only uses single-person scenarios, but the team soon hopes to expand to multiple people. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.

This work will be presented at the User Interface Software and Technology (UIST) symposium in Berlin, Germany in October 2018, and the team’s paper will be published as part of the proceedings.

As for anyone wondering about the Muybridge comment, here’s an image the MIT researchers have made available,

A new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space. Image courtesy of MIT CSAIL

Contrast that MIT image with some of the images in this video capturing parts of a theatre production, Studies in Motion: The Hauntings of Eadweard Muybridge,

Getting back to MIT, here’s their MoSculp video,

There are some startling similarities, eh? I suppose there are only so many ways one can capture movement, be it in the studies of Eadweard Muybridge, a theatre production about his work, or an MIT video showcasing the latest in motion capture technology.