Turns out entropy binds nanoparticles much as electrons bind atoms in chemical crystals
ANN ARBOR—Entropy, a physical property often explained as “disorder,” is revealed as a creator of order with a new bonding theory developed at the University of Michigan and published in the Proceedings of the National Academy of Sciences [PNAS].
Engineers dream of using nanoparticles to build designer materials, and the new theory can help guide efforts to make nanoparticles assemble into useful structures. The theory explains earlier results exploring the formation of crystal structures by space-restricted nanoparticles, enabling entropy to be quantified and harnessed in future efforts.
And curiously, the equations that govern nanoparticle interactions due to entropy mirror those that describe chemical bonding. Sharon Glotzer, the Anthony C. Lembke Department Chair of Chemical Engineering, and Thi Vo, a postdoctoral researcher in chemical engineering, answered some questions about their new theory.
What is entropic bonding?
Glotzer: Entropic bonding is a way of explaining how nanoparticles interact to form crystal structures. It’s analogous to the chemical bonds formed by atoms. But unlike atoms, there aren’t electron interactions holding these nanoparticles together. Instead, the attraction arises because of entropy.
Oftentimes, entropy is associated with disorder, but it’s really about options. When nanoparticles are crowded together and options are limited, it turns out that the most likely arrangement of nanoparticles can be a particular crystal structure. That structure gives the system the most options, and thus the highest entropy. Large entropic forces arise when the particles become close to one another.
By doing the most extensive studies of particle shapes and the crystals they form, my group found that as you change the shape, you change the directionality of those entropic forces that guide the formation of these crystal structures. That directionality simulates a bond, and since it’s driven by entropy, we call it entropic bonding.
Why is this important?
Glotzer: Entropy’s contribution to creating order is often overlooked when designing nanoparticles for self-assembly, but that’s a mistake. If entropy is helping your system organize itself, you may not need to engineer explicit attraction between particles—for example, using DNA or other sticky molecules—with as strong an interaction as you thought. With our new theory, we can calculate the strength of those entropic bonds.
While we’ve known that entropic interactions can be directional like bonds, our breakthrough is that we can describe those bonds with a theory that line-for-line matches the theory that you would write down for electron interactions in actual chemical bonds. That’s profound. I’m amazed that it’s even possible to do that. Mathematically speaking, it puts chemical bonds and entropic bonds on the same footing. This is both fundamentally important for our understanding of matter and practically important for making new materials.
Electrons are the key to those chemical equations though. How did you do this when no particles mediate the interactions between your nanoparticles?
Glotzer: Entropy is related to the free space in the system, but for years I didn’t know how to count that space. Thi’s big insight was that we could count that space using fictitious point particles. And that gave us the mathematical analogue of the electrons.
Vo: The pseudoparticles move around the system and fill in the spaces that are hard for another nanoparticle to fill—we call this the excluded volume around each nanoparticle. As the nanoparticles become more ordered, the excluded volume around them becomes smaller, and the concentration of pseudoparticles in those regions increases. The entropic bonds are where that concentration is highest.
In crowded conditions, the entropy lost by increasing the order is outweighed by the entropy gained by shrinking the excluded volume. As a result, the configuration with the highest entropy will be the one where pseudoparticles occupy the least space.
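The idea of counting free space with fictitious point particles can be illustrated with a toy Monte Carlo estimate. This is a hypothetical sketch, not the theory from the paper; the function name and parameter values are invented for illustration. It scatters random test points into a periodic box containing hard disks and counts the fraction that land outside the exclusion zone around each disk.

```python
import math
import random

def free_fraction(centers, r_excl, box, n_samples=200_000, seed=0):
    """Estimate the fraction of a periodic square box lying outside the
    exclusion zone (radius r_excl) around each disk center, by scattering
    random 'pseudoparticle' test points and counting where they land."""
    rng = random.Random(seed)
    free = 0
    for _ in range(n_samples):
        x, y = rng.uniform(0, box), rng.uniform(0, box)
        for cx, cy in centers:
            # minimum-image distance in a periodic box
            dx = (x - cx + box / 2) % box - box / 2
            dy = (y - cy + box / 2) % box - box / 2
            if dx * dx + dy * dy < r_excl * r_excl:
                break  # this test point is inside an excluded region
        else:
            free += 1
    return free / n_samples

# One disk of radius 1 in a 10 x 10 box: a point test particle is excluded
# from a zone of radius 1, so the exact free fraction is
# 1 - pi * 1**2 / 100, roughly 0.969.
est = free_fraction([(5.0, 5.0)], r_excl=1.0, box=10.0)
print(est)
```

For inserting another disk of radius r rather than a point, the exclusion radius becomes 2r; when the exclusion zones of ordered neighbors overlap, the total excluded volume shrinks and the free fraction grows, which is the entropic gain described above.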
The research is funded by the Simons Foundation, Office of Naval Research, and the Office of the Undersecretary of Defense for Research and Engineering. It relied on the computing resources of the National Science Foundation’s Extreme Science and Engineering Discovery Environment. Glotzer is also the John Werner Cahn Distinguished University Professor of Engineering, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and a professor of materials science and engineering, macromolecular science and engineering, and physics at U-M.
Here’s a link to and a citation for the paper,
A theory of entropic bonding by Thi Vo and Sharon C. Glotzer. PNAS January 25, 2022 119 (4) e2116414119 DOI: https://doi.org/10.1073/pnas.2116414119
Musicians are helping scientists analyze data, teach protein folding and make new discoveries through sound.
A team of researchers at the University of Illinois Urbana-Champaign is using sonification — the use of sound to convey information — to depict biochemical processes and better understand how they happen.
Music professor and composer Stephen Andrew Taylor; chemistry professor and biophysicist Martin Gruebele; and Illinois music and computer science alumna, composer and software designer Carla Scaletti formed the Biophysics Sonification Group, which has been meeting weekly on Zoom since the beginning of the pandemic. The group has experimented with using sonification in Gruebele’s research into the physical mechanisms of protein folding, and its work recently allowed Gruebele to make a new discovery about the ways a protein can fold.
Taylor’s musical compositions have long been influenced by science, and recent works represent scientific data and biological processes. Gruebele also is a musician who built his own pipe organ that he plays and uses to compose music. The idea of working together on sonification struck a chord with them, and they’ve been collaborating for several years. Through her company, Symbolic Sound Corp., Scaletti develops Kyma, a digital audio sound design system of software and hardware that is used by many musicians and researchers, including Taylor.
Scaletti created an animated visualization paired with sound that illustrated a simplified protein-folding process, and Gruebele and Taylor used it to introduce key concepts of the process to students and gauge whether it helped with their understanding. They found that sonification complemented and reinforced the visualizations and that, even for experts, it helped increase intuition for how proteins fold and misfold over time. The Biophysics Sonification Group – which also includes chemistry professor Taras Pogorelov, former chemistry graduate student (now alumna) Meredith Rickard, composer and pipe organist Franz Danksagmüller of the Lübeck Academy of Music in Germany, and Illinois electrical and computer engineering alumnus Kurt Hebel of Symbolic Sound – described using sonification in teaching in the Journal of Chemical Education.
Gruebele and his research team use supercomputers to run simulations of proteins folding into a specific structure, a process that relies on a complex pattern of many interactions. The simulation reveals the multiple pathways the proteins take as they fold, and also shows when they misfold or get stuck in the wrong shape – something thought to be related to a number of diseases such as Alzheimer’s and Parkinson’s.
The researchers use the simulation data to gain insight into the process. Nearly all data analysis is done visually, Gruebele said, but massive amounts of data generated by the computer simulations – representing hundreds of thousands of variables and millions of moments in time – can be very difficult to visualize.
“In digital audio, everything is a stream of numbers, so actually it’s quite natural to take a stream of numbers and listen to it as if it’s a digital recording,” Scaletti said. “You can hear things that you wouldn’t see if you looked at a list of numbers and you also wouldn’t see if you looked at an animation. There’s so much going on that there could be something that’s hidden, but you could bring it out with sound.”
For example, when the protein folds, it is surrounded by water molecules that are critical to the process. Gruebele said he wants to know when a water molecule touches and solvates a protein, but “there are 50,000 water molecules moving around, and only one or two are doing a critical thing. It’s impossible to see.” However, if a splashy sound occurred every time a water molecule touched a specific amino acid, that would be easy to hear.
Taylor and Scaletti use various audio-mapping techniques to link aspects of proteins to sound parameters such as pitch, timbre, loudness and pan position. For example, Taylor’s work uses different pitches and instruments to represent each unique amino acid, as well as their hydrophobic or hydrophilic qualities.
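As a minimal illustration of this kind of pitch mapping (a hypothetical sketch, not the group's actual Kyma patches; the function name, frequency range, and example data are invented), the snippet below takes a stream of numbers, maps each value linearly to a pitch between 220 Hz and 880 Hz, and writes the result as a short WAV file using only the Python standard library.

```python
import math
import struct
import wave

RATE = 44_100          # audio sample rate (Hz)
NOTE_SECONDS = 0.2     # duration of the tone for each data point

def sonify(data, path="sonified.wav", lo_hz=220.0, hi_hz=880.0):
    """Map each value in `data` to a pitch (low values -> low pitch) and
    write one sine-wave tone per value to a 16-bit mono WAV file."""
    dmin, dmax = min(data), max(data)
    span = (dmax - dmin) or 1.0
    frames = bytearray()
    samples_per_note = int(RATE * NOTE_SECONDS)
    for value in data:
        freq = lo_hz + (value - dmin) / span * (hi_hz - lo_hz)
        for i in range(samples_per_note):
            amp = int(20_000 * math.sin(2 * math.pi * freq * i / RATE))
            frames += struct.pack("<h", amp)  # 16-bit little-endian sample
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(RATE)
        wav.writeframes(bytes(frames))
    return path

# e.g. sonify a made-up distance trace from a simulation
sonify([1.0, 1.4, 0.9, 2.2, 3.1, 2.0, 0.5])
```

A real pipeline maps many variables at once, to pitch, timbre, loudness, and pan rather than pitch alone, but the principle is the same: the data stream becomes the audio stream.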
“I’ve been trying to draw on our instinctive responses to sound as much as possible,” Taylor said. “Beethoven said, ‘The deeper the stream, the deeper the tone.’ We expect an elephant to make a low sound because it’s big, and we expect a sparrow to make a high sound because it’s small. Certain kinds of mappings are built into us. As much as possible, we can take advantage of those and that helps to communicate more effectively.”
Musicians’ highly developed instincts help in creating the most effective ways to use sound to convey information, Taylor said.
“It’s a new way of showing how music and sound can help us understand the world. Musicians have an important role to play,” he said. “It’s helped me become a better musician, in thinking about sound in different ways and thinking how sound can link to the world in different ways, even the world of the very small.”
There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today’s early prototypes are still capable of remarkable feats.
Take, for example, the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy—like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment—one that has now finally come to fruition.
In research published Nov. 30 in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.
“The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right,” said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. “Instead of computation, we’re putting the computer to work as a new experimental platform to realize and detect new phases of matter.”
For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)
“Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. “While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics.”
What a time crystal is and isn’t
The basic ingredients to make this time crystal are as follows: the physics equivalent of a fruit fly and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena – including phase transitions and magnetism – which consists of a lattice where each site is occupied by a particle that can be in one of two states, represented as spin up or spin down.
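For readers who want to poke at the fruit fly themselves, here is a minimal 2D Ising model with Metropolis Monte Carlo dynamics. This is a generic textbook sketch, not the specific model used in the experiment; the lattice size, temperatures, and sweep count are arbitrary choices for illustration.

```python
import math
import random

def metropolis_sweeps(L=10, T=1.0, sweeps=200, seed=0):
    """Metropolis dynamics for a 2D Ising ferromagnet on an L x L periodic
    lattice. Each site holds a spin of +1 (up) or -1 (down); aligned
    neighbors lower the energy. Returns the magnetization per spin
    (between -1 and +1) after `sweeps` full lattice sweeps."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]   # start fully ordered (all up)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nbrs = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                    + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nbrs   # energy cost of flipping site (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return sum(sum(row) for row in spin) / (L * L)

# Well below the critical temperature (about 2.27), magnetic order survives:
print(metropolis_sweeps(T=1.0))
# Well above it, thermal fluctuations destroy the order:
print(metropolis_sweeps(T=5.0))
```

Running the same model at low and high temperature shows the phase transition the text mentions: the cold lattice stays almost fully magnetized while the hot one averages near zero.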
During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems – systems where the particles get “stuck” in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically “kicked” by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.
The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the “dance” of the spins should sync up with this rhythm, but in a time crystal it doesn’t. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system’s “time translation symmetry” is broken. Symmetries play a fundamental role in physics, and they are often broken – explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because unlike other symmetries, it can’t be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.
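A deliberately stripped-down caricature of that period doubling (my illustration, not the paper's model; it omits the interactions and many-body localization that make a real time crystal robust): drive a set of spins with a perfect flip on every kick, and the magnetization returns to its starting value only every second kick.

```python
def kicked_magnetization(n_spins=1000, n_kicks=10):
    """Toy 'kicked spin' dynamics: every kick flips every spin.
    The drive has period 1 (one kick), but the recorded magnetization
    sequence has period 2: it only repeats after two kicks."""
    spins = [1] * n_spins
    history = []
    for _ in range(n_kicks):
        history.append(sum(spins) / n_spins)  # magnetization per spin
        spins = [-s for s in spins]           # the periodic "kick"
    return history

print(kicked_magnetization(n_kicks=6))
# prints [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

The hard part, and the experiment’s achievement, is that in a genuine time crystal this doubled rhythm persists even when each kick is an imperfect flip; in this toy, any imperfection would let the pattern decay.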
The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it’s a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.
“It’s a completely robust phase of matter, where you’re not fine-tuning parameters or states but your system is still quantum,” said Sondhi, professor of physics at Oxford and co-author of the paper. “There’s no feed of energy, there’s no drain of energy, and it keeps going forever and it involves many strongly interacting particles.”
While this may sound suspiciously close to a “perpetual motion machine,” a closer look reveals that time crystals don’t break any laws of physics. Entropy – a measure of disorder in the system – remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.
Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for “many-body localization” (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.
For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google’s Sycamore quantum computing hardware to program 20 “spins” using the quantum version of a classical computer’s bits of information, known as qubits.
Revealing just how intense the interest in time crystals currently is, another time crystal was published in Science this month [November 2021]. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.
The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration – so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely – the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.
“We managed to use the versatility of the quantum computer to help us analyze its own limitations,” said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. “It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations.”
A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.
“A unique feature of our quantum processor is its ability to create highly complex quantum states,” said Xiao Mi, a researcher at Google and co-lead author of the paper. “These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space – an otherwise intractable task.”
Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. “I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics,” said Pedram Roushan, researcher at Google and senior author of the paper.
“We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics,” said Ippoliti. “With the unique capabilities of these systems, there’s hope that you might discover some new phenomenon that you hadn’t predicted.”
Here’s a link to and a citation for the paper,
Time-Crystalline Eigenstate Order on a Quantum Processor by Xiao Mi, Matteo Ippoliti, Chris Quintana, Ami Greene, Zijun Chen, Jonathan Gross, Frank Arute, Kunal Arya, Juan Atalaya, Ryan Babbush, Joseph C. Bardin, Joao Basso, Andreas Bengtsson, Alexander Bilmes, Alexandre Bourassa, Leon Brill, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Benjamin Chiaro, Roberto Collins, William Courtney, Dripto Debroy, Sean Demura, Alan R. Derk, Andrew Dunsworth, Daniel Eppens, Catherine Erickson, Edward Farhi, Austin G. Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Matthew P. Harrigan, Sean D. Harrington, Jeremy Hilton, Alan Ho, Sabrina Hong, Trent Huang, Ashley Huff, William J. Huggins, L. B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Tanuj Khattar, Seon Kim, Alexei Kitaev, Paul V. Klimov, Alexander N. Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Joonho Lee, Kenny Lee, Aditya Locharla, Erik Lucero, Orion Martin, Jarrod R. McClean, Trevor McCourt, Matt McEwen, Kevin C. Miao, Masoud Mohseni, Shirin Montazeri, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Michael Newman, Murphy Yuezhen Niu, Thomas E. O’Brien, Alex Opremcak, Eric Ostby, Balint Pato, Andre Petukhov, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vladimir Shvarts, Yuan Su, Doug Strain, Marco Szalay, Matthew D. Trevithick, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Adam Zalcman, Hartmut Neven, Sergio Boixo, Vadim Smelyanskiy, Anthony Megrant, Julian Kelly, Yu Chen, S. L. Sondhi, Roderich Moessner, Kostyantyn Kechedzhi, Vedika Khemani & Pedram Roushan. Nature (2021) DOI: https://doi.org/10.1038/s41586-021-04257-w Published 30 November 2021
The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).
At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.
(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)
The hype/the buzz … call it what you will
This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),
The term metaverse was coined by American writer Neal Stephenson in his 1992 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”
So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.
Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.
These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.
In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.
Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.
D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.
Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.
For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.
By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

Who is Nick Pringle and how accurate are his predictions?
I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing as the words are sometimes used as synonyms and sometimes as distinctions. We do it all the time in all sorts of conversations but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.
As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of location/geography, e.g., Afghanistan in contrast to the US.
To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).
In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?
Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.
Then what is the real metaverse?
There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:
“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”
Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:
“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.
If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”
But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.
An astute observation.
Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?
Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”
A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”
There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.
People keep saying NFTs are part of the metaverse. Why?
NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.
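In spirit, and heavily simplified (this is an invented sketch, not how any real blockchain or NFT standard works), the “permanent receipt” idea looks like a shared ledger that any platform can consult:

```python
class ToyNFTLedger:
    """A drastically simplified stand-in for an NFT ledger: a shared
    record of which user owns which token, which any 'metaverse
    platform' could consult to let a user redeem the same virtual
    good. Real NFTs live on a decentralized blockchain; this toy is
    just a dictionary."""

    def __init__(self):
        self.owners = {}  # token_id -> current owner

    def mint(self, token_id, owner):
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id, seller, buyer):
        if self.owners.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        self.owners[token_id] = buyer

    def owns(self, token_id, user):
        return self.owners.get(token_id) == user

# Buy a virtual shirt on Platform A...
ledger = ToyNFTLedger()
ledger.mint("shirt-001", "alice")
# ...and Platform B can check the same receipt to render the shirt there.
print(ledger.owns("shirt-001", "alice"))  # prints True
ledger.transfer("shirt-001", "alice", "bob")
print(ledger.owns("shirt-001", "bob"))    # prints True
```

The interoperability pitch is exactly that last step: because the record lives outside any one platform, Platforms B through Z can all honor a purchase made on Platform A.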
Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.
On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),
Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.
Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.
Facebook, integrity, and safety in the metaverse
On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,
The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.
We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.
We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices.
Introducing the XR [extended reality] Programs and Research Fund
There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly.
Rebranding Facebook’s integrity and safety issues away?
It seems Facebook’s credibility issues are such that the company is about to rebrand itself, according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),
Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th, but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entails. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.
If you have time, do read Heath’s article in its entirety.
“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.
“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.
Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement (not mentioned in the other two articles about the rebranding), Note: A link has been removed,
Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will create 10,000 new high-skilled jobs within the European Union (EU) over the next five years.
“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”
Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.
In an email to Daily Hive, David Troya-Alvarez of Facebook’s Corporate Communications Canada said, “We don’t comment on rumour or speculation,” in regard to The Verge‘s report.
I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.
***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***
Who (else) cares about integrity and safety in the metaverse?
In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of the profound changes the Metaverse will bring to digital interactions between people, between businesses, and between the two.
What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision of a future in which personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.
Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.
What are the potential legal issues?
The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.
Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.
Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.
The hungry Metaverse participant
How might actors in the Metaverse target persons participating in it? Let us assume one such participant, a woman, is hungry at the time. The Metaverse may observe her frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, determine that she is hungry, and serve her food adverts accordingly.
Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.
Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives.
This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.
Who is responsible for complying with applicable data protection law?
In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR).
In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:
Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?
Either way, many questions arise, including:
How should the different entities each display their own privacy notice to users? Or should this be done jointly?
How and when should users’ consent be collected?
Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse?
What data sharing arrangements need to be put in place, and how will these be implemented?
There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.
I’m starting to think we should be talking about RR (real reality) as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content that can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),
We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.
To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.
The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz, lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
Space walking in virtual reality
Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration between Montreal’s Felix and Paul Studios with NASA (US National Aeronautics and Space Administration) and Time studios,
Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.
Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.
The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.
The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.
From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smartphones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7, has attracted 40,000 visitors since it opened in July [2021?].
At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.
For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.
… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures who must learn to co-exist in a high-risk environment in order to achieve a common goal.
There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.
The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.
As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.
Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,
Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages.
Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.
The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.
Living in a computer simulation or base reality
The whole thing is getting a little confusing for me, so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),
… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
To sum it up (briefly)
I’m sticking with the base reality (or real reality) concept, in which various people and companies are attempting to create a multiplicity of metaverses, or the metaverse, effectively replacing the internet. This metaverse can include any and all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.
The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.
Wherever it is we are living, these are interesting times.
***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),
Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”
After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.
Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said:
“The reality is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to push forward.”
Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.
“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.
D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.
There’s nothing especially new in this latest paper on neuromorphic computing and memristors; however, it does a very good job of describing how these new computers might work. From a Nov. 30, 2020 news item on phys.org (Note: A link has been removed),
In a paper published in Nano, researchers study the role of memristors in neuromorphic computing. This novel fundamental electronic component supports the cloning of bio-neural systems with low cost and power.
Contemporary computing systems are unable to deal with the critical challenges of size reduction and computing speed in the big data era. The von Neumann bottleneck refers to the hindrance in data transfer through the bus connecting processor and memory. This creates an opportunity for alternative architectures based on a biological neuron model. Neuromorphic computing is one such alternative architecture, mimicking neuro-biological brain architectures.
The human brain comprises approximately 100 billion neurons and a vast number of synaptic connections. An efficient circuit device is therefore essential for the construction of a neural network that mimics the human brain. The development of a basic electrical component, the memristor, with several distinctive features such as scalability, in-memory processing and CMOS compatibility, has significantly facilitated the implementation of neural network hardware.
The memristor was introduced as a “memory resistor,” a device whose resistance state is altered by the history of the applied inputs. It is a capable electronic component that can memorise past current, effectively reducing device size and increasing processing speed in neural networks. Parallel calculations, as in the human nervous system, are made with the support of memristor devices in a novel computing architecture.
System instability and uncertainty have been described as current problems for most memory-based applications. Biological systems are the opposite: despite noise, nonlinearity, variability and volatility, they work well. It is still unclear, however, whether the effectiveness of biological systems actually depends on these obstacles. Neural modeling is sometimes avoided because it is not easy to model and study. Exploiting these properties is therefore a critical path to success in building artificial analogues of biological systems.
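The “memory resistor” behaviour described above can be illustrated with a toy simulation of the linear-drift memristor model (the formulation popularized by HP Labs in 2008). All parameter values below are assumptions chosen purely for illustration; they are not taken from the paper under discussion.

```python
# Toy linear-drift memristor model: the device's resistance (memristance)
# depends on an internal state x (the doped fraction of the device), and x
# drifts in proportion to the applied current -- so the device "remembers"
# the history of inputs even after the current stops.

R_ON, R_OFF = 100.0, 16000.0   # resistance bounds in ohms (assumed values)
D = 10e-9                      # device thickness in meters (assumed)
MU = 1e-14                     # dopant mobility in m^2/(V*s) (assumed)

def simulate(current, dt=1e-3, x0=0.1):
    """Return the memristance M at each time step for a current waveform."""
    x = x0
    memristance = []
    for i in current:
        M = R_ON * x + R_OFF * (1.0 - x)     # memristance set by state x
        memristance.append(M)
        dx = (MU * R_ON / D**2) * i * dt     # state drifts with current
        x = min(max(x + dx, 0.0), 1.0)       # state bounded by the device
    return memristance

# A positive current pulse lowers the resistance; once the current stops,
# the resistance holds its new value -- the "memory" in memristor.
ms = simulate([1e-6] * 100 + [0.0] * 100)
```

Running the sketch shows the resistance falling during the pulse and then staying flat while the current is zero, which is exactly the history-dependence that makes memristors attractive as artificial synapses.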
Here’s a link to and a citation for the paper (Note: I usually include the link as part of the paper’s title but couldn’t do it this time),
Memristors: Understanding, Utilization and Upgradation for Neuromorphic Computing [https://www.worldscientific.com/doi/abs/10.1142/S1793292020300054] by Mohanbabu Bharathi, Zhiwei Wang, Bingrui Guo, Babu Balraj, Qiuhong Li, Jianwei Shuai and Donghui Guo. Nano, Vol. 15, No. 11, 2030005 (2020) DOI: https://doi.org/10.1142/S1793292020300054 Published: 12 November 2020
Clustered regularly interspaced short palindromic repeats (CRISPR) does not and never has made much sense to me. I understand each word individually; it’s just that I’ve never thought they made much sense strung together that way. It’s taken years, but I’ve finally found out what the words (when strung together that way) mean and the origins of the phrase. Hint: it’s all about the phages.
Apparently, it all started with yogurt as Cynthia Graber and Nicola Twilley of Gastropod discuss on their podcast, “4 CRISPR experts on how gene editing is changing the future of food.” During the course of the podcast they explain the ‘phraseology’ issue, mention hornless cattle (I have an update to the information in the podcast later in this posting), and so much more.
CRISPR started with yogurt
You’ll find the podcast (almost 50 minutes long) here on an Oct. 11, 2019 posting on the Genetic Literacy Project. If you need a little more encouragement, here’s how the podcast is described,
To understand how CRISPR will transform our food, we begin our episode at Dupont’s yoghurt culture facility in Madison, Wisconsin. Senior scientist Dennis Romero tells us the story of CRISPR’s accidental discovery—and its undercover but ubiquitous presence in the dairy aisles today.
Jennifer Kuzma and Yiping Qi help us understand the technology’s potential, both good and bad, as well as how it might be regulated and labeled. And Joyce Van Eck, a plant geneticist at the Boyce Thompson Institute in Ithaca, New York, tells us the story of how she is using CRISPR, combined with her understanding of tomato genetics, to fast-track the domestication of one of the Americas’ most delicious orphan crops [groundcherries].
I featured Van Eck’s work with groundcherries last year in a November 28, 2018 posting and I don’t think she’s published any new work about the fruit since. As for Kuzma’s point that there should be more transparency where genetically modified food is concerned, Canadian consumers were surprised (shocked) in 2017 to find out that genetically modified Atlantic salmon had been introduced into the food market without any notification (my September 13, 2017 posting; scroll down to the Fish subheading; Note: The WordPress ‘updated version from Hell’ has affected some of the formatting on the page).
The earliest article on CRISPR and yogurt that I’ve found is a January 1, 2015 article by Kerry Grens for The Scientist,
Two years ago, a genome-editing tool referred to as CRISPR (clustered regularly interspaced short palindromic repeats) burst onto the scene and swept through laboratories faster than you can say “adaptive immunity.” Bacteria and archaea evolved CRISPR eons before clever researchers harnessed the system to make very precise changes to pretty much any sequence in just about any genome.
But life scientists weren’t the first to get hip to CRISPR’s potential. For nearly a decade, cheese and yogurt makers have been relying on CRISPR to produce starter cultures that are better able to fend off bacteriophage attacks. “It’s a very efficient way to get rid of viruses for bacteria,” says Martin Kullen, the global R&D technology leader of Health and Protection at DuPont Nutrition & Health. “CRISPR’s been an important part of our solution to avoid food waste.”
Phage infection of starter cultures is a widespread and significant problem in the dairy-product business, one that’s been around as long as people have been making cheese. Patrick Derkx, senior director of innovation at Denmark-based Chr. Hansen, one of the world’s largest culture suppliers, estimates that the quality of about two percent of cheese production worldwide suffers from phage attacks. Infection can also slow the acidification of milk starter cultures, thereby reducing creameries’ capacity by up to about 10 percent, Derkx estimates.

In the early 2000s, Philippe Horvath and Rodolphe Barrangou of Danisco (later acquired by DuPont) and their colleagues were first introduced to CRISPR while sequencing Streptococcus thermophilus, a workhorse of yogurt and cheese production. Initially, says Barrangou, they had no idea of the purpose of the CRISPR sequences. But as his group sequenced different strains of the bacteria, they began to realize that CRISPR might be related to phage infection and subsequent immune defense. “That was an eye-opening moment when we first thought of the link between CRISPR sequencing content and phage resistance,” says Barrangou, who joined the faculty of North Carolina State University in 2013.
One last bit before getting to the hornless cattle, scientist Yi Li has a November 15, 2018 posting on the GLP website about his work with gene editing and food crops,
I’m a plant geneticist and one of my top priorities is developing tools to engineer woody plants such as citrus trees that can resist the greening disease, Huanglongbing (HLB), which has devastated these trees around the world. First detected in Florida in 2005, the disease has decimated the state’s US$9 billion citrus crop, leading to a 75 percent decline in its orange production in 2017. Because citrus trees take five to 10 years before they produce fruits, our new technique – which has been nominated by many editors-in-chief as one of the groundbreaking approaches of 2017 that has the potential to change the world – may accelerate the development of non-GMO citrus trees that are HLB-resistant.
Genetically modified vs. gene edited
You may wonder why the plants we create with our new DNA editing technique are not considered GMO? It’s a good question.
Genetically modified refers to plants and animals that have been altered in a way that wouldn’t have arisen naturally through evolution. A very obvious example of this involves transferring a gene from one species to another to endow the organism with a new trait – like pest resistance or drought tolerance.
But in our work, we are not cutting and pasting genes from animals or bacteria into plants. We are using genome editing technologies to introduce new plant traits by directly rewriting the plants’ genetic code.
This is faster and more precise than conventional breeding, is less controversial than GMO techniques, and can shave years or even decades off the time it takes to develop new crop varieties for farmers.
There is also another incentive to opt for using gene editing to create designer crops. On March 28, 2018, U.S. Secretary of Agriculture Sonny Perdue announced that the USDA wouldn’t regulate new plant varieties developed with new technologies like genome editing that would yield plants indistinguishable from those developed through traditional breeding methods. By contrast, a plant that includes a gene or genes from another organism, such as bacteria, is considered a GMO. This is another reason why many researchers and companies prefer using CRISPR in agriculture whenever it is possible.
As the Gastropod podcasters note, there’s more than one side to the gene editing story, and not everyone is comfortable with the notion of cavalierly changing genetic codes when so much is still unknown.
For the past two years, researchers at the University of California, Davis, have been studying six offspring of a dairy bull, genome-edited to prevent it from growing horns. This technology has been proposed as an alternative to dehorning, a common management practice performed to protect other cattle and human handlers from injuries.
UC Davis scientists have just published their findings in the journal Nature Biotechnology. They report that none of the bull’s offspring developed horns, as expected, and blood work and physical exams of the calves found they were all healthy. The researchers also sequenced the genomes of the calves and their parents and analyzed these genomic sequences, looking for any unexpected changes.
All data were shared with the U.S. Food and Drug Administration. Analysis by FDA scientists revealed a fragment of bacterial DNA, used to deliver the hornless trait to the bull, had integrated alongside one of the two hornless genetic variants, or alleles, that were generated by genome-editing in the bull. UC Davis researchers further validated this finding.
“Our study found that two calves inherited the naturally-occurring hornless allele and four calves additionally inherited a fragment of bacterial DNA, known as a plasmid,” said corresponding author Alison Van Eenennaam, with the UC Davis Department of Animal Science.
Plasmid integration can be addressed by screening and selection, in this case, selecting the two offspring of the genome-edited hornless bull that inherited only the naturally occurring allele.
“This type of screening is routinely done in plant breeding where genome editing frequently involves a step that includes a plasmid integration,” said Van Eenennaam.
Van Eenennaam said the plasmid does not harm the animals, but the integration technically made the genome-edited bull a GMO, because it contained foreign DNA from another species, in this case a bacterial plasmid.
“We’ve demonstrated that healthy hornless calves with only the intended edit can be produced, and we provided data to help inform the process for evaluating genome-edited animals,” said Van Eenennaam. “Our data indicates the need to screen for plasmid integration when they’re used in the editing process.”
Since the original work in 2013, initiated by the Minnesota-based company Recombinetics, new methods have been developed that no longer use donor template plasmid or other extraneous DNA sequence to bring about introgression of the hornless allele.
Scientists did not observe any other unintended genomic alterations in the calves, and all animals remained healthy during the study period. Neither the bull, nor the calves, entered the food supply as per FDA guidance for genome-edited livestock.
WHY THE NEED FOR HORNLESS COWS?
Many dairy breeds naturally grow horns. But on dairy farms, the horns are typically removed, or the calves “disbudded,” at a young age. Animals that don’t have horns are less likely to harm other animals or dairy workers and have fewer aggressive behaviors. The dehorning process is unpleasant and has implications for animal welfare. Van Eenennaam said genome-editing offers a pain-free genetic alternative to removing horns by introducing a naturally occurring genetic variant, or allele, that is present in some breeds of beef cattle such as Angus.
Plasmons are oscillations in the plasma of free electrons that constantly swirl across the surface of conductive materials like metals. In some nanomaterials, a specific color of light can resonate with the plasma and cause the electrons inside it to lose their individual identities and move as one, in rhythmic waves. Rice’s Laboratory for Nanophotonics (LANP) has pioneered a growing list of plasmonic technologies for applications as diverse as color-changing glass, molecular sensing, cancer diagnosis and treatment, optoelectronics, solar energy collection and photocatalysis.
Reporting online in the Proceedings of the National Academy of Sciences, LANP scientists detailed the results of a two-year experimental and theoretical study of plasmons in three different polycyclic aromatic hydrocarbons (PAHs). Unlike the plasmons in relatively large metal nanoparticles, which can typically be described with classical electromagnetic theory like Maxwell’s [James Clerk Maxwell] equations, the paucity of atoms in the PAHs produces plasmons that can only be understood in terms of quantum mechanics, said study co-author and co-designer Naomi Halas, the director of LANP and the lead researcher on the project.
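To put the classical-versus-quantum divide in perspective, here is a back-of-envelope sketch (my illustration, not a calculation from the paper) of the bulk plasma energy predicted by the classical Drude free-electron model. This sort of estimate works for the relatively large metal nanoparticles mentioned above; the few-atom PAHs fall outside its validity, which is why they demand a quantum treatment. The gold electron density used below is a standard textbook value, assumed for illustration.

```python
import math

# Drude free-electron model: bulk plasma frequency omega_p = sqrt(n e^2 / (eps0 m_e)).
# Valid for bulk metals and large nanoparticles; few-atom molecules like the
# PAHs in the study require quantum mechanics instead.
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

def plasma_energy_ev(n_per_m3: float) -> float:
    """Bulk plasma energy (hbar * omega_p) in eV for a free-electron density."""
    omega_p = math.sqrt(n_per_m3 * E_CHARGE**2 / (EPS0 * M_E))
    return HBAR * omega_p / E_CHARGE

# Gold: ~5.9e28 conduction electrons per cubic metre (one per atom)
print(f"bulk gold plasma energy: {plasma_energy_ev(5.9e28):.1f} eV")
```

The classical answer, roughly 9 eV for gold, depends only on a smooth electron density; in a molecule with a handful of electrons, adding or removing a single one changes the picture entirely, as Halas notes below.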
“These PAHs are essentially scraps of graphene that contain five or six fused benzene rings surrounded by a perimeter of hydrogen atoms,” Halas said. “There are so few atoms in each that adding or removing even a single electron dramatically changes their electronic behavior.”
Halas’ team had experimentally verified the existence of molecular plasmons in several previous studies. But an investigation that combined side-by-side theoretical and experimental perspectives was needed, said study co-author Luca Bursi, a postdoctoral research associate and theoretical physicist in the research group of study co-designer and co-author Peter Nordlander.
“Molecular excitations are ubiquitous in nature and very well studied, especially for neutral PAHs, which have been considered as the standard of non-plasmonic excitations in the past,” Bursi said. “Given how much is already known about PAHs, they were an ideal choice for further investigation of the properties of plasmonic excitations in systems as small as actual molecules, which represent a frontier of plasmonics.”
Lead co-author Kyle Chapkin, a Ph.D. student in applied physics in the Halas research group, said, “Molecular plasmonics is a new area at the interface between plasmonics and molecular chemistry, which is rapidly evolving. When plasmonics reach the molecular scale, we lose any sharp distinction of what constitutes a plasmon and what doesn’t. We need to find a new rationale to explain this regime, which was one of the main motivations for this study.”
In their native state, the PAHs that were studied — anthanthrene, benzo[ghi]perylene and perylene — are charge-neutral and cannot be excited into a plasmonic state by the visible wavelengths of light used in Chapkin’s experiments. In their anionic form, the molecules contain an additional electron, which alters their “ground state” and makes them plasmonically active in the visible spectrum. By exciting both the native and anionic forms of the molecules and comparing precisely how they behaved as they relaxed back to their ground states, Chapkin and Bursi built a solid case that the anionic forms do support molecular plasmons in the visible spectrum.
The key, Chapkin said, was identifying a number of similarities between the behavior of known plasmonic particles and the anionic PAHs. By matching both the timescales and modes for relaxation behaviors, the LANP team built up a picture of the characteristic dynamics of low-energy plasmonic excitations in the anionic PAHs.
“In molecules, all excitations are molecular excitations, but select excited states show some characteristics that allow us to draw a parallel with the well-established plasmonic excitations in metal nanostructures,” Bursi said.
“This study offers a window on the sometimes surprising behavior of collective excitations in few-atom quantum systems,” Halas said. “What we’ve learned here will aid our lab and others in developing quantum-plasmonic approaches for ultrafast color-changing glass, molecular-scale optoelectronics and nonlinear plasmon-mediated optics.”
Here’s a link to and a citation for the paper,
Lifetime dynamics of plasmons in the few-atom limit by Kyle D. Chapkin, Luca Bursi, Grant J. Stec, Adam Lauchner, Nathaniel J. Hogan, Yao Cui, Peter Nordlander, and Naomi J. Halas. PNAS September 11, 2018 115 (37) 9134-9139; published ahead of print August 27, 2018 DOI: https://doi.org/10.1073/pnas.1805357115
An April 10, 2017 news item on Nanowerk announces work from the University of New Mexico (UNM), Note: A link has been removed,
A new scientific paper co-authored by a University of New Mexico physicist is shedding light on a strange force impacting particles at the smallest level of the material world.
The discovery, published in Physical Review Letters (“Lateral Casimir Force on a Rotating Particle near a Planar Surface”), was made by an international team of researchers led by UNM Assistant Professor Alejandro Manjavacas in the Department of Physics & Astronomy. Collaborators on the project include Francisco Rodríguez-Fortuño (King’s College London, U.K.), F. Javier García de Abajo (The Institute of Photonic Sciences, Spain) and Anatoly Zayats (King’s College London, U.K.).
The findings relate to an area of theoretical nanophotonics and quantum theory known as the Casimir Effect, a measurable force that exists between objects inside a vacuum caused by the fluctuations of electromagnetic waves. When studied using classical physics, the vacuum would not produce any force on the objects. However, when looked at using quantum field theory, the vacuum is filled with photons, creating a small but potentially significant force on the objects.
“These studies are important because we are developing nanotechnologies where we’re getting into distances and sizes that are so small that these types of forces can dominate everything else,” said Manjavacas. “We know these Casimir forces exist, so, what we’re trying to do is figure out the overall impact they have on very small particles.”
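To get a rough feel for why these forces dominate at small separations, here is a sketch using the textbook Casimir pressure between two ideal parallel plates, P = π²ħc/(240 d⁴). This is the classic idealized formula, not the lateral-force expression derived in the paper, and is shown only to illustrate how steeply the effect grows as distances shrink.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d_m: float) -> float:
    """Ideal parallel-plate Casimir pressure in Pa at plate separation d (m)."""
    return math.pi**2 * HBAR * C / (240 * d_m**4)

# The 1/d^4 scaling means a 10x smaller gap gives a 10,000x larger pressure.
for d_nm in (1000, 100, 10):
    print(f"d = {d_nm:4d} nm: P = {casimir_pressure(d_nm * 1e-9):.3g} Pa")
```

At micrometre gaps the pressure is a negligible fraction of a pascal, but by 10 nm it reaches the order of an atmosphere, which is why Casimir forces matter so much for nanoscale devices.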
Manjavacas’ research expands on the Casimir effect by developing an analytical expression for the lateral Casimir force experienced by nanoparticles rotating near a flat surface.
Imagine a tiny sphere (nanoparticle) rotating over a surface. While the sphere slows down due to photons colliding with it, that rotation also causes the sphere to move in a lateral direction. In our physical world, friction between the sphere and the surface would be needed to achieve lateral movement. However, the nano-world does not follow the same set of rules, eliminating the need for contact between the sphere and the surface for movement to occur.
“The nanoparticle experiences a lateral force as if it were in contact with the surface, even though it is actually separated from it,” said Manjavacas. “It’s a strange reaction but one that may prove to have significant impact for engineers.”
While the discovery may seem somewhat obscure, it is also extremely useful for researchers working in the ever-evolving nanotechnology industry. As part of their work, Manjavacas says they’ve also learned that the direction of the force can be controlled by changing the distance between the particle and the surface, an understanding that may help nanotech engineers develop better nanoscale objects for healthcare, computing or a variety of other areas.
For Manjavacas, the project and this latest publication are just another step forward in his research into these Casimir forces, which he has been studying throughout his scientific career. After receiving his Ph.D. from Complutense University of Madrid (UCM) in 2013, Manjavacas worked as a postdoctoral research fellow at Rice University before coming to UNM in 2015.
Currently, Manjavacas heads UNM’s Theoretical Nanophotonics research group, collaborating with scientists around the world and locally in New Mexico. In fact, Manjavacas credits Los Alamos National Laboratory Researcher Diego Dalvit, a leading expert on Casimir forces, for helping much of his work progress.
“If I had to name the person who knows the most about Casimir forces, I’d say it was him,” said Manjavacas. “He published a book that’s considered one of the big references on the topic. So, having him nearby and being able to collaborate with other UNM faculty is a big advantage for our research.”
The University of British Columbia’s (UBC) Peter Wall Institute for Advanced Studies (PWIAS) is hosting, along with local biotech firm Aspect Biosystems, a May 3–5, 2017 international research roundtable known as ‘Printing the Future of Therapeutics in 3D’.
A May 1, 2017 UBC news release (received via email) offers some insight into the field of bioprinting from one of the roundtable organizers,
This week, global experts will gather at the University of British Columbia to discuss the latest trends in 3D bioprinting—a technology used to create living tissues and organs.
In this Q&A, UBC chemical and biological engineering professor Vikramaditya Yadav, who is also with the Regenerative Medicine Cluster Initiative in B.C., explains how bioprinting could potentially transform healthcare and drug development, and highlights Canadian innovations in this field.
WHY IS 3D BIOPRINTING SIGNIFICANT?
Bioprinted tissues or organs could allow scientists to predict beforehand how a drug will interact within the body. For every life-saving therapeutic drug that makes its way into our medicine cabinets, Health Canada blocks the entry of nine drugs because they are proven unsafe or ineffective. Eliminating poor-quality drug candidates to reduce development costs—and therefore the cost to consumers—has never been more urgent.
In Canada alone, nearly 4,500 individuals are waiting to be matched with organ donors. If and when bioprinters evolve to the point where they can manufacture implantable organs, the concept of an organ transplant waiting list would cease to exist. And bioprinted tissues and organs from a patient’s own healthy cells could potentially reduce the risk of transplant rejection and related challenges.
HOW IS THIS TECHNOLOGY CURRENTLY BEING USED?
Skin, cartilage and bone, and blood vessels are some of the tissue types that have been successfully constructed using bioprinting. Two of the most active players are the Wake Forest Institute for Regenerative Medicine in North Carolina, which reports that its bioprinters can make enough replacement skin to cover a burn with 10 times less healthy tissue than is usually needed, and California-based Organovo, which makes its kidney and liver tissue commercially available to pharmaceutical companies for drug testing.
Beyond medicine, bioprinting has already been commercialized to print meat and artificial leather. It’s been estimated that the global bioprinting market will hit $2 billion by 2021.
HOW IS CANADA INVOLVED IN THIS FIELD?
Canada is home to some of the most innovative research clusters and start-up companies in the field. The UBC spin-off Aspect Biosystems has pioneered a bioprinting paradigm that rapidly prints on-demand tissues. It has successfully generated tissues found in human lungs.
Many initiatives at Canadian universities are laying strong foundations for the translation of bioprinting and tissue engineering into mainstream medical technologies. These include the Regenerative Medicine Cluster Initiative in B.C., which is headed by UBC, and the University of Toronto’s Institute of Biomaterials and Biomedical Engineering.
WHAT ETHICAL ISSUES DOES BIOPRINTING CREATE?
There are concerns about the quality of the printed tissues. It’s important to note that the U.S. Food and Drug Administration and Health Canada are dedicating entire divisions to regulation of biomanufactured products and biomedical devices, and the FDA also has a special division that focuses on regulation of additive manufacturing – another name for 3D printing.
These regulatory bodies have an impressive track record that should assuage concerns about the marketing of substandard tissue. But cost and pricing are arguably much more complex issues.
Some ethicists have also raised questions about whether society is not too far away from creating Replicants, à la _Blade Runner_. The idea is fascinating, scary and ethically grey. In theory, if one could replace the extracellular matrix of bones and muscles with a stronger substitute and use cells that are viable for longer, it is not too far-fetched to create bones or muscles that are stronger and more durable than their natural counterparts.
WILL DOCTORS BE PRINTING REPLACEMENT BODY PARTS IN 20 YEARS’ TIME?
This is still some way off. Optimistically, patients could see the technology in certain clinical environments within the next decade. However, some technical challenges must be addressed in order for this to occur, beginning with faithful replication of the correct 3D architecture and vascularity of tissues and organs. The bioprinters themselves need to be improved in order to increase cell viability after printing.
These developments are happening as we speak. Regulation, though, will be the biggest challenge for the field in the coming years.
Imagine a world where drugs are developed without the use of animals, where doctors know how a patient will react to a drug before prescribing it and where patients can have a replacement organ 3D-printed using their own cells, without dealing with long donor waiting lists or organ rejection. 3D bioprinting could enable this world. Join us for lively discussion and dessert as experts in the field discuss the exciting potential of 3D bioprinting and the ethical issues raised when you can print human tissues on demand. This is also a rare opportunity to see a bioprinter live in action!
Friday, May 5, 2017
Peter Wall Institute for Advanced Studies 2:00 – 7:00pm
A Scientific Discussion on the Promise of 3D Bioprinting
The medical industry is struggling to keep our ageing population healthy. Developing effective and safe drugs is too expensive and time-consuming to continue unchanged. We cannot meet the current demand for transplant organs, and people are dying on the donor waiting list every day. We invite you to join an open session where four of the most influential academic and industry professionals in the field discuss how 3D bioprinting is being used to shape the future of health and what ethical challenges may be involved if you are able to print your own organs.
The University of British Columbia and the award-winning bioprinting company Aspect Biosystems are proud to be co-organizing the first “Printing the Future of Therapeutics in 3D” International Research Roundtable. This event will convene global leaders in tissue engineering research and pharmaceutical industry experts to discuss the rapidly emerging and potentially game-changing technology of 3D-printing living human tissues (bioprinting). The goals are to:
Highlight the state-of-the-art in 3D bioprinting research
Ideate on disruptive innovations that will transform bioprinting from a novel research tool to a broadly adopted systematic practice
Formulate an actionable strategy for industry engagement, clinical translation and societal impact
Present in a public forum, key messages to educate and stimulate discussion on the promises of bioprinting technology
The Roundtable will bring together a unique collection of industry experts and academic leaders to define a guiding vision to efficiently deploy bioprinting technology for the discovery and development of new therapeutics. As the novel technology of 3D bioprinting is more broadly adopted, we envision this Roundtable will become a key annual meeting to help guide the development of the technology both in Canada and globally.
We thank you for your involvement in this ground-breaking event and look forward to you all joining us in Vancouver for this unique research roundtable.
The Organizing Committee:
Christian Naus, Professor, Cellular & Physiological Sciences, UBC
Vikram Yadav, Assistant Professor, Chemical & Biological Engineering, UBC
Tamer Mohamed, CEO, Aspect Biosystems
Sam Wadsworth, CSO, Aspect Biosystems
Natalie Korenic, Business Coordinator, Aspect Biosystems
I’m glad to see this event is taking place—and with public events too! (Wish I’d seen the Café Scientifique announcement earlier when I first checked for tickets yesterday. I was hoping there’d been some cancellations today.) Finally, for the interested, you can find Aspect Biosystems here.