Category Archives: science philosophy

The physics of biology: “Nano Comes to Life” by Sonia Contera

Louis Minion provides an overview of “Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology” by Sonia Contera in a December 5, 2022 article for Physics World and notes this in his final paragraph,

Nano Comes to Life is aimed at both the general reader as well as scientists [emphasis mine], emphasizing and encouraging the democratization of science and its relationship to human culture. Ending on an inspiring note, Contera encourages us to throw off our fear of technology and use science to make a fairer and more prosperous future.

Minion notes elsewhere in his article (Note: Links have been removed),

Part showcase, part manifesto, Sonia Contera’s Nano Comes to Life makes the ambitious attempt to convey the wonder of recent advances in biology and nanoscience while at the same time also arguing for a new approach to biological and medical research.

Contera – a biological physicist at the University of Oxford – covers huge ground, describing with clarity a range of pioneering experiments, including building nanoscale robots and engines from self-assembled DNA strands, and the incremental but fascinating work towards artificially grown organs.

But throughout this interesting survey of nanoscience in biology, Contera weaves a complex argument for the future of biology and medicine. For me, it is here the book truly excels. In arguing for the importance of physics and engineering in biology, the author critiques the way in which the biomedical industry has typically carried out research, instead arguing that we need an approach to biology that respects its properties at all scales, not just the molecular.

This book was published in hardcover in 2019 and in paperback in 2021 (according to Sonia Contera’s University of Oxford Department of Physics profile page), so I’m not sure why there’s an article about it in December 2022, but I’m glad to learn of the book’s existence.

Princeton University Press, which published Contera’s book, features a November 1, 2019 interview (from the Sonia Contera on Nano Comes to Life webpage),

What is the significance of the title of the book? What is the relationship between biology and nanotechnology?

SC: Nanotechnology—the capacity to visualize, manipulate, and interact with matter at the nanometer scale—has been engaged with and inspired by biology from its inception in the 1980s. This is primarily because the molecular players in biology, and the main drug and treatment targets in medicine—proteins and DNA—are nanosize. Since the early days of the field, a main mission of nanotechnologists has been to create tools that allow us to interact with key biological molecules one at a time, directly in their natural medium. They strive to understand and even mimic in their artificial nanostructures the mechanisms that underpin the function of biological nanomachines (proteins). In the last thirty years nanomicroscopies (primarily, the atomic force microscope) have unveiled the complex dynamic nature of proteins and the vast numbers of tasks that they perform. Far from being the static shapes featured in traditional biochemistry books, proteins rotate to work as nanomotors; they  literally perform walks to transport cargo around the cell. This enables an understanding of molecular biology that departs quite radically from traditional biochemical methods developed in the last fifty years. Since the main tools of nanotechnology were born in physics labs, the scientists who use them to study biomolecules interrogate those molecules within the framework of physics. Everyone should have the experience of viewing atomic force microscopy movies of proteins in action. It really changes the way we think about ourselves, as I try to convey in my book.

And how does physics change the study of biology at the nanoscale?

SC: In its widest sense the physics of life seeks to understand how the rules that govern the whole universe led to the emergence of life on Earth and underlie biological behaviour. Central to this study are the molecules (proteins, DNA, etc.) that underpin biological processes. Nanotechnology enables the investigation of the most basic mechanisms of their functions, their engineering principles, and ultimately mathematical models that describe them. Life on Earth probably evolved from nanosize molecules that became complex enough to enable replication, and evolution on Earth over billions of years has created the incredibly sophisticated nanomachines whose complex interactions constitute the fabric of the actions, perceptions, and senses of all living creatures. Combining the tools of nanotech with physics to study the mechanisms of biology is also inspiring the development of new materials, electronic devices, and applications in engineering and medicine.

What consequences will this have for the future of biology?

SC: The incorporation of biology (including intelligence) into the realm of physics facilitates a profound and potentially groundbreaking cultural shift, because it places the study of life within the widest possible context: the study of the rules that govern the cosmos. Nano Comes to Life seeks to reveal this new context for studying life and the potential for human advancement that it enables. The most powerful message of this book is that in the twenty-first century life can no longer be considered just the biochemical product of an algorithm written in genes (one that can potentially be modified at someone’s convenience); it must be understood as a complex and magnificent (and meaningful) realization of the laws that created the universe itself. The biochemical/genetic paradigm that dominated most of the twentieth century has been useful for understanding many biological processes, but it is insufficient to explain life in all its complexity, and to unblock existing medical bottlenecks. More broadly, as physics, engineering, computer science, and materials science merge with biology, they are actually helping to reconnect science and technology with the deep questions that humans have asked themselves from the beginning of civilization: What is life? What does it mean to be human when we can manipulate and even exploit our own biology? We have reached a point in history where these questions naturally arise from the practice of science, and this necessarily changes the sciences’ relationship with society.

We are entering a historic period of scientific convergence, feeling an urge to turn our heads to the past even as we walk toward the future, seeking to find, in the origin of the ideas that brought us here, the inspiration that will allow us to move forward. Nano Comes to Life focuses on the science but attempts to call attention to the potential for a new intellectual framework to emerge at the convergence of the sciences, one that scientists, engineers, artists, and thinkers should tap to create narratives and visions of the future that midwife our coming of age as a technological species. This might be the most important role of the physics of life that emerges from our labs: to contribute to the collective construction of a path to the preservation of human life on Earth.

You can find out more about Contera’s work and writing on her University of Oxford Department of Physics profile page, which she seems to have written herself. I found this section particularly striking,

I am also interested in the relation of physics with power, imperialism/nationalism, politics and social identities in the XIX, XX and XXI centuries, and I am starting to write about it, as in this piece for Nature Reviews Materials, “Communication is central to the mission of science”, which explores science comms in the context of the pandemic and global warming. A recent talk at Fundacion Telefonica, in which I explored the relation of national, “East-West”, and gender identity and physics, from colonialism to the Manhattan Project and the tech companies of the Silicon Valley of today, can be watched in Spanish and English (from min 17). Here I explore the future of Spanish science and world politics at Fundacion Rafael del Pino (Spanish).

The woman has some big ideas! Good, we need them.

BTW, I’ve posted a few items that might be of interest with regard to some of her ideas.

  1. “Perimeter Institute (PI) presents: The Jazz of Physics with Stephon Alexander,” this April 5, 2023 posting features physicist Stephon Alexander’s upcoming April 14, 2023 presentation (you can get on the waiting list or find a link to the livestream) and mentions his 2021 book “Fear of a Black Universe: An Outsider’s Guide to the Future of Physics.”
  2. There’s also “Scientists gain from communication with public” posted on April 6, 2023.

A robot with body image and self-awareness

This research is a rather interesting direction for robotics to take (from a July 13, 2022 news item on ScienceDaily),

As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.

We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published by Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.

Courtesy Columbia University School of Engineering and Applied Science

A July 13, 2022 Columbia University news release by Holly Evarts (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

Robot watches itself like an infant exploring itself in a hall of mirrors

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. 

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.

Self-modeling robots will lead to more self-reliant autonomous systems

The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.

“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

Self-awareness in robots

The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.  “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.” 

The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”  

Here’s a link to and a citation for the paper,

Fully body visual self-modeling of robot morphologies by Boyuan Chen, Robert Kwiatkowski, Carl Vondrick and Hod Lipson. Science Robotics 13 Jul 2022 Vol 7, Issue 68 DOI: 10.1126/scirobotics.abn1944

This paper is behind a paywall.

If you follow the link to the July 13, 2022 Columbia University news release, you’ll find an approximately 25-minute video of Hod Lipson showing you how they did it. As Lipson notes, discussion of self-awareness and sentience is not found in robotics programmes. Plus, there are more details and links if you follow the EurekAlert link.
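For readers who want a more concrete sense of what “learning a self-model” can mean, here’s a minimal sketch of the general idea (my own illustration, not the Columbia team’s code; every name, size, and the stand-in training data below is hypothetical): train a small neural network that, given a joint configuration and a 3D query point, predicts whether that point is occupied by the robot’s body.

```python
# Minimal sketch of a learned self-model: f(joint_angles, query_point) -> occupancy.
# Not the Columbia code; sizes, names, and the synthetic data are made up.
import torch
import torch.nn as nn

N_JOINTS = 4        # hypothetical robot arm with 4 joints
N_SAMPLES = 1024    # stand-in for labels a real system would derive from cameras

class SelfModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_JOINTS + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit: is this 3D point inside the robot's body?
        )

    def forward(self, joints, points):
        return self.net(torch.cat([joints, points], dim=-1)).squeeze(-1)

# Synthetic stand-in data: random joint configurations, random query points, and
# placeholder occupancy labels (here just a sphere, instead of camera-derived labels).
joints = torch.rand(N_SAMPLES, N_JOINTS) * 3.14
points = torch.rand(N_SAMPLES, 3) * 2 - 1
labels = (points.norm(dim=-1) < 0.5).float()

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(joints, points), labels)
    loss.backward()
    optimizer.step()

# Once trained, the robot can "imagine" itself: for a planned joint configuration,
# sweep query points through space and see which regions it would occupy.
```

As I understand the news release, the real system gets its occupancy labels from the five camera views and learns how the occupied volume changes with the motor commands; the sketch just swaps in a made-up shape so it runs on its own.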

Kempner Institute for the Study of Natural and Artificial Intelligence launches at Harvard University, and the University of Manchester pushes the boundaries of smart robotics and AI

Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)

A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .

It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.

Kempner Institute for the Study of Natural and Artificial Intelligence

A September 23, 2022 Chan Zuckerberg Initiative (CZI) news release (also on EurekAlert) announces a symposium to launch a new institute close to Mark Zuckerberg’s heart,

On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.

The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.

CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”

CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”

CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”

CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.

President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”

Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”

Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.

Principled theories, eh. I don’t see a single mention of ethicists or anyone in the social sciences, the humanities, or the arts. How are scientists and engineers who have no training in, education in, or even an introduction to ethics, social impacts, or psychology going to manage this?

Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes, but it took far too long to recognize the damage, let alone attempt to address it.

If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,

The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.

“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.

Q&A

Bernardo Sabatini and Sham Kakade [Institute co-directors]

GAZETTE: Tell me about the new institute. What is its main reason for being?

SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.

We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.

Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”

KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.

For the die-hards, there’s a September 23, 2022 article by Clea Simon in the Harvard Gazette, which updates the 2021 story.

Next, Manchester, England.

Manchester Centre for Robotics and AI

Robotots take a break at a lab at The University of Manchester – picture courtesy of Marketing Manchester [downloaded from https://www.manchester.ac.uk/discover/news/manchester-ai-summit-aims-to-attract-experts-in-advanced-engineering-and-robotics/]

A November 22, 2022 University of Manchester press release (also on EurekAlert) announces both a meeting and a new centre, Note: Links to the Centre have been retained; all others have been removed,

How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.

The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.

The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.

Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.

A  highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.

Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.

Other topics that the centre will focus on include applications of robotics in extreme environments.

For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.

Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world. 

Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.

“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well the humanities and even theology. [emphases mine]

“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”

Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.

The University of Manchester’s Centre for Robotics and AI will aim to: 

  • design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms; 
  • develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems; 
  • research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and 
  • research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society. 

In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at its inaugural conference) of ‘posthumanism’. The difference is between actual engagement at the event (the centre) and mere mention in a news release (the institute).

I wish the best for both institutions.

Philosophy and science in Tokyo, Japan from Dec. 1-2, 2022

I have not seen a more timely and à propos overview for a meeting/conference/congress than this one for Tokyo Forum 2022 (hosted by the University of Tokyo and South Korea’s Chey Institute for Advanced Studies),

Dialogue between Philosophy and Science: In a World Facing War, Pandemic, and Climate Change

In the face of war, a pandemic, and climate change, we cannot repeat the history of the last century, in which our ancestors headed down the road to division, global conflict, and environmental destruction.

How can we live more fully and how do we find a new common understanding about what our society should be? Tokyo Forum 2022 will tackle these questions through a series of in-depth dialogues between philosophy and science. The dialogues will weave together the latest findings and deep contemplation, and explore paths that could lead us to viable answers and solutions.

Philosophy of the 21st century must contribute to the construction of a new universality based on locality and diversity. It should be a universality that is open to co-existing with other non-human elements, such as ecosystems and nature, while severely criticizing the understanding of history that unreflectively identifies anthropocentrism with universality.

Science in the 21st century also needs to dispense with its overarching aura of supremacy and lack of self-criticism. There is a need for scientists to make efforts to demarcate their own limits. This also means reexamining what ethics means for science.

Tokyo Forum 2022 will offer multifaceted dialogues between philosophers, scientists, and scholars from various fields of study on the state and humanity in the 21st century, with a view to imagining and proposing a vision of the society we need.

Here are some details about the hybrid event from a November 4, 2022 University of Tokyo press release on EurekAlert,

The University of Tokyo and South Korea’s Chey Institute for Advanced Studies will host Tokyo Forum 2022 from Dec. 1-2, 2022. Under this year’s theme “Dialogue between Philosophy and Science,” the annual symposium will bring together philosophers, scientists and scholars in various fields from around the world for multifaceted dialogues on humanity and the state in the 21st century, while envisioning the society we need.

The event is free and open to the public, and will be held both on site at Yasuda Auditorium of the University of Tokyo and online via livestream. [emphases mine]

Keynote speakers lined up for the first day of the two-day symposium are former U.N. Secretary-General Ban Ki-moon, University of Chicago President Paul Alivisatos and Mariko Hasegawa, president of the Graduate University for Advanced Studies in Japan.

Other featured speakers on the event’s opening day include renowned modern thinker and author Professor Markus Gabriel of the University of Bonn, and physicist Hirosi Ooguri, director of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo and professor at the California Institute of Technology, who are scheduled to participate in the high-level discussion on the dialogue between philosophy and science.

Columbia University Professor Jeffrey Sachs will take part in a panel discussion, also on Day 1, on tackling global environmental issues with stewardship of the global commons — the stable and resilient Earth system that sustains our lives — as a global common value.

The four panel discussions slated for Day 2 will cover the role of world philosophy in addressing the problems of a globalized world; transformative change for a sustainable future by understanding the diverse values of nature and its contributions to people; the current and future impacts of autonomous robots on society; and finding collective solutions and universal values to pursue equitable and sustainable futures for humanity by looking at interconnections among various fields of inquiry.

Opening remarks will be delivered by University of Tokyo President Teruo Fujii and South Korea’s SK Group Chairman Chey Tae-won, on Day 1. Fujii and Chey Institute President Park In-kook will make closing remarks following the wrap-up session on the second and final day.

Tokyo Forum, with its overarching theme “Shaping the Future,” has been held annually since 2019 to stimulate discussions on finding the best ideas for shaping the world and humanity in the face of complex situations where the conventional wisdom can no longer provide answers.

For more information about the program and speakers of Tokyo Forum 2022, visit the event website and social media accounts:

Website: https://www.tokyoforum.tc.u-tokyo.ac.jp/en/index.html

Twitter: https://twitter.com/UTokyo_forum

Facebook: https://www.facebook.com/UTokyo.tokyo.forum/

To register, fill out the registration form on the Tokyo Forum 2022 website (registration is free but required [emphasis mine] to attend the event): https://www.tokyo-forum-form.com/apply/audiences/en

I’m not sure how they are handling languages. I’m guessing that people are speaking in the language they choose and translations (subtitles or dubbing) are available. For anyone who may have difficulty attending due to timezone issues, there are archives for previous Tokyo Forums. Presumably 2022 will be added at some point in the future.

Antikythera: a new Berggruen Institute program and a 2,000-year-old computer

Starting with the new Antikythera program at the Berggruen Institute before moving on to the Antikythera mechanism itself, one of my favourite scientific mysteries.

Antikythera program at the Berggruen Institute

An October 5, 2022 Berggruen Institute news release (also received via email) announces a program exploring the impact of planetary-scale computation and invites applications for the program’s first ‘studio’,

Antikythera is convening over 75 philosophers, technologists, designers, and scientists in seminars, design research studios, and global salons to create new models that shift computation toward more viable long-term futures: https://antikythera.xyz/

Applications are now open for researchers to join Antikythera’s fully-funded five month Studio in 2023, launching at the Berggruen Institute in Los Angeles: https://antikythera.xyz/apply/

Today [October 5, 2022] the Berggruen Institute announced that it will incubate Antikythera, an initiative focused on understanding and shaping the impact of computation on philosophy, global society, and planetary systems. Antikythera will engage a wide range of thinkers at the intersections of software, speculative thought, governance, and design to explore computation’s ultimate pitfalls and potentials. Research will range from the significance of machine intelligence and the geopolitics of AI to new economic models and the long-term project of composing a healthy planetary society.

“Against a background of rising geopolitical tensions and an accelerating climate crisis, technology has outpaced our theory. As such, we are less interested in applying philosophy to the topic of computation than generating new ideas from a direct encounter with it,” said Benjamin Bratton, Professor at the University of California, San Diego, and director of the new program. “The purpose of Antikythera is to reorient the question “what is computation for?” and to model what it may become. That is a project that is not only technological but also philosophical, political, and ecological.”

Antikythera will begin this exploration with its Studio program, applications for which are now open at antikythera.xyz/apply/. The Studio program will take place over five months in spring 2023 and bring together researchers from across the world to work in multidisciplinary teams. These teams will work on speculative design proposals, and join 75+ Affiliate Researchers for workshops, talks, and design sprints that inform thinking and propositions around Antikythera’s core research topics. Affiliate Researchers will include philosophers, technologists, designers, scientists, and other thinkers and practitioners. Applications for the program are due November 11, 2022.

Program project outcomes will include new combinations of theory, cinema, software, and policy. The five initial research themes animating this work are:

  • Synthetic Intelligence: the longer-term implications of machine intelligence, particularly as seen through the lens of artificial language
  • Hemispherical Stacks: the multipolar geopolitics of planetary computation
  • Recursive Simulations: the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR
  • Synthetic Catallaxy: the ongoing organization of computational economics, pricing, and planning
  • Planetary Sapience: the evolutionary emergence of natural/artificial intelligence, and its role in composing a viable planetary condition

The program is named after the Antikythera Mechanism, the world’s first known computer, used more than 2,000 years ago to predict the movements of constellations and eclipses decades in advance. As an origin point for computation, it combined calculation, orientation and cosmology, dimensions of practice whose synergies may be crucial in setting our planetary future on a better course than it is on today.

Bratton continues, “The evolution of planetary intelligence has also meant centuries of destruction; its future must be radically different. We must ask, what future would make this past worth it? Taking the question seriously demands a different sort of speculative and practical philosophy and a corresponding sort of computation.”

Bratton is a philosopher of technology and Professor at the University of California, San Diego, and author of many books including The Stack: On Software and Sovereignty (MIT Press). His most recent book is The Revenge of the Real: Politics for a Post-Pandemic World (Verso Books), exploring the implications for political philosophy of COVID-19. Associate directors are Ben Cerveny, technologist, speculative designer, and director of the Amsterdam-based Foundation for Public Code, and Stephanie Sherman, strategist, writer, and director of the MA Narrative Environments program at Central St. Martins, London. The Studio is directed by architect and creative director Nicolay Boyadjiev.

In addition to the Studio, program activities will include a series of invitation-only planning salons inviting philosophers, designers, technologists, strategists, and others to discuss how to best interpret and intervene in the future of planetary-scale computation, and the historic philosophical and geopolitical force that it represents. These salons began in London in October 2022 and will continue in locations across the world including in Berlin; Amsterdam; Los Angeles; San Francisco; New York; Mexico City; Seoul; and Venice.

The announcement of Antikythera at the Berggruen Institute follows the recent spinoff of the Transformations of the Human school, successfully incubated at the Institute from 2017-2021.

“Computational technology covering the planet represents one of the largest and most urgent philosophical opportunities of our time,” said Nicolas Berggruen, Chairman and Co-Founder of the Berggruen Institute. “It is with great pleasure that we invite Antikythera to join our work at the Institute. Together, we can develop new ways of thinking to support planetary flourishing in the years to come.”

Web: Antikythera.xyz
Social: Antikythera_xyz on Twitter, Instagram, and Linkedin.
Email: contact@antikythera.xyz

Applications opened on October 4, 2022; the deadline is November 11, 2022, followed by interviews. Participants will be confirmed by December 11, 2022. Here are a few more details from the application portal,

Who should apply to the Studio?

Antikythera hopes to bring together a diverse cohort of researchers from different backgrounds, disciplines, perspectives, and levels of experience. The Antikythera research themes engage with global challenges that necessitate harnessing a diversity of thought and expertise. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply. We accept applications from every discipline and background, from established to emerging researchers. Applicants do not need to meet any specific set of educational or professional experience.

Is the program free?

Yes, the program is free. You will be supported to cover the cost of housing, living expenses, and all program-related fieldwork travel along with a monthly stipend. Any other associated program costs will also be covered by the program.

Is the program in person and full-time?

Yes, the Studio program requires a full-time commitment (PhD students must also be on leave to participate). There is no part-time participation option. Though we understand this commitment may be challenging logistically for some individuals, we believe it is important for the Studio’s success. We will do our best to enable an environment that is comfortable and safe for participants from all backgrounds. Please do not hesitate to contact us if you may require any accommodations or have questions regarding the full-time, in-person nature of the program.

Do I need a Visa?

The Studio is a traveling program with time spent between the USA, Mexico, and South Korea. Applicable visa requirements set by these countries will apply and will vary depending on your nationality. We are aware that current visa appointment wait times may preclude some individuals who would require a brand new visa from being able to enter the US by January, and we are working to ensure access to the program for all (if not for January 2023, then for future Studio cohorts). We will therefore ask you to identify your country of origin and passport/visa status in the application form so we can work to enable your participation. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply.

For those who like to put a face to a name, you can find out more about the program and the people behind it on this page.

Antikythera, a 2,000-year-old computer & 100-year-old mystery

As noted in the Berggruen Institute news release, the Antikythera Mechanism is considered the world’s first computer (as far as we know). The image below is one of the best known illustrations of the device as visualized by researchers,

Exploded model of the Cosmos gearing of the Antikythera Mechanism. ©2020 Tony Freeth.

Briefly, the Antikythera mechanism was discovered in 1901 by sponge divers off the coast of Greece. Philip Chrysopoulos’s September 21, 2022 article for The Greek Reporter gives more details in an exuberant style (Note: Links have been removed),

… now—more than 120 years later—the astounding machine has been recreated once again, using 3-D imagery, by a brilliant group of researchers from University College London (UCL).

Not only is the recreation a thing of great beauty and amazing genius, but it has also made possible a new understanding of how it worked.

Since only eighty-two fragments of the original mechanism are extant—comprising only one-third of the entire calculator—this left researchers stymied as to its full capabilities.

Until this moment [in 2020 according to the copyright for the image], the front of the mechanism, containing most of the gears, has been a bit of a Holy Grail for marine archeologists and astronomers.

Professor Tony Freeth says in an article published in the periodical Scientific Reports: “Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the mechanism itself.”

“The sun, moon and planets are displayed in an impressive tour de force of ancient Greek brilliance,” Freeth said.

The largest surviving piece of the mechanism, referred to by researchers as “Fragment A,” has bearings, pillars, and a block. Another piece, known as “Fragment D,” has a mysterious disk along with an extraordinarily intricate 63-toothed gear and a plate.

The inscriptions on the back cover of the mechanism—only recently discovered by researchers—include a description of the cosmos, with the planets shown by beads of various colors that move on rings set around the inscriptions.

By employing the information gleaned from recent x-rays of the computer and their knowledge of ancient Greek mathematics, the UCL researchers have now shown that they can demonstrate how the mechanism determined the cycles of the planets Venus and Saturn.

Evaggelos Vallianatos, author of many books on the Antikythera Mechanism, writing at the Greek Reporter, said that it was much more than a mere mechanism. It was a sophisticated, mind-bogglingly complex astronomical computer, he said, “and Greeks made it.”

They employed advanced astronomy, mathematics, metallurgy, and engineering to do so, constructing the astronomical device 2,200 years ago. These scientific facts of the computer’s age and its flawless high-tech nature profoundly disturbed some of the scientists who studied it.

A few Western scientists of the twentieth century were shocked by the Antikythera Mechanism, Vallianatos said. They called it an astrolabe for several decades and refused to call it a computer. The astrolabe, a Greek invention, is a useful instrument for calculating the position of the Sun and other prominent stars. Yet, its technology is rudimentary compared to that of the Antikythera device.

In 2015, Kyriakos Efstathiou, a professor of mechanical engineering at the Aristotle University of Thessaloniki and head of the group which studied the Antikythera Mechanism said: “All of our research has shown that our ancestors used their deep knowledge of astronomy and technology to construct such mechanisms, and based only on this conclusion, the history of technology should be re-written because it sets its start many centuries back.”

The professor further explained that the Antikythera Mechanism is undoubtedly the first machine of antiquity which can be classified by the scientific term “computer,” because “it is a machine with an entry where we can import data, and this machine can bring and create results based on a scientific mathematical scale.”

In 2016, yet another astounding discovery was made when an inscription on the device was revealed—something like a label or a user’s manual for the device.

It included a discussion of the colors of eclipses, details used at the time in the making of astrological predictions, including the ability to see exact times of eclipses of the moon and the sun, as well as the correct movements of celestial bodies.

Inscribed numbers 76, 19 and 223 show maker “was a Pythagorean”

On one side of the device lies a handle that begins the movement of the whole system. By turning the handle and rotating the gauges in the front and rear of the mechanism, the user could set a date that would reveal the astronomical phenomena that would potentially occur around the Earth.

Physicist Yiannis Bitsakis has said that today the NASA [US National Aeronautics and Space Administration] website can detail all the eclipses of the past and those that are to occur in the future. However, “what we do with computers today, was done with the Antikythera Mechanism about 2000 years ago,” he said.

The stars and night heavens have been important to peoples around the world. (This September 18, 2020 posting highlights millennia old astronomy as practiced by indigenous peoples in North America, Australia, and elsewhere. There’s also this March 17, 2022 article “How did ancient civilizations make sense of the cosmos, and what did they get right?” by Susan Bell of University of Southern California on phys.org.)

I have covered the Antikythera in three previous postings (March 17, 2021, August 3, 2016, and October 2, 2012) with the 2021 posting being the most comprehensive and the one featuring Professor Tony Freeth’s latest breakthrough.

However, 2022 has blessed us with more, as this April 11, 2022 article by Jennifer Ouellette for Ars Technica reveals (Note: Links have been removed),

The mysterious Antikythera mechanism—an ancient device believed to have been used for tracking the heavens—has fascinated scientists and the public alike since it was first recovered from a shipwreck over a century ago. Much progress has been made in recent years to reconstruct the surviving fragments and learn more about how the mechanism might have been used. And now, members of a team of Greek researchers believe they have pinpointed the start date for the Antikythera mechanism, according to a preprint posted to the physics arXiv repository. Knowing that “day zero” is critical to ensuring the accuracy of the device.

“Any measuring system, from a thermometer to the Antikythera mechanism, needs a calibration in order to [perform] its calculations correctly,” co-author Aristeidis Voulgaris of the Thessaloniki Directorate of Culture and Tourism in Greece told New Scientist. “Of course it wouldn’t have been perfect—it’s not a digital computer, it’s gears—but it would have been very good at predicting solar and lunar eclipses.”

Last year, an interdisciplinary team at University College London (UCL) led by mechanical engineer Tony Freeth made global headlines with their computational model, revealing a dazzling display of the ancient Greek cosmos. The team is currently building a replica mechanism, moving gears and all, using modern machinery. The display is described in the inscriptions on the mechanism’s back cover, featuring planets moving on concentric rings with marker beads as indicators. X-rays of the front cover accurately represent the cycles of Venus and Saturn—462 and 442 years, respectively. 

The Antikythera mechanism was likely built sometime between 200 BCE and 60 BCE. However, in February 2022, Freeth suggested that the famous Greek mathematician and inventor Archimedes (sometimes referred to as the Leonardo da Vinci of antiquity) may have actually designed the mechanism, even if he didn’t personally build it. (Archimedes died in 212 BCE at the hands of a Roman soldier during the siege of Syracuse.) There are references in the writings of Cicero (106-43 BCE) to a device built by Archimedes for tracking the movement of the Sun, Moon, and five planets; it was a prized possession of the Roman general Marcus Claudius Marcellus. According to Freeth, that description is remarkably similar to the Antikythera mechanism, suggesting it was not a one-of-a-kind device.

Voulgaris and his co-authors based their new analysis on a 223-month cycle called a Saros, represented by a spiral inset on the back of the device. The cycle covers the time it takes for the Sun, Moon, and Earth to return to their same positions and includes associated solar and lunar eclipses. Given our current knowledge about how the device likely functioned, as well as the inscriptions, the team believed the start date would coincide with an annular solar eclipse.

“This is a very specific and unique date [December 22, 178 BCE],” Voulgaris said. “In one day, there occurred too many astronomical events for it to be coincidence. This date was a new moon, the new moon was at apogee, there was a solar eclipse, the Sun entered into the constellation Capricorn, it was the winter solstice.”

Others have made independent calculations and arrived at a different conclusion: the calibration date would more likely fall sometime in the summer of 204 BCE, although Voulgaris countered that this doesn’t explain why the winter solstice is engraved so prominently on the device.

“The eclipse predictions on the [device’s back] contain enough astronomical information to demonstrate conclusively that the 18-year series of lunar and solar eclipse predictions started in 204 BCE,” Alexander Jones of New York University told New Scientist, adding that there have been four independent calculations of this. “The reason such a dating is possible is because the Saros period is not a highly accurate equation of lunar and solar periodicities, so every time you push forward by 223 lunar months… the quality of the prediction degrades.”

Read Ouellette’s April 11, 2022 article for a pretty accessible description of the work involved in establishing the date. Here’s a link to and a citation for the latest attempt to date the Antikythera,

The Initial Calibration Date of the Antikythera Mechanism after the Saros spiral mechanical Apokatastasis by Aristeidis Voulgaris, Christophoros Mouratidis, and Andreas Vossinakis. arXiv:2203.15045, submitted 28 Mar 2022.

It’s open access. The calculations are beyond me but otherwise it’s quite readable.
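That said, the headline numbers in the excerpts above are easy to sanity-check. Here’s a rough back-of-the-envelope script (my own, using standard modern values for the periods, not figures taken from either paper) for the 223-month Saros cycle and the 462-year Venus and 442-year Saturn display cycles mentioned in Ouellette’s piece,

```python
# Rough sanity check of the cycles mentioned above. All constants are standard
# modern values, not numbers taken from the Antikythera papers themselves.
SYNODIC_MONTH = 29.530589   # days, new moon to new moon
YEAR = 365.2422             # days, tropical year

saros_days = 223 * SYNODIC_MONTH
print(f"Saros: {saros_days:.2f} days = {saros_days / YEAR:.2f} years")
# ~6585.32 days, i.e. 6585 days plus about a third of a day. That extra ~8 hours
# is why each successive eclipse in a Saros series occurs roughly a third of the
# way around the globe from the last one, and why, as Alexander Jones notes,
# predictions pushed forward Saros by Saros slowly degrade in quality.

# Ring periods reported for the UCL reconstruction: 462 years for Venus and
# 442 years for Saturn. Both come out very close to whole numbers of synodic
# periods (the time between successive like alignments of planet, Earth and Sun).
VENUS_SYNODIC = 583.92      # days
SATURN_SYNODIC = 378.09     # days
print(f"Venus:  {462 * YEAR / VENUS_SYNODIC:.1f} synodic periods in 462 years")
print(f"Saturn: {442 * YEAR / SATURN_SYNODIC:.1f} synodic periods in 442 years")
# roughly 289.0 and 427.0 respectively; that near-whole-number match is what
# makes multi-century ring periods like these usable for a geared display.
```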

Getting back to the Berggruen Institute and its Antikythera program/studio, good luck to all the applicants (the Antikythera application portal).

East/West collaboration on scholarship and imagination about humanity’s long-term future—six new fellows at Berggruen Research Center at Peking University

According to a January 4, 2022 Berggruen Institute news release (also received via email), the institute has appointed a new crop of fellows for its research center at Peking University,

The Berggruen Institute has announced six scientists and philosophers to serve as Fellows at the Berggruen Research Center at Peking University in Beijing, China. These eminent scholars will work together across disciplines to explore how the great transformations of our time may shift human experience and self-understanding in the decades and centuries to come.

The new Fellows are Chenjian Li, University Chair Professor at Peking University; Xianglong Zhang, professor of philosophy at Peking University; Xiaoli Liu, professor of philosophy at Renmin University of China; Jianqiao Ge, lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University; Xiaoping Chen, Director of the Robotics Laboratory at the University of Science and Technology of China; and Haidan Chen, associate professor of medical ethics and law at the School of Health Humanities at Peking University.

“Amid the pandemic, climate change, and the rest of the severe challenges of today, our Fellows are surmounting linguistic and cultural barriers to imagine positive futures for all people,” said Bing Song, Director of the China Center and Vice President of the Berggruen Institute. “Dialogue and shared understanding are crucial if we are to understand what today’s breakthroughs in science and technology really mean for the human community and the planet we all share.”

The Fellows will investigate deep questions raised by new understandings and capabilities in science and technology, exploring their implications for philosophy and other areas of study.  Chenjian Li is considering the philosophical and ethical considerations of gene editing technology. Meanwhile, Haidan Chen is exploring the social implications of brain/computer interface technologies in China, while Xiaoli Liu is studying philosophical issues arising from the intersections among psychology, neuroscience, artificial intelligence, and art.

Jianqiao Ge’s project considers the impact of artificial intelligence on the human brain, given the relative recency of its evolution into current form. Xianglong Zhang’s work explores the interplay between literary culture and the development of technology. Finally, Xiaoping Chen is developing a new concept for describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Fellows at the China Center meet monthly with the Institute’s Los Angeles-based Fellows. These fora provide an opportunity for all Fellows to share and discuss their work. Through this cross-cultural dialogue, the Institute is helping to ensure a continued high-level exchange of ideas among China, the United States, and the rest of the world about some of the deepest and most fundamental questions humanity faces today.

“Changes in our capability and understanding of the physical world affect all of humanity, and questions about their implications must be pondered at a cross-cultural level,” said Bing. “Through multidisciplinary dialogue that crosses the gulf between East and West, our Fellows are pioneering new thought about what it means to be human.”

Haidan Chen is associate professor of medical ethics and law at the School of Health Humanities at Peking University. She was a visiting postgraduate researcher at the Institute for the Study of Science Technology and Innovation (ISSTI), the University of Edinburgh; a visiting scholar at the Brocher Foundation, Switzerland; and a Fulbright visiting scholar at the Center for Biomedical Ethics, Stanford University. Her research interests embrace the ethical, legal, and social implications (ELSI) of genetics and genomics, and the governance of emerging technologies, in particular stem cells, biobanks, precision medicine, and brain science. Her publications appear in Social Science & Medicine, Bioethics, and other journals.

Xiaoping Chen is the director of the Robotics Laboratory at the University of Science and Technology of China. He also currently serves as the director of the Robot Technical Standard Innovation Base, an executive member of the Global AI Council, Chair of the Chinese RoboCup Committee, and a member of the International RoboCup Federation’s Board of Trustees. He has received the USTC’s Distinguished Research Presidential Award and won Best Paper at IEEE ROBIO 2016. His projects have won the IJCAI’s Best Autonomous Robot and Best General-Purpose Robot awards as well as twelve world championships at RoboCup. He proposed an intelligent technology pathway for robots based on Open Knowledge and the Rong-Cha principle, which have been implemented and tested in the long-term research on KeJia and JiaJia intelligent robot systems.

Jianqiao Ge is a lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University. Previously, she was a postdoctoral fellow at the University of Chicago and the Principal Investigator / Co-Investigator of more than 10 research grants supported by the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Beijing Municipal Science & Technology Commission. She has published more than 20 peer-reviewed articles in leading academic journals such as PNAS and the Journal of Neuroscience, and has been awarded two national patents. In 2008, by scanning the human brain with functional MRI, Ge and her collaborator were among the first to confirm that the human brain engages distinct neurocognitive strategies to comprehend human intelligence and artificial intelligence. Ge received her Ph.D. in psychology, B.S. in physics, a double B.S. in mathematics and applied mathematics, and a double B.S. in economics from Peking University.

Chenjian Li is the University Chair Professor of Peking University. He also serves on the China Advisory Board of Eli Lilly and Company, the China Advisory Board of Cornell University, and the Rhodes Scholar Selection Committee. He is an alumnus of Peking University’s Biology Department, Peking Union Medical College, and Purdue University. He was the former Vice Provost of Peking University, Executive Dean of Yuanpei College, and Associate Dean of the School of Life Sciences at Peking University. Prior to his return to China, he was an associate professor at Weill Medical College of Cornell University and the Aidekman Endowed Chair of Neurology at Mount Sinai School of Medicine. Dr. Li’s academic research focuses on the molecular and cellular mechanisms of neurological diseases, cancer drug development, and gene-editing and its philosophical and ethical considerations. Li also writes as a public intellectual on science and humanity, and his Chinese translation of Richard Feynman’s book What Do You Care What Other People Think? received the 2001 National Publisher’s Book Award.

Xiaoli Liu is professor of philosophy at Renmin University. She is also a Director of the Chinese Society of Philosophy of Science. Her primary research interests are philosophy of mathematics, philosophy of science and philosophy of cognitive science. Her main works are “Life of Reason: A Study of Gödel’s Thought,” “Challenges of Cognitive Science to Contemporary Philosophy,” and “Philosophical Issues in the Frontiers of Cognitive Science.” She edited “Symphony of Mind and Machine” and the book series “Mind and Cognition.” In 2003, she co-founded the “Mind and Machine workshop” with interdisciplinary scholars, which has held 18 consecutive annual meetings. Liu received her Ph.D. from Peking University and was a senior visiting scholar at Harvard University.

Xianglong Zhang is a professor of philosophy at Peking University. His research areas include Confucian philosophy, phenomenology, Western and Eastern comparative philosophy. His major works (in Chinese except where noted) include: Heidegger’s Thought and Chinese Tao of Heaven; Biography of Heidegger; From Phenomenology to Confucius; The Exposition and Comments of Contemporary Western Philosophy; The Exposition and Comments of Classic Western Philosophy; Thinking to Take Refuge: The Chinese Ancient Philosophies in the Globalization; Lectures on the History of Confucian Philosophy (four volumes); German Philosophy, German Culture and Chinese Philosophical Thinking; Home and Filial Piety: From the View between the Chinese and the Western.

About the Berggruen China Center
Breakthroughs in artificial intelligence and life science have led to the fourth scientific and technological revolution. The Berggruen China Center is a hub for East-West research and dialogue dedicated to the cross-cultural and interdisciplinary study of the transformations affecting humanity. Intellectual themes for research programs are focused on frontier sciences, technologies, and philosophy, as well as issues involving digital governance and globalization.

About the Berggruen Institute:
The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world. To date, projects inaugurated at the Berggruen Institute have helped develop a youth jobs plan for Europe, fostered a more open and constructive dialogue between Chinese leadership and the West, strengthened the ballot initiative process in California, and launched Noema, a new publication that brings thought leaders from around the world together to share ideas. In addition, the Berggruen Prize, a $1 million award, is conferred annually by an independent jury to a thinker whose ideas are shaping human self-understanding to advance humankind.

You can find out more about the Berggruen China Center here and you can access a list along with biographies of all the Berggruen Institute fellows here.

Getting ready

I look forward to hearing about the projects from these thinkers.

Gene editing and ethics

I may have to reread some books in anticipation of Chenjian Li’s work on the philosophical and ethical considerations of gene editing technology. I wonder if there’ll be any reference to the He Jiankui affair.

(Briefly for those who may not be familiar with the situation, He claimed to be the first to gene edit babies. In November 2018, news about the twins, Lulu and Nana, was a sensation and He was roundly criticized for his work. I have not seen any information about how many babies were gene edited for He’s research; there could be as many as six. My July 28, 2020 posting provided an update. I haven’t stumbled across anything substantive since then.)

There are two books I recommend should you be interested in gene editing, as told through the lens of the He Jiankui affair. If you can, read both as that will give you a more complete picture.

In no particular order: Kevin Davies’ 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,” provides an extensive and accessible look at the science, the politics of scientific research, and some of the pressures on scientists in all countries; it’s an excellent introduction from an insider. Here’s more from Davies’ biographical sketch,

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome, The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. …

The other book is “The Mutant Project: Inside the Global Race to Genetically Modify Humans” (2020) by Eben Kirksey, an anthropologist who has an undergraduate degree in one of the sciences. He too provides scientific grounding, but his focus is on the cultural and personal dimensions of the He Jiankui affair, on the culture of scientific research, irrespective of where it’s practiced, and on the culture associated with the DIY (do-it-yourself) biology community. Here’s more from Kirksey’s biographical sketch,

EBEN KIRKSEY is an American anthropologist and Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia.

Brain/computer interfaces (BCI)

I’m happy to see that Haidan Chen will be exploring the social implications of brain/computer interface technologies in China. I haven’t seen much being done here in Canada but my December 23, 2021 posting, Your cyborg future (brain-computer interface) is closer than you think, highlights work being done at the Imperial College London (ICL),

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

You might also find my September 17, 2020 posting has some useful information. Check under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead for another story about attachment to one’s brain implant and also the “Finally” subhead for more reading suggestions.

Artificial intelligence (AI), art, and the brain

I’ve lumped together three of the thinkers, Xiaoli Liu, Jianqiao Ge and Xianglong Zhang, as there is some overlap (in my mind, if nowhere else),

  • Liu’s work on philosophical issues as seen in the intersections of psychology, neuroscience, artificial intelligence, and art
  • Ge’s work on the evolution of the brain and the impact that artificial intelligence may have on it
  • Zhang’s work on the relationship between literary culture and the development of technology

A December 3, 2021 posting, True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read), is both a review of a recent episode of the Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, and a dive into a number of issues as can be seen under subheads such as “AI and Creativity,” “Kazuo Ishiguro?” and “Evolution.”

You may also want to check out my December 27, 2021 posting, Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary, for an eye opening experience. If nothing else, just watch the embedded video.

This suggestion relates most closely to Ge’s and Zhang’s work. If you haven’t already come across it, there’s Walter J. Ong’s 1982 book, “Orality and Literacy: The Technologizing of the Word.” From the introductory page of the 2002 edition (PDF),

This classic work explores the vast differences between oral and literate cultures and offers a brilliantly lucid account of the intellectual, literary and social effects of writing, print and electronic technology. In the course of his study, Walter J. Ong offers fascinating insights into oral genres across the globe and through time and examines the rise of abstract philosophical and scientific thinking. He considers the impact of orality-literacy studies not only on literary criticism and theory but on our very understanding of what it is to be a human being, conscious of self and other.

In 2013, a 30th anniversary edition of the book was released and is still in print.

Philosophical traditions

I’m very excited to learn more about Xiaoping Chen’s work describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Should any of my readers have suggestions for introductory readings on these philosophical traditions, please do use the Comments option for this blog. In fact, if you have suggestions for other readings on these topics, I would be very happy to learn of them.

Congratulations to the six Fellows at the Berggruen Research Center at Peking University in Beijing, China. I look forward to reading articles about your work in the Berggruen Institute’s Noema magazine and, possibly, attending your online events.

Bruno Latour, science, and the 2021 Kyoto Prize in Arts and Philosophy: Commemorative Lecture

The Kyoto Prize (Wikipedia entry) was first given out in 1985. These days (I checked a currency converter today, November 15, 2021), the Inamori Foundation, which administers the prize, gives out 100 million yen per prize, worth about $1,098,000 CAD or $876,800 USD.
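As a quick sanity check on those numbers (my own arithmetic, using exchange rates implied by the figures above),

```python
# Converting the 100-million-yen prize at approximate mid-November 2021 rates.
prize_jpy = 100_000_000          # 100 million yen per category
jpy_to_cad = 0.01098             # approximate rate implied by the CAD figure above
jpy_to_usd = 0.008768            # approximate rate implied by the USD figure above

print(f"{prize_jpy * jpy_to_cad:,.0f} CAD")  # about 1,098,000 CAD
print(f"{prize_jpy * jpy_to_usd:,.0f} USD")  # about 876,800 USD
```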

Here’s more about the prize from the November 9, 2021 Inamori Foundation press release on EurekAlert,

The Kyoto Prize is an international award of Japanese origin, presented to individuals who have made significant contributions to the progress of science, the advancement of civilization, and the enrichment and elevation of the human spirit. The Prize is granted in the three categories of Advanced Technology, Basic Sciences, and Arts and Philosophy, each of which comprises four fields, making a total of 12 fields. Every year, one Prize is awarded in each of the three categories with prize money of 100 million yen per category.

One of the distinctive features of the Kyoto Prize is that it recognizes both “science” and “arts and philosophy” fields. This is because of its founder Kazuo Inamori’s conviction that the future of humanity can be assured only when there is a balance between scientific development and the enrichment of the human spirit.

The recipient for arts and philosophy, Bruno Latour, has been mentioned here before (from a July 15, 2020 posting titled, ‘Architecture, the practice of science, and meaning’),

The 1979 book, Laboratory Life: the Social Construction of Scientific Facts by Bruno Latour and Steve Woolgar immediately came to mind on reading about a new book (The New Architecture of Science: Learning from Graphene) linking architecture to the practice of science (research on graphene). It turns out that one of the authors studied with Latour. (For more about Laboratory Life see: Bruno Latour’s Wikipedia entry; scroll down to Main Works)

Back to Latour and his prize from the November 9, 2021 Inamori Foundation press release,

Bruno Latour, Professor Emeritus at Paris Institute of Political Studies (Sciences Po), received the 2021 Kyoto Prize in Arts and Philosophy for his radically re-examining “modernity” by developing a philosophy that focuses on interactions between technoscience and social structure. Latour’s Commemorative Lecture “How to React to a Change in Cosmology” will be released on November 10, 2021, 10:00 AM JST at the 2021 Kyoto Prize Special Website.

“Viruses–we don’t even know if viruses are our enemies or our friends!” says Latour in his lecture. By using the ongoing Covid epidemic as a sort of lead, Latour discusses the shift in cosmology, a structure that distributes agencies around. He then suggests a “new project” we have to work on now, which he assumes is very different from the modernist project.

Bruno Latour has revolutionized the conventional view of science by treating nature, humans, laboratory equipment, and other entities as equal actors, and describing technoscience as the hybrid network of these actors. His philosophy re-examines “modernity” based on the dualism of nature and society. He has a large influence across disciplines, with his multifaceted activities that include proposals regarding global environmental issues.

Latour and the other two 2021 Kyoto Prize laureates are introduced on the 2021 Kyoto Prize Special Website with information about their work, profiles, and three-minute introduction videos. The Kyoto Prize in Advanced Technology for this year went to Andrew Chi-Chih Yao, Professor of Institute for Interdisciplinary Information Sciences at Tsinghua University, and Basic Sciences to Robert G. Roeder, Arnold and Mabel Beckman Professor of Biochemistry and Molecular Biology at The Rockefeller University. 

The folks at the Kyoto Prize have made a three-minute video introduction to Bruno Latour available,

For more information you can check out the Inamori Foundation website. There are two Kyoto Prize websites, the 2021 Kyoto Prize Special Website and the Kyoto Prize website. These are all English language websites and, if you have the language skills and the interest, it is possible to toggle (upper right hand side) and get the Japanese language version.

Finally, there’s a dedicated Bruno Latour webpage on the 2021 Kyoto Prize Special Website, and Bruno Latour has his own website where French and English items are mixed together, but it seems the majority of the content is in English.

The metaverse or not

The ‘metaverse’ seems to be everywhere these days, especially since Facebook has made a number of announcements about theirs (more about that later in this posting).

At this point, the metaverse is very hyped up despite having been around for about 30 years. According to the Wikipedia timeline (see the Metaverse entry), the first one was a MOO in 1993 called ‘The Metaverse’. In any event, it seems like it might be a good time to see what’s changed since I dipped my toe into a metaverse (Second Life by Linden Labs) in 2007.

(For grammar buffs, I switched from definite article [the] to indefinite article [a] purposefully. In reading the various opinion pieces and announcements, it’s not always clear whether they’re talking about a single, overarching metaverse [the] replacing the single, overarching internet or whether there will be multiple metaverses, in which case [a].)

The hype/the buzz … call it what you will

This September 6, 2021 piece by Nick Pringle for Fast Company dates the beginning of the metaverse to a 1992 science fiction novel before launching into some typical marketing hype (for those who don’t know, hype is the short form for hyperbole; Note: Links have been removed),

The term metaverse was coined by American writer Neal Stephenson in his 1993 sci-fi hit Snow Crash. But what was far-flung fiction 30 years ago is now nearing reality. At Facebook’s most recent earnings call [June 2021], CEO Mark Zuckerberg announced the company’s vision to unify communities, creators, and commerce through virtual reality: “Our overarching goal across all of these initiatives is to help bring the metaverse to life.”

So what actually is the metaverse? It’s best explained as a collection of 3D worlds you explore as an avatar. Stephenson’s original vision depicted a digital 3D realm in which users interacted in a shared online environment. Set in the wake of a catastrophic global economic crash, the metaverse in Snow Crash emerged as the successor to the internet. Subcultures sprung up alongside new social hierarchies, with users expressing their status through the appearance of their digital avatars.

Today virtual worlds along these lines are formed, populated, and already generating serious money. Household names like Roblox and Fortnite are the most established spaces; however, there are many more emerging, such as Decentraland, Upland, Sandbox, and the soon to launch Victoria VR.

These metaverses [emphasis mine] are peaking at a time when reality itself feels dystopian, with a global pandemic, climate change, and economic uncertainty hanging over our daily lives. The pandemic in particular saw many of us escape reality into online worlds like Roblox and Fortnite. But these spaces have proven to be a place where human creativity can flourish amid crisis.

In fact, we are currently experiencing an explosion of platforms parallel to the dotcom boom. While many of these fledgling digital worlds will become what Ask Jeeves was to Google, I predict [emphasis mine] that a few will match the scale and reach of the tech giant—or even exceed it.

Because the metaverse brings a new dimension to the internet, brands and businesses will need to consider their current and future role within it. Some brands are already forging the way and establishing a new genre of marketing in the process: direct to avatar (D2A). Gucci sold a virtual bag for more than the real thing in Roblox; Nike dropped virtual Jordans in Fortnite; Coca-Cola launched avatar wearables in Decentraland, and Sotheby’s has an art gallery that your avatar can wander in your spare time.

D2A is being supercharged by blockchain technology and the advent of digital ownership via NFTs, or nonfungible tokens. NFTs are already making waves in art and gaming. More than $191 million was transacted on the “play to earn” blockchain game Axie Infinity in its first 30 days this year. This kind of growth makes NFTs hard for brands to ignore. In the process, blockchain and crypto are starting to feel less and less like “outsider tech.” There are still big barriers to be overcome—the UX of crypto being one, and the eye-watering environmental impact of mining being the other. I believe technology will find a way. History tends to agree.

Detractors see the metaverse as a pandemic fad, wrapping it up with the current NFT bubble or reducing it to Zuck’s [Mark Zuckerberg and Facebook] dystopian corporate landscape. This misses the bigger behavior change that is happening among Gen Alpha. When you watch how they play, it becomes clear that the metaverse is more than a buzzword.

For Gen Alpha [emphasis mine], gaming is social life. While millennials relentlessly scroll feeds, Alphas and Zoomers [emphasis mine] increasingly stroll virtual spaces with their friends. Why spend the evening staring at Instagram when you can wander around a virtual Harajuku with your mates? If this seems ridiculous to you, ask any 13-year-old what they think.

Who is Nick Pringle and how accurate are his predictions?

At the end of his September 6, 2021 piece, you’ll find this,

Nick Pringle is SVP [Senior Vice President] executive creative director at R/GA London.

According to the R/GA Wikipedia entry,

… [the company] evolved from a computer-assisted film-making studio to a digital design and consulting company, as part of a major advertising network.

Here’s how Pringle sees our future, from his September 6, 2021 piece,

By thinking “virtual first,” you can see how these spaces become highly experimental, creative, and valuable. The products you can design aren’t bound by physics or marketing convention—they can be anything, and are now directly “ownable” through blockchain. …

I believe that the metaverse is here to stay. That means brands and marketers now have the exciting opportunity to create products that exist in multiple realities. The winners will understand that the metaverse is not a copy of our world, and so we should not simply paste our products, experiences, and brands into it.

I emphasized “These metaverses …” in the previous section to highlight the fact that I find the use of ‘metaverses’ vs. ‘worlds’ confusing as the words are sometimes used as synonyms and sometimes as distinctions. We do it all the time in all sorts of conversations but for someone who’s an outsider to a particular occupational group or subculture, the shifts can make for confusion.

As for Gen Alpha and Zoomer, I’m not a fan of ‘Gen anything’ as shorthand for describing a cohort based on birth years. For example, “For Gen Alpha [emphasis mine], gaming is social life,” ignores social and economic classes, as well as the importance of locations/geography, e.g., Afghanistan in contrast to the US.

To answer the question I asked, Pringle does not mention any record of accuracy for his predictions for the future but I was able to discover that he is a “multiple Cannes Lions award-winning creative” (more here).

A more measured view of the metaverse

An October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) by Adi Robertson and Jay Peters for The Verge offers a deeper dive into the metaverse (Note: Links have been removed),

In recent months you may have heard about something called the metaverse. Maybe you’ve read that the metaverse is going to replace the internet. Maybe we’re all supposed to live there. Maybe Facebook (or Epic, or Roblox, or dozens of smaller companies) is trying to take it over. And maybe it’s got something to do with NFTs [non-fungible tokens]?

Unlike a lot of things The Verge covers, the metaverse is tough to explain for one reason: it doesn’t necessarily exist. It’s partly a dream for the future of the internet and partly a neat way to encapsulate some current trends in online infrastructure, including the growth of real-time 3D worlds.

Then what is the real metaverse?

There’s no universally accepted definition of a real “metaverse,” except maybe that it’s a fancier successor to the internet. Silicon Valley metaverse proponents sometimes reference a description from venture capitalist Matthew Ball, author of the extensive Metaverse Primer:

“The Metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations that support continuity of identity, objects, history, payments, and entitlements, and can be experienced synchronously by an effectively unlimited number of users, each with an individual sense of presence.”

Facebook, arguably the tech company with the biggest stake in the metaverse, describes it more simply:

“The ‘metaverse’ is a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”

There are also broader metaverse-related taxonomies like one from game designer Raph Koster, who draws a distinction between “online worlds,” “multiverses,” and “metaverses.” To Koster, online worlds are digital spaces — from rich 3D environments to text-based ones — focused on one main theme. Multiverses are “multiple different worlds connected in a network, which do not have a shared theme or ruleset,” including Ready Player One’s OASIS. And a metaverse is “a multiverse which interoperates more with the real world,” incorporating things like augmented reality overlays, VR dressing rooms for real stores, and even apps like Google Maps.

If you want something a little snarkier and more impressionistic, you can cite digital scholar Janet Murray — who has described the modern metaverse ideal as “a magical Zoom meeting that has all the playful release of Animal Crossing.”

But wait, now Ready Player One isn’t a metaverse and virtual worlds don’t have to be 3D? It sounds like some of these definitions conflict with each other.

An astute observation.

Why is the term “metaverse” even useful? “The internet” already covers mobile apps, websites, and all kinds of infrastructure services. Can’t we roll virtual worlds in there, too?

Matthew Ball favors the term “metaverse” because it creates a clean break with the present-day internet. [emphasis mine] “Using the metaverse as a distinctive descriptor allows us to understand the enormity of that change and in turn, the opportunity for disruption,” he said in a phone interview with The Verge. “It’s much harder to say ‘we’re late-cycle into the last thing and want to change it.’ But I think understanding this next wave of computing and the internet allows us to be more proactive than reactive and think about the future as we want it to be, rather than how to marginally affect the present.”

A more cynical spin is that “metaverse” lets companies dodge negative baggage associated with “the internet” in general and social media in particular. “As long as you can make technology seem fresh and new and cool, you can avoid regulation,” researcher Joan Donovan told The Washington Post in a recent article about Facebook and the metaverse. “You can run defense on that for several years before the government can catch up.”

There’s also one very simple reason: it sounds more futuristic than “internet” and gets investors and media people (like us!) excited.

People keep saying NFTs are part of the metaverse. Why?

NFTs are complicated in their own right, and you can read more about them here. Loosely, the thinking goes: NFTs are a way of recording who owns a specific virtual good, creating and transferring virtual goods is a big part of the metaverse, thus NFTs are a potentially useful financial architecture for the metaverse. Or in more practical terms: if you buy a virtual shirt in Metaverse Platform A, NFTs can create a permanent receipt and let you redeem the same shirt in Metaverse Platforms B to Z.

Lots of NFT designers are selling collectible avatars like CryptoPunks, Cool Cats, and Bored Apes, sometimes for astronomical sums. Right now these are mostly 2D art used as social media profile pictures. But we’re already seeing some crossover with “metaverse”-style services. The company Polygonal Mind, for instance, is building a system called CryptoAvatars that lets people buy 3D avatars as NFTs and then use them across multiple virtual worlds.
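To make the ‘permanent receipt’ idea a little more concrete, here’s a deliberately simplified sketch (my own toy illustration; real NFTs live on a blockchain and follow standards such as ERC-721, and the platform and asset names below are made up),

```python
# A toy version of the "permanent receipt" idea: a shared record of who owns which
# virtual item, which any participating platform can consult and then render with
# its own assets. This is the concept only, not how any real NFT system works.
ledger = {}  # token_id -> {"item": ..., "owner": ...}

def mint(token_id, item, owner):
    """Record that `owner` holds `item` under `token_id`."""
    ledger[token_id] = {"item": item, "owner": owner}

def redeem(token_id, platform_catalogue):
    """A platform looks up the shared ledger and serves its own version of the item."""
    record = ledger.get(token_id)
    if record is None:
        return None
    return platform_catalogue.get(record["item"])

# A shirt bought on hypothetical Platform A...
mint("token-42", "virtual-shirt", "alice")
# ...can be redeemed on hypothetical Platform B, which supplies its own 3D model for it.
platform_b_catalogue = {"virtual-shirt": "platform_b_shirt_model.glb"}
print(redeem("token-42", platform_b_catalogue))  # -> platform_b_shirt_model.glb
```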

If you have the time, the October 4, 2021 article (What is the metaverse, and do I have to care? One part definition, one part aspiration, one part hype) is definitely worth the read.

Facebook’s metaverse and other news

Since starting this post sometime in September 2021, the situation regarding Facebook has changed a few times. I’ve decided to begin my version of the story from a summer 2021 announcement.

On Monday, July 26, 2021, Facebook announced a new Metaverse product group. From a July 27, 2021 article by Scott Rosenberg for Yahoo News (Note: A link has been removed),

Facebook announced Monday it was forming a new Metaverse product group to advance its efforts to build a 3D social space using virtual and augmented reality tech.

Facebook’s new Metaverse product group will report to Andrew Bosworth, Facebook’s vice president of virtual and augmented reality [emphasis mine], who announced the new organization in a Facebook post.

Facebook, integrity, and safety in the metaverse

On September 27, 2021 Facebook posted this webpage (Building the Metaverse Responsibly by Andrew Bosworth, VP, Facebook Reality Labs [emphasis mine] and Nick Clegg, VP, Global Affairs) on its site,

The metaverse won’t be built overnight by a single company. We’ll collaborate with policymakers, experts and industry partners to bring this to life.

We’re announcing a $50 million investment in global research and program partners to ensure these products are developed responsibly.

We develop technology rooted in human connection that brings people together. As we focus on helping to build the next computing platform, our work across augmented and virtual reality and consumer hardware will deepen that human connection regardless of physical distance and without being tied to devices. 

Introducing the XR [extended reality] Programs and Research Fund

There’s a long road ahead. But as a starting point, we’re announcing the XR Programs and Research Fund, a two-year $50 million investment in programs and external research to help us in this effort. Through this fund, we’ll collaborate with industry partners, civil rights groups, governments, nonprofits and academic institutions to determine how to build these technologies responsibly. 

…

Where integrity and safety are concerned Facebook is once again having some credibility issues according to an October 5, 2021 Associated Press article (Whistleblower testifies Facebook chooses profit over safety, calls for ‘congressional action’) posted on the Canadian Broadcasting Corporation’s (CBC) news online website.

Rebranding Facebook’s integrity and safety issues away?

It seems Facebook’s credibility issues are such that the company is about to rebrand itself according to an October 19, 2021 article by Alex Heath for The Verge (Note: Links have been removed),

Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to talk about at the company’s annual Connect conference on October 28th [2021], but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.

Facebook isn’t the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a “camera company” and debuted its first pair of Spectacles camera glasses.

If you have time, do read Heath’s article in its entirety.

An October 20, 2021 Thomson Reuters item on CBC (Canadian Broadcasting Corporation) news online includes quotes from some industry analysts about the rebrand,

“It reflects the broadening out of the Facebook business. And then, secondly, I do think that Facebook’s brand is probably not the greatest given all of the events of the last three years or so,” internet analyst James Cordwell at Atlantic Equities said.

“Having a different parent brand will guard against having this negative association transferred into a new brand, or other brands that are in the portfolio,” said Shankha Basu, associate professor of marketing at University of Leeds.

Tyler Jadah’s October 20, 2021 article for the Daily Hive includes an earlier announcement, one not mentioned in the other two articles about the rebranding (Note: A link has been removed),

Earlier this week [October 17, 2021], Facebook announced it will start “a journey to help build the next computing platform” and will hire 10,000 new high-skilled jobs within the European Union (EU) over the next five years.

“Working with others, we’re developing what is often referred to as the ‘metaverse’ — a new phase of interconnected virtual experiences using technologies like virtual and augmented reality,” wrote Facebook’s Nick Clegg, the VP of Global Affairs. “At its heart is the idea that by creating a greater sense of “virtual presence,” interacting online can become much closer to the experience of interacting in person.”

Clegg says the metaverse has the potential to help unlock access to new creative, social, and economic opportunities across the globe and the virtual world.

In an email with Facebook’s Corporate Communications Canada, David Troya-Alvarez told Daily Hive, “We don’t comment on rumour or speculation,” in regards to The Verge‘s report.

I will update this posting when and if Facebook rebrands itself into a ‘metaverse’ company.

***See Oct. 28, 2021 update at the end of this posting and prepare yourself for ‘Meta’.***

Who (else) cares about integrity and safety in the metaverse?

Apparently, the international legal firm, Norton Rose Fulbright also cares about safety and integrity in the metaverse. Here’s more from their July 2021 The Metaverse: The evolution of a universal digital platform webpage,

In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse.  They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. 

What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond.

Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider.

What are the potential legal issues?

The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge.

Data

Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours.

Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way.

The hungry Metaverse participant

How might actors in the Metaverse target persons participating in the Metaverse? Let us assume one such woman is hungry at the time of participating. The Metaverse may observe a woman frequently glancing at café and restaurant windows and stopping to look at cakes in a bakery window, and determine that she is hungry and serve her food adverts accordingly.

Contrast this with current technology, where a website or app can generally only ascertain this type of information if the woman actively searched for food outlets or similar on her device.

Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. 

This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice.

Who is responsible for complying with applicable data protection law? 

In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). 

In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example:

  • Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared?
  • Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so?

Either way, many questions arise, including:

  • How should the different entities each display their own privacy notice to users?
  • Or should this be done jointly?
  • How and when should users’ consent be collected?
  • Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse?
  • What data sharing arrangements need to be put in place and how will these be implemented?

There’s a lot more to this page including a look at Social Media Regulation and Intellectual Property Rights.

One other thing, according to the Norton Rose Fulbright Wikipedia entry, it is one of the ten largest legal firms in the world.

How many realities are there?

I’m starting to think we should be talking about RR (real reality), as well as VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). It seems that all of these (except RR, which is implied) will be part of the ‘metaverse’, assuming that it ever comes into existence. Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,

Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mix of virtual reality and reality, creating virtual objects that can interact with the actual environment. XR brings all three realities (AR, VR, MR) together under one term.

If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.

Alternate Mixed Realities: an example

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities (ISMAR ’21)

Here’s a description from one of the researchers, Mohamed Kari, of the video, which you can see above, and the paper he and his colleagues presented at the 20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021 (from the TransforMR page on YouTube),

We present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes in previously unseen, uncontrolled, and open-ended real-world environments.

To get a sense of how recent this work is, ISMAR 2021 was held from October 4 – 8, 2021.

The team’s 2021 ISMAR paper, TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities, by Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Rene Fender, David Bethge, Reinhard Schütte, and Christian Holz lists two educational institutions I’d expect to see (University of Duisburg-Essen and ETH Zürich); the surprise was this one: Porsche AG. Perhaps that explains the preponderance of vehicles in this demonstration.
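For readers who like to see an idea in code, here’s a rough, purely illustrative sketch of what ‘pose-aware object substitution’ involves (the class and asset names are my own stand-ins, not the TransforMR authors’ code): detect real objects and their 3D poses in each camera frame, then render a virtual stand-in at the same pose,

```python
# A hypothetical sketch of pose-aware object substitution for video see-through
# mixed reality: find real objects and their poses, then overlay substitutes.
SUBSTITUTES = {"car": "toy_rocket.glb", "person": "robot_avatar.glb"}

class StubDetector:
    """Stands in for an object detector plus 6-DoF pose estimator."""
    def detect_with_pose(self, frame):
        # pretend we found one car at some position/orientation in this frame
        return [("car", {"position": (1.0, 0.0, 4.2), "rotation": (0, 90, 0)})]

class StubRenderer:
    """Stands in for a 3D renderer that overlays an asset onto the camera frame."""
    def draw(self, frame, asset, pose):
        return frame + [f"{asset} rendered at {pose['position']}"]

def compose_frame(frame, detector, renderer):
    for label, pose in detector.detect_with_pose(frame):
        asset = SUBSTITUTES.get(label)
        if asset is not None:                 # leave unrecognized objects untouched
            frame = renderer.draw(frame, asset, pose)
    return frame

print(compose_frame([], StubDetector(), StubRenderer()))
# -> ['toy_rocket.glb rendered at (1.0, 0.0, 4.2)']
```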

Space walking in virtual reality

Ivan Semeniuk’s October 2, 2021 article for the Globe and Mail highlights a collaboration among Montreal’s Felix and Paul Studios, NASA (US National Aeronautics and Space Administration), and Time Studios,

Communing with the infinite while floating high above the Earth is an experience that, so far, has been known to only a handful.

Now, a Montreal production company aims to share that experience with audiences around the world, following the first ever recording of a spacewalk in the medium of virtual reality.

The company, which specializes in creating virtual-reality experiences with cinematic flair, got its long-awaited chance in mid-September when astronauts Thomas Pesquet and Akihiko Hoshide ventured outside the International Space Station for about seven hours to install supports and other equipment in preparation for a new solar array.

The footage will be used in the fourth and final instalment of Space Explorers: The ISS Experience, a virtual-reality journey to space that has already garnered a Primetime Emmy Award for its first two episodes.

From the outset, the production was developed to reach audiences through a variety of platforms for 360-degree viewing, including 5G-enabled smart phones and tablets. A domed theatre version of the experience for group audiences opened this week at the Rio Tinto Alcan Montreal Planetarium. Those who desire a more immersive experience can now see the first two episodes in VR form by using a headset available through the gaming and entertainment company Oculus. Scenes from the VR series are also on offer as part of The Infinite, an interactive exhibition developed by Montreal’s Phi Studio, whose works focus on the intersection of art and technology. The exhibition, which runs until Nov. 7 [2021], has attracted 40,000 visitors since it opened in July [2021?].

At a time when billionaires are able to head off on private extraterrestrial sojourns that almost no one else could dream of, Lajeunesse [Félix Lajeunesse, co-founder and creative director of Felix and Paul studios] said his project was developed with a very different purpose in mind: making it easier for audiences to become eyewitnesses rather than distant spectators to humanity’s greatest adventure.

For the final instalments, the storyline takes viewers outside of the space station with cameras mounted on the Canadarm, and – for the climax of the series – by following astronauts during a spacewalk. These scenes required extensive planning, not only because of the limited time frame in which they could be gathered, but because of the lighting challenges presented by a constantly shifting sun as the space station circles the globe once every 90 minutes.

… Lajeunesse said that it was equally important to acquire shots that are not just technically spectacular but that serve the underlying themes of Space Explorers: The ISS Experience. These include an examination of human adaptation and advancement, and the unity that emerges within a group of individuals from many places and cultures and who must learn to co-exist in a high risk environment in order to achieve a common goal.

If you have the time, do read Semeniuk’s October 2, 2021 article in its entirety. You can find the exhibits (hopefully, you’re in Montreal): The Infinite here and Space Explorers: The ISS Experience here (see the preview below),

The realities and the ‘verses

There always seems to be a lot of grappling with new and newish science/technology where people strive to coin terms and define them while everyone, including members of the corporate community, attempts to cash in.

The last time I looked (probably about two years ago), I wasn’t able to find any good definitions for alternate reality and mixed reality. (By good, I mean something which clearly explicated the difference between the two.) It was nice to find something this time.

As for Facebook and its attempts to join/create a/the metaverse, the company’s timing seems particularly fraught. As well, paradigm-shifting technology doesn’t usually start with large corporations. The company is ignoring its own history.

Multiverses

Writing this piece has reminded me of the upcoming movie, “Doctor Strange in the Multiverse of Madness” (Wikipedia entry). While this multiverse is based on a comic book, the idea of a Multiverse (Wikipedia entry) has been around for quite some time,

Early recorded examples of the idea of infinite worlds existed in the philosophy of Ancient Greek Atomism, which proposed that infinite parallel worlds arose from the collision of atoms. In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time.[1] The concept of multiple universes became more defined in the Middle Ages.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, music, and all kinds of literature, particularly in science fiction, comic books and fantasy. In these contexts, parallel universes are also called “alternate universes”, “quantum universes”, “interpenetrating dimensions”, “parallel universes”, “parallel dimensions”, “parallel worlds”, “parallel realities”, “quantum realities”, “alternate realities”, “alternate timelines”, “alternate dimensions” and “dimensional planes”.

The physics community has debated the various multiverse theories over time. Prominent physicists are divided about whether any other universes exist outside of our own.

Living in a computer simulation or base reality

The whole thing is getting a little confusing for me so I think I’ll stick with RR (real reality) or, as it’s also known, base reality. For the notion of base reality, I want to thank astronomer David Kipping of Columbia University, quoted in Anil Ananthaswamy’s article, for this analysis of the idea that we might all be living in a computer simulation (from my December 8, 2020 posting; scroll down about 50% of the way to the “Are we living in a computer simulation?” subhead),

… there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

To sum it up (briefly)

I’m sticking with the base reality (or real reality) concept, which is where various people and companies are attempting to create a multiplicity of metaverses, or the metaverse that would effectively replace the internet. This metaverse can include any or all of these realities (AR/MR/VR/XR) along with base reality. As for Facebook’s attempt to build ‘the metaverse’, it seems a little grandiose.

The computer simulation theory is an interesting thought experiment (just like the multiverse is an interesting thought experiment). I’ll leave them there.

Wherever it is we are living, these are interesting times.

***Updated October 28, 2021: D. (Devindra) Hardawar’s October 28, 2021 article for engadget offers details about the rebranding along with a dash of cynicism (Note: A link has been removed),

Here’s what Facebook’s metaverse isn’t: It’s not an alternative world to help us escape from our dystopian reality, a la Snow Crash. It won’t require VR or AR glasses (at least, not at first). And, most importantly, it’s not something Facebook wants to keep to itself. Instead, as Mark Zuckerberg described to media ahead of today’s Facebook Connect conference, the company is betting it’ll be the next major computing platform after the rise of smartphones and the mobile web. Facebook is so confident, in fact, Zuckerberg announced that it’s renaming itself to “Meta.”

After spending the last decade becoming obsessed with our phones and tablets — learning to stare down and scroll practically as a reflex — the Facebook founder thinks we’ll be spending more time looking up at the 3D objects floating around us in the digital realm. Or maybe you’ll be following a friend’s avatar as they wander around your living room as a hologram. It’s basically a digital world layered right on top of the real world, or an “embodied internet” as Zuckerberg describes.

Before he got into the weeds for his grand new vision, though, Zuckerberg also preempted criticism about looking into the future now, as the Facebook Papers paint the company as a mismanaged behemoth that constantly prioritizes profit over safety. While acknowledging the seriousness of the issues the company is facing, noting that it’ll continue to focus on solving them with “industry-leading” investments, Zuckerberg said: 

“The reality is is that there’s always going to be issues and for some people… they may have the view that there’s never really a great time to focus on the future… From my perspective, I think that we’re here to create things and we believe that we can do this and that technology can make things better. So we think it’s important to to push forward.”

Given the extent to which Facebook, and Zuckerberg in particular, have proven to be untrustworthy stewards of social technology, it’s almost laughable that the company wants us to buy into its future. But, like the rise of photo sharing and group chat apps, Zuckerberg at least has a good sense of what’s coming next. And for all of his talk of turning Facebook into a metaverse company, he’s adamant that he doesn’t want to build a metaverse that’s entirely owned by Facebook. He doesn’t think other companies will either. Like the mobile web, he thinks every major technology company will contribute something towards the metaverse. He’s just hoping to make Facebook a pioneer.

“Instead of looking at a screen, or today, how we look at the Internet, I think in the future you’re going to be in the experiences, and I think that’s just a qualitatively different experience,” Zuckerberg said. It’s not quite virtual reality as we think of it, and it’s not just augmented reality. But ultimately, he sees the metaverse as something that’ll help to deliver more presence for digital social experiences — the sense of being there, instead of just being trapped in a zoom window. And he expects there to be continuity across devices, so you’ll be able to start chatting with friends on your phone and seamlessly join them as a hologram when you slip on AR glasses.

D. (Devindra) Hardawar’s October 28, 2021 article provides a lot more details and I recommend reading it in its entirety.

Cortical spheroids (like mini-brains) could unlock (larger) brain’s mysteries

A March 19, 2021 Northwestern University news release on EurekAlert announces the creation of a device designed to monitor brain organoids (for anyone unfamiliar with brain organoids, there’s more information after the news),

A team of scientists, led by researchers at Northwestern University, Shirley Ryan AbilityLab and the University of Illinois at Chicago (UIC), has developed novel technology promising to increase understanding of how brains develop, and offer answers on repairing brains in the wake of neurotrauma and neurodegenerative diseases.

Their research is the first to combine the most sophisticated 3-D bioelectronic systems with highly advanced 3-D human neural cultures. The goal is to enable precise studies of how human brain circuits develop and repair themselves in vitro. The study is the cover story for the March 19 [March 17, 2021 according to the citation] issue of Science Advances.

The cortical spheroids used in the study, akin to “mini-brains,” were derived from human-induced pluripotent stem cells. Leveraging a 3-D neural interface system that the team developed, scientists were able to create a “mini laboratory in a dish” specifically tailored to study the mini-brains and collect different types of data simultaneously. Scientists incorporated electrodes to record electrical activity. They added tiny heating elements to either keep the brain cultures warm or, in some cases, intentionally overheat the cultures to stress them. They also incorporated tiny probes — such as oxygen sensors and small LED lights — to perform optogenetic experiments. For instance, they introduced genes into the cells that allowed them to control the neural activity using different-colored light pulses.

This platform then enabled scientists to perform complex studies of human tissue without directly involving humans or performing invasive testing. In theory, any person could donate a limited number of their cells (e.g., blood sample, skin biopsy). Scientists can then reprogram these cells to produce a tiny brain spheroid that shares the person’s genetic identity. The authors believe that, by combining this technology with a personalized medicine approach using human stem cell-derived brain cultures, they will be able to glean insights faster and generate better, novel interventions.

“The advances spurred by this research will offer a new frontier in the way we study and understand the brain,” said Shirley Ryan AbilityLab’s Dr. Colin Franz, co-lead author on the paper who led the testing of the cortical spheroids. “Now that the 3-D platform has been developed and validated, we will be able to perform more targeted studies on our patients recovering from neurological injury or battling a neurodegenerative disease.”

Yoonseok Park, postdoctoral fellow at Northwestern University and co-lead author, added, “This is just the beginning of an entirely new class of miniaturized, 3-D bioelectronic systems that we can construct to expand the capacity of the regenerative medicine field. For example, our next generation of device will support the formation of even more complex neural circuits from brain to muscle, and increasingly dynamic tissues like a beating heart.”

Current electrode arrays for tissue cultures are 2-D, flat and unable to match the complex structural designs found throughout nature, such as those found in the human brain. Moreover, even when a system is 3-D, it is extremely challenging to incorporate more than one type of material into a small 3-D structure. With this advance, however, an entire class of 3-D bioelectronics devices has been tailored for the field of regenerative medicine.

“Now, with our small, soft 3-D electronics, the capacity to build devices that mimic the complex biological shapes found in the human body is finally possible, providing a much more holistic understanding of a culture,” said Northwestern’s John Rogers, who led the technology development using technology similar to that found in phones and computers. “We no longer have to compromise function to achieve the optimal form for interfacing with our biology.”

As a next step, scientists will use the devices to better understand neurological disease, test drugs and therapies that have clinical potential, and compare different patient-derived cell models. This understanding will then enable a better grasp of individual differences that may account for the wide variation of outcomes seen in neurological rehabilitation.

“As scientists, our goal is to make laboratory research as clinically relevant as possible,” said Kristen Cotton, research assistant in Dr. Franz’s lab. “This 3-D platform opens the door to new experiments, discovery and scientific advances in regenerative neurorehabilitation medicine that have never been possible.”

Caption: Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids. Credit: Northwestern University

As for what brain organoids might be, Carl Zimmer in an Aug. 29, 2019 article for the New York Times provides an explanation,

Organoids Are Not Brains. How Are They Making Brain Waves?

Two hundred and fifty miles over Alysson Muotri’s head, a thousand tiny spheres of brain cells were sailing through space.

The clusters, called brain organoids, had been grown a few weeks earlier in the biologist’s lab here at the University of California, San Diego. He and his colleagues altered human skin cells into stem cells, then coaxed them to develop as brain cells do in an embryo.

The organoids grew into balls about the size of a pinhead, each containing hundreds of thousands of cells in a variety of types, each type producing the same chemicals and electrical signals as those cells do in our own brains.

In July, NASA packed the organoids aboard a rocket and sent them to the International Space Station to see how they develop in zero gravity.

Now the organoids were stowed inside a metal box, fed by bags of nutritious broth. “I think they are replicating like crazy at this stage, and so we’re going to have bigger organoids,” Dr. Muotri said in a recent interview in his office overlooking the Pacific.

What, exactly, are they growing into? That’s a question that has scientists and philosophers alike scratching their heads.

On Thursday, Dr. Muotri and his colleagues reported that they have recorded simple brain waves in these organoids. In mature human brains, such waves are produced by widespread networks of neurons firing in synchrony. Particular wave patterns are linked to particular forms of brain activity, like retrieving memories and dreaming.

As the organoids mature, the researchers also found, the waves change in ways that resemble the changes in the developing brains of premature babies.

“It’s pretty amazing,” said Giorgia Quadrato, a neurobiologist at the University of Southern California who was not involved in the new study. “No one really knew if that was possible.”

But Dr. Quadrato stressed it was important not to read too much into the parallels. What she, Dr. Muotri and other brain organoid experts build are clusters of replicating brain cells, not actual brains.

If you have the time, I recommend reading Zimmer’s article in its entirety. Perhaps not coincidentally, Zimmer has an excerpt from his book titled “Lab-Grown Brain Organoids Aren’t Alive. But They’re Not Not Alive, Either.” published on Slate.com,

From Life’s Edge: The Search For What It Means To Be Alive by Carl Zimmer, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2021 by Carl Zimmer.

Cleber Trujillo led me to a windowless room banked with refrigerators, incubators, and microscopes. He extended his blue-gloved hands to either side and nearly touched the walls. “This is where we spend half our day,” he said.

In that room Trujillo and a team of graduate students raised a special kind of life. He opened an incubator and picked out a clear plastic box. Raising it above his head, he had me look up at it through its base. Inside the box were six circular wells, each the width of a cookie and filled with what looked like watered-down grape juice. In each well 100 pale globes floated, each the size of a housefly head.

Getting back to the research about monitoring brain organoids, here’s a link to and a citation for the paper about cortical spheroids,

Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids by Yoonseok Park, Colin K. Franz, Hanjun Ryu, Haiwen Luan, Kristen Y. Cotton, Jong Uk Kim, Ted S. Chung, Shiwei Zhao, Abraham Vazquez-Guardado, Da Som Yang, Kan Li, Raudel Avila, Jack K. Phillips, Maria J. Quezada, Hokyung Jang, Sung Soo Kwak, Sang Min Won, Kyeongha Kwon, Hyoyoung Jeong, Amay J. Bandodkar, Mengdi Han, Hangbo Zhao, Gabrielle R. Osher, Heling Wang, KunHyuck Lee, Yihui Zhang, Yonggang Huang, John D. Finan and John A. Rogers. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabf9153 DOI: 10.1126/sciadv.abf9153

This paper appears to be open access.

According to a March 22, 2021 posting on the Shirley Ryan AbilityLab website, the paper is featured on the front cover of Science Advances (vol. 7 no. 12).

A look back at 2020 on this blog and a welcome to 2021

Things past

A year later, I still don’t know what came over me, but during the last few days of 2019 I got the idea that I could write a 10-year (2010 – 2019) review of science culture in Canada. Somehow, two and a half months later, I managed to publish my 25,000+ word multi-part series.


Sadly, 2020 started on a somber note with this January 13, 2020 posting, In memory of those in the science, engineering, or technology communities returning to or coming to live or study in Canada on Flight PS752.

COVID-19 was mentioned and featured here a number of times throughout the year. I’m highlighting two of those postings. The first is a June 24, 2020 posting titled, Tiny sponges lure coronavirus away from lung cells. It’s a therapeutic approach that is not a vaccine but a way of neutralizing the virus. The idea is that the nanosponge is coated in the material that the virus seeks in a human cell. Once the virus locks onto the sponge, it is unable to seek out cells. If I remember rightly, the sponges along with the virus are disposed of by the body’s usual processes.

The second COVID-19 posting I’m highlighting is my first editorial opinion to be accepted by the Canadian Science Policy Centre (CSPC). I republished the piece here in a May 15, 2020 posting, which included all of my references. However, the magazine version is more attractively displayed in the CSPC Featured Editorial Series Volume 1, Issue 2, May 2020 PDF on pp. 31-32.

Artist Joseph Nechvatal reached out to me earlier this year regarding his viral symphOny (2006-2008), a 1 hour 40 minute collaborative electronic noise music symphony. It was featured in an April 7, 2020 posting, which seemed strangely à propos during a pandemic even though the work was focused on viral artificial life. You can access it for free at https://archive.org/details/ViralSymphony but the Internet Archive, where this is stored, is requesting donations.

Also on a vaguely related COVID-19 note, there’s my December 7, 2020 posting titled, Digital aromas? And a potpourri of ‘scents and sensibility’. As regular readers may know, I have a longstanding interest in scent and fragrances. The COVID-19 part of the posting (it’s not about losing your sense of smell) is in the subsection titled, Smelling like an old book. Apparently some folks are missing the smell of bookstores, and Powell’s Books has responded to that need with a new fragrance.

For anyone who may have missed it, I wrote an update on the CRISPR twin affair in my July 28, 2020 posting titled, July 2020 update on Dr. He Jiankui (the CRISPR twins) situation.

Finishing off with 2020, I wrote a commentary (mostly focused on the Canada chapter) about a book titled, Communicating Science: A Global Perspective in my December 10, 2020 posting. The book offers science communication perspectives from 39 different countries.

Things future

I have no doubt there will be delights ahead but, as they are in the realm of discovery, they are, at this point, unknown.

My future plans include a posting about trust and governance. This has come about since writing my Dec. 29, 2020 posting titled, “Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions” and stumbling across a reference to a December 15, 2020 article by Dr. Andrew Maynard titled, Why Trustworthiness Matters in Building Global Futures. Maynard’s focus was on a newly published report titled, Trust & Tech Governance.

I will also be considering the problematic aspects of science communication and my own shortcomings. On the heels of reading more than usually forthright discussions of racism in Canada across multiple media platforms, I was horrified to discover I had featured, without any caveats, work by a man who was deeply problematic with regard to his beliefs about race. He was a eugenicist, as well as a zoologist, naturalist, philosopher, physician, professor, marine biologist, and artist who coined many terms in biology, including ecology, phylum, phylogeny, and Protista; see his Wikipedia entry.

A Dec. 23, 2020 news release on EurekAlert (Scientists at Tel Aviv University develop new gene therapy for deafness) and a December 2020 article by Sarah Zhang for The Atlantic about prenatal testing and who gets born have me wanting to further explore how genetic testing and therapies will affect our concepts of ‘normality’. Fingers crossed I’ll be able to get Dr. Gregor Wolbring to answer a few questions for publication here. (Gregor is a tenured associate professor [in Alberta, Canada] at the University of Calgary’s Cumming School of Medicine and a scholar in the field of ‘ableism’. He is deeply knowledgeable about notions of ability vs. disability.)

As 2021 looms, I’m hoping to feature more art/sci (or sciart) postings, which is my segue to a more hopeful note about what 2021 will bring us,

The Knobbed Russet has a rough exterior, with creamy insides. Photo courtesy of William Mullan.

It’s an apple! This is one of the many images embedded in Annie Ewbank’s January 6, 2020 article about rare and beautiful apples for Atlas Obscura (featured on getpocket.com),

In early 2020, inside a bright Brooklyn gallery that is plastered in photographs of apples, William Mullan is being besieged with questions.

A writer is researching apples for his novel set in post-World War II New York. An employee of a fruit-delivery company, who covetously eyes the round table on which Mullan has artfully arranged apples, asks where to buy his artwork.

But these aren’t your Granny Smith’s apples. A handful of Knobbed Russets slumping on the table resemble rotting masses. Despite their brown, wrinkly folds, they’re ripe, with clean white interiors. Another, the small Roberts Crab, when sliced by Mullan through the middle to show its vermillion flesh, looks less like an apple than a Bing cherry. The entire lineup consists of apples assembled by Mullan, who, by publishing his fruit photographs in a book and on Instagram, is putting the glorious diversity of apples in the limelight.

Do go and enjoy! Happy 2021!