Category Archives: science

East/West collaboration on scholarship and imagination about humanity’s long-term future — six new fellows at the Berggruen Research Center at Peking University

According to a January 4, 2022 Berggruen Institute news release (also received via email), the Institute has appointed a new crop of fellows for its research center at Peking University,

The Berggruen Institute has announced six scientists and philosophers to serve as Fellows at the Berggruen Research Center at Peking University in Beijing, China. These eminent scholars will work together across disciplines to explore how the great transformations of our time may shift human experience and self-understanding in the decades and centuries to come.

The new Fellows are Chenjian Li, University Chair Professor at Peking University; Xianglong Zhang, professor of philosophy at Peking University; Xiaoli Liu, professor of philosophy at Renmin University of China; Jianqiao Ge, lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University; Xiaoping Chen, Director of the Robotics Laboratory at the University of Science and Technology of China; and Haidan Chen, associate professor of medical ethics and law at the School of Health Humanities at Peking University.

“Amid the pandemic, climate change, and the rest of the severe challenges of today, our Fellows are surmounting linguistic and cultural barriers to imagine positive futures for all people,” said Bing Song, Director of the China Center and Vice President of the Berggruen Institute. “Dialogue and shared understanding are crucial if we are to understand what today’s breakthroughs in science and technology really mean for the human community and the planet we all share.”

The Fellows will investigate deep questions raised by new understandings and capabilities in science and technology, exploring their implications for philosophy and other areas of study. Chenjian Li is examining the philosophical and ethical considerations of gene editing technology. Haidan Chen is exploring the social implications of brain/computer interface technologies in China, while Xiaoli Liu is studying philosophical issues arising from the intersections among psychology, neuroscience, artificial intelligence, and art.

Jianqiao Ge’s project considers the impact of artificial intelligence on the human brain, given how recently the brain evolved into its current form. Xianglong Zhang’s work explores the interplay between literary culture and the development of technology. Finally, Xiaoping Chen is developing a new concept for describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Fellows at the China Center meet monthly with the Institute’s Los Angeles-based Fellows. These fora provide an opportunity for all Fellows to share and discuss their work. Through this cross-cultural dialogue, the Institute is helping to ensure a continued high-level exchange of ideas among China, the United States, and the rest of the world about some of the deepest and most fundamental questions humanity faces today.

“Changes in our capability and understanding of the physical world affect all of humanity, and questions about their implications must be pondered at a cross-cultural level,” said Bing. “Through multidisciplinary dialogue that crosses the gulf between East and West, our Fellows are pioneering new thought about what it means to be human.”

Haidan Chen is associate professor of medical ethics and law at the School of Health Humanities at Peking University. She was a visiting postgraduate researcher at the Institute for the Study of Science Technology and Innovation (ISSTI), the University of Edinburgh; a visiting scholar at the Brocher Foundation, Switzerland; and a Fulbright visiting scholar at the Center for Biomedical Ethics, Stanford University. Her research interests embrace the ethical, legal, and social implications (ELSI) of genetics and genomics, and the governance of emerging technologies, in particular stem cells, biobanks, precision medicine, and brain science. Her publications appear in Social Science & Medicine, Bioethics, and other journals.

Xiaoping Chen is the director of the Robotics Laboratory at the University of Science and Technology of China. He also currently serves as the director of the Robot Technical Standard Innovation Base, an executive member of the Global AI Council, Chair of the Chinese RoboCup Committee, and a member of the International RoboCup Federation’s Board of Trustees. He has received the USTC’s Distinguished Research Presidential Award and won Best Paper at IEEE ROBIO 2016. His projects have won the IJCAI’s Best Autonomous Robot and Best General-Purpose Robot awards as well as twelve world championships at RoboCup. He proposed an intelligent technology pathway for robots based on Open Knowledge and the Rong-Cha principle, which has been implemented and tested in long-term research on the KeJia and JiaJia intelligent robot systems.

Jianqiao Ge is a lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University. Previously, she was a postdoctoral fellow at the University of Chicago; she has been Principal Investigator or Co-Investigator on more than 10 research grants supported by the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Beijing Municipal Science & Technology Commission. She has published more than 20 peer-reviewed articles in leading academic journals such as PNAS and the Journal of Neuroscience, and has been awarded two national patents. In 2008, by scanning the human brain with functional MRI, Ge and her collaborator were among the first to confirm that the human brain engages distinct neurocognitive strategies to comprehend human intelligence and artificial intelligence. Ge received her Ph.D. in psychology, a B.S. in physics, a double B.S. in mathematics and applied mathematics, and a double B.S. in economics from Peking University.

Chenjian Li is the University Chair Professor of Peking University. He also serves on the China Advisory Board of Eli Lilly and Company, the China Advisory Board of Cornell University, and the Rhodes Scholar Selection Committee. He is an alumnus of Peking University’s Biology Department, Peking Union Medical College, and Purdue University. He formerly served as Vice Provost of Peking University, Executive Dean of Yuanpei College, and Associate Dean of the School of Life Sciences at Peking University. Prior to his return to China, he was an associate professor at Weill Medical College of Cornell University and the Aidekman Endowed Chair of Neurology at Mount Sinai School of Medicine. Dr. Li’s academic research focuses on the molecular and cellular mechanisms of neurological diseases, cancer drug development, and gene editing and its philosophical and ethical considerations. Li also writes as a public intellectual on science and humanity, and his Chinese translation of Richard Feynman’s book What Do You Care What Other People Think? received the 2001 National Publisher’s Book Award.

Xiaoli Liu is professor of philosophy at Renmin University. She also directs the Chinese Society of Philosophy of Science. Her primary research interests are philosophy of mathematics, philosophy of science, and philosophy of cognitive science. Her main works are “Life of Reason: A Study of Gödel’s Thought,” “Challenges of Cognitive Science to Contemporary Philosophy,” and “Philosophical Issues in the Frontiers of Cognitive Science.” She edited “Symphony of Mind and Machine” and the book series “Mind and Cognition.” In 2003, she co-founded the “Mind and Machine workshop” with interdisciplinary scholars; it has held 18 consecutive annual meetings. Liu received her Ph.D. from Peking University and was a senior visiting scholar at Harvard University.

Xianglong Zhang is a professor of philosophy at Peking University. His research areas include Confucian philosophy, phenomenology, Western and Eastern comparative philosophy. His major works (in Chinese except where noted) include: Heidegger’s Thought and Chinese Tao of Heaven; Biography of Heidegger; From Phenomenology to Confucius; The Exposition and Comments of Contemporary Western Philosophy; The Exposition and Comments of Classic Western Philosophy; Thinking to Take Refuge: The Chinese Ancient Philosophies in the Globalization; Lectures on the History of Confucian Philosophy (four volumes); German Philosophy, German Culture and Chinese Philosophical Thinking; Home and Filial Piety: From the View between the Chinese and the Western.

About the Berggruen China Center
Breakthroughs in artificial intelligence and life science have led to the fourth scientific and technological revolution. The Berggruen China Center is a hub for East-West research and dialogue dedicated to the cross-cultural and interdisciplinary study of the transformations affecting humanity. Intellectual themes for research programs are focused on frontier sciences, technologies, and philosophy, as well as issues involving digital governance and globalization.

About the Berggruen Institute
The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world. To date, projects inaugurated at the Berggruen Institute have helped develop a youth jobs plan for Europe, fostered a more open and constructive dialogue between Chinese leadership and the West, strengthened the ballot initiative process in California, and launched Noema, a new publication that brings thought leaders from around the world together to share ideas. In addition, the Berggruen Prize, a $1 million award, is conferred annually by an independent jury to a thinker whose ideas are shaping human self-understanding to advance humankind.

You can find out more about the Berggruen China Center here and you can access a list along with biographies of all the Berggruen Institute fellows here.

Getting ready

I look forward to hearing about the projects from these thinkers.

Gene editing and ethics

I may have to reread some books in anticipation of Chenjian Li’s work on the philosophical and ethical considerations of gene editing technology. I wonder if there’ll be any reference to the He Jiankui affair.

(Briefly for those who may not be familiar with the situation, He claimed to be the first to gene edit babies. In November 2018, news about the twins, Lulu and Nana, was a sensation and He was roundly criticized for his work. I have not seen any information about how many babies were gene edited for He’s research; there could be as many as six. My July 28, 2020 posting provided an update. I haven’t stumbled across anything substantive since then.)

There are two books I recommend should you be interested in gene editing, as told through the lens of the He Jiankui affair. If you can, read both as that will give you a more complete picture.

In no particular order: Kevin Davies’ 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,” provides an extensive and accessible look at the science, the politics of scientific research, and some of the pressures on scientists of all countries; it’s an excellent introduction from an insider. Here’s more from Davies’ biographical sketch,

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome and The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. …

The other book is “The Mutant Project: Inside the Global Race to Genetically Modify Humans” (2020) by Eben Kirksey, an anthropologist who has an undergraduate degree in one of the sciences. He too provides scientific underpinning, but his focus is on the cultural and personal dimensions of the He Jiankui affair, on the culture of science research, irrespective of where it’s practiced, and on the culture associated with the DIY (do-it-yourself) Biology community. Here’s more from Kirksey’s biographical sketch,

EBEN KIRKSEY is an American anthropologist and Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia.

Brain/computer interfaces (BCI)

I’m happy to see that Haidan Chen will be exploring the social implications of brain/computer interface technologies in China. I haven’t seen much being done here in Canada but my December 23, 2021 posting, Your cyborg future (brain-computer interface) is closer than you think, highlights work being done at Imperial College London (ICL),

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

You might also find my September 17, 2020 posting has some useful information. Check under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead for another story about attachment to one’s brain implant and also the “Finally” subhead for more reading suggestions.

Artificial intelligence (AI), art, and the brain

I’ve lumped together three of the thinkers, Xiaoli Liu, Jianqiao Ge and Xianglong Zhang, as there is some overlap (in my mind, if nowhere else),

  • Liu’s work on philosophical issues as seen in the intersections of psychology, neuroscience, artificial intelligence, and art
  • Ge’s work on the evolution of the brain and the impact that artificial intelligence may have on it
  • Zhang’s work on the relationship between literary culture and the development of technology

A December 3, 2021 posting, True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read), is both a review of a recent episode of the Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, and a dive into a number of issues as can be seen under subheads such as “AI and Creativity,” “Kazuo Ishiguro?” and “Evolution.”

You may also want to check out my December 27, 2021 posting, Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary, for an eye-opening experience. If nothing else, just watch the embedded video.

This suggestion relates most closely to Ge’s and Zhang’s work. If you haven’t already come across it, there’s Walter J. Ong’s 1982 book, “Orality and Literacy: The Technologizing of the Word.” From the introductory page of the 2002 edition (PDF),

This classic work explores the vast differences between oral and literate cultures and offers a brilliantly lucid account of the intellectual, literary and social effects of writing, print and electronic technology. In the course of his study, Walter J. Ong offers fascinating insights into oral genres across the globe and through time and examines the rise of abstract philosophical and scientific thinking. He considers the impact of orality-literacy studies not only on literary criticism and theory but on our very understanding of what it is to be a human being, conscious of self and other.

In 2013, a 30th anniversary edition of the book was released and is still in print.

Philosophical traditions

I’m very excited to learn more about Xiaoping Chen’s work describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Should any of my readers have suggestions for introductory readings on these philosophical traditions, please do use the Comments option for this blog. In fact, if you have suggestions for other readings on these topics, I would be very happy to learn of them.

Congratulations to the six Fellows at the Berggruen Research Center at Peking University in Beijing, China. I look forward to reading articles about your work in the Berggruen Institute’s Noema magazine and, possibly, attending your online events.

Tree music

Hidden Life Radio livestreams music generated from trees (their biodata, that is). Kristin Toussaint describes the ‘radio station’ in her August 3, 2021 article for Fast Company (Note: Links have been removed),

Outside of a library in Cambridge, Massachusetts, an over-80-year-old copper beech tree is making music.

As the tree photosynthesizes and absorbs and evaporates water, a solar-powered sensor attached to a leaf measures the micro voltage of all that invisible activity. Sound designer and musician Skooby Laposky assigned a key and note range to those changes in this electric activity, turning the tree’s everyday biological processes into an ethereal song.

That music is available on Hidden Life Radio, an art project by Laposky, with assistance from the Cambridge Department of Public Works Urban Forestry, and funded in part by a grant from the Cambridge Arts Council. Hidden Life Radio also features the musical sounds of two other Cambridge trees: a honey locust and a red oak, both located outside of other Cambridge library branches. The sensors on these trees are solar-powered biodata sonification kits, a technology that has allowed people to turn all sorts of plant activity into music.

… Laposky has created a musical voice for these disappearing trees, and he hopes people tune into Hidden Life Radio and spend time listening to them over time. The music they produce occurs in real time, affected by the weather and whatever the tree is currently doing. Some days they might be silent, especially when there’s been several days without rain, and they’re dehydrated; Laposky is working on adding an archive that includes weather information, so people can go back and hear what the trees sound like on different days, under different conditions. The radio will play 24 hours a day until November, when the leaves will drop—a “natural cycle for the project to end,” Laposky says, “when there aren’t any leaves to connect to anymore.”

The 2021 season is over but you can find an archive of Hidden Life Radio livestreams here. Or, if you happen to be reading this page sometime after January 2022, you can try your luck and click here at Hidden Life Radio livestreams but remember, even if the project has started up again, the tree may not be making music when you check in. So, if you don’t hear anything the first time, try again.

Want to create your own biodata sonification project?

Toussaint’s article sent me on a search for more and I found a website where you can get biodata sonification kits. Sam Cusumano’s electricity for progress website offers lessons, as well as kits and more.
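If you’re curious how the voltage-to-music mapping might work in principle, here’s a minimal sketch in Python. The scale, the sensor readings, and the mapping rule are all my own assumptions for illustration; Laposky’s setup and Cusumano’s kits will differ.

```python
# A minimal sketch of biodata sonification: map changes in a plant sensor's
# micro-voltage readings onto notes of a scale. All numbers are invented.
C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI note numbers

def voltage_to_note(delta, max_delta=0.05):
    """Map a change in voltage (volts) to a note in the scale."""
    normalized = max(0.0, min(1.0, abs(delta) / max_delta))  # clamp to 0..1
    index = round(normalized * (len(C_MAJOR_PENTATONIC) - 1))
    return C_MAJOR_PENTATONIC[index]

readings = [0.312, 0.314, 0.319, 0.318, 0.331, 0.330]  # invented sensor data
notes = [voltage_to_note(b - a) for a, b in zip(readings, readings[1:])]
print(notes)  # [60, 62, 60, 64, 60]
```

In a real kit the notes would be sent to a synthesizer (over MIDI, say) rather than printed, and quiet stretches, like a dehydrated tree, would simply produce no note changes.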

Sophie Haigney’s February 21, 2020 article for NPR ([US] National Public Radio) highlights other plant music and more ways to tune in to and create it. (h/t Kristin Toussaint)

The Storywrangler, a tool exploring billions of social media messages, could predict political & financial turmoil

Being able to analyze Twitter messages (tweets) in real time is amazing given what I wrote in my January 16, 2013 posting, “Researching tweets (the Twitter kind),” about the US Library of Congress and its attempts to provide scholars with access to tweets,

At least one of the reasons no one has received access to the tweets is that a single search of the archived (2006-2010) tweets alone would take 24 hours, [emphases mine] …

So, bravo to the researchers at the University of Vermont (UVM). A July 16, 2021 news item on ScienceDaily makes the announcement,

For thousands of years, people looked into the night sky with their naked eyes — and told stories about the few visible stars. Then we invented telescopes. In 1840, the philosopher Thomas Carlyle claimed that “the history of the world is but the biography of great men.” Then we started posting on Twitter.

Now scientists have invented an instrument to peer deeply into the billions and billions of posts made on Twitter since 2008 — and have begun to uncover the vast galaxy of stories that they contain.

Caption: UVM scientists have invented a new tool: the Storywrangler. It visualizes the use of billions of words, hashtags and emoji posted on Twitter. In this example from the tool’s online viewer, three global events from 2020 are highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The new research was published in the journal Science Advances. Credit: UVM

A July 15, 2021 UVM news release (also on EurekAlert but published on July 16, 2021) by Joshua Brown, which originated the news item, provides more detail about the work,

“We call it the Storywrangler,” says Thayer Alshaabi, a doctoral student at the University of Vermont who co-led the new research. “It’s like a telescope to look — in real time — at all this data that people share on social media. We hope people will use it themselves, in the same way you might look up at the stars and ask your own questions.”

The new tool can give an unprecedented, minute-by-minute view of popularity, from rising political movements to box office flops; from the staggering success of K-pop to signals of emerging new diseases.

The story of the Storywrangler — a curation and analysis of over 150 billion tweets — and some of its key findings were published on July 16 [2021] in the journal Science Advances.

EXPRESSIONS OF THE MANY

The team of eight scientists who invented Storywrangler — from the University of Vermont, Charles River Analytics, and MassMutual Data Science [emphasis mine]– gather about ten percent of all the tweets made every day, around the globe. For each day, they break these tweets into single bits, as well as pairs and triplets, generating frequencies from more than a trillion words, hashtags, handles, symbols and emoji, like “Super Bowl,” “Black Lives Matter,” “gravitational waves,” “#metoo,” “coronavirus,” and “keto diet.”
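To give a concrete sense of what breaking tweets into “single bits, as well as pairs and triplets” means, here’s a minimal n-gram counting sketch in Python. The tweets are invented, and the real pipeline is far more sophisticated (handling hashtags, handles, and emoji across 150+ languages), so treat this as illustration only.

```python
# A minimal sketch of daily n-gram frequency counting, Storywrangler-style.
from collections import Counter

def ngrams(text, n):
    """Return the n-word sequences ("grams") in a string."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tweets = [  # invented stand-ins for one day's sample of tweets
    "black lives matter",
    "gravitational waves detected again",
    "black lives matter protest today",
]

counts = {n: Counter() for n in (1, 2, 3)}  # 1-, 2-, and 3-grams
for tweet in tweets:
    for n in counts:
        counts[n].update(ngrams(tweet.lower(), n))

print(counts[2].most_common(3))
# [('black lives', 2), ('lives matter', 2), ('gravitational waves', 1)]
```

Rank those counts day by day and you have the rise and fall of words and phrases that the tool visualizes.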

“This is the first visualization tool that allows you to look at one-, two-, and three-word phrases, across 150 different languages [emphasis mine], from the inception of Twitter to the present,” says Jane Adams, a co-author on the new study who recently finished a three-year position as a data-visualization artist-in-residence at UVM’s Complex Systems Center.

The online tool, powered by UVM’s supercomputer at the Vermont Advanced Computing Core, provides a powerful lens for viewing and analyzing the rise and fall of words, ideas, and stories each day among people around the world. “It’s important because it shows major discourses as they’re happening,” Adams says. “It’s quantifying collective attention.” Though Twitter does not represent the whole of humanity, it is used by a very large and diverse group of people, which means that it “encodes popularity and spreading,” the scientists write, giving a novel view of discourse not just of famous people, like political figures and celebrities, but also the daily “expressions of the many,” the team notes.

In one striking test of the vast dataset on the Storywrangler, the team showed that it could be used to potentially predict political and financial turmoil. They examined the percent change in the use of the words “rebellion” and “crackdown” in various regions of the world. They found that the rise and fall of these terms was significantly associated with change in a well-established index of geopolitical risk for those same places.
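As I read it, that test boils down to comparing two time series: the day-over-day percent change in a word’s usage frequency and the change in a risk index. Here’s a hedged sketch of such a comparison; all the numbers are made up, and the paper’s actual statistics are more involved.

```python
# A minimal sketch: correlate percent change in a word's usage frequency
# with percent change in a geopolitical-risk index (invented numbers).
def pct_change(series):
    """Day-over-day relative change."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rebellion_freq = [1.2e-6, 1.3e-6, 2.9e-6, 2.5e-6, 1.4e-6]  # invented
risk_index = [101.0, 103.0, 140.0, 131.0, 104.0]           # invented

print(pearson(pct_change(rebellion_freq), pct_change(risk_index)))
# A value near +1 would suggest the word tracks the index, as the team found.
```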

WHAT’S HAPPENING?

The global story now being written on social media brings billions of voices — commenting and sharing, complaining and attacking — and, in all cases, recording — about world wars, weird cats, political movements, new music, what’s for dinner, deadly diseases, favorite soccer stars, religious hopes and dirty jokes.

“The Storywrangler gives us a data-driven way to index what regular people are talking about in everyday conversations, not just what reporters or authors have chosen; it’s not just the educated or the wealthy or cultural elites,” says applied mathematician Chris Danforth, a professor at the University of Vermont who co-led the creation of the StoryWrangler with his colleague Peter Dodds. Together, they run UVM’s Computational Story Lab.

“This is part of the evolution of science,” says Dodds, an expert on complex systems and professor in UVM’s Department of Computer Science. “This tool can enable new approaches in journalism, powerful ways to look at natural language processing, and the development of computational history.”

How much a few powerful people shape the course of events has been debated for centuries. But, certainly, if we knew what every peasant, soldier, shopkeeper, nurse, and teenager was saying during the French Revolution, we’d have a richly different set of stories about the rise and reign of Napoleon. “Here’s the deep question,” says Dodds, “what happened? Like, what actually happened?”

GLOBAL SENSOR

The UVM team, with support from the National Science Foundation [emphasis mine], is using Twitter to demonstrate how chatter on distributed social media can act as a kind of global sensor system — of what happened, how people reacted, and what might come next. But other social media streams, from Reddit to 4chan to Weibo, could, in theory, also be used to feed Storywrangler or similar devices: tracing the reaction to major news events and natural disasters; following the fame and fate of political leaders and sports stars; and opening a view of casual conversation that can provide insights into dynamics ranging from racism to employment, emerging health threats to new memes.

In the new Science Advances study, the team presents a sample from the Storywrangler’s online viewer, with three global events highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The Storywrangler dataset records a sudden spike of tweets and retweets using the term “Soleimani” on January 3, 2020, when the United States assassinated the general; the strong rise of “coronavirus” and the virus emoji over the spring of 2020 as the disease spread; and a burst of use of the hashtag “#BlackLivesMatter” on and after May 25, 2020, the day George Floyd was murdered.

“There’s a hashtag that’s being invented while I’m talking right now,” says UVM’s Chris Danforth. “We didn’t know to look for that yesterday, but it will show up in the data and become part of the story.”

Here’s a link to and a citation for the paper,

Storywrangler: A massive exploratorium for sociolinguistic, cultural, socioeconomic, and political timelines using Twitter by Thayer Alshaabi, Jane L. Adams, Michael V. Arnold, Joshua R. Minot, David R. Dewhurst, Andrew J. Reagan, Christopher M. Danforth and Peter Sheridan Dodds. Science Advances 16 Jul 2021: Vol. 7, no. 29, eabe6534. DOI: 10.1126/sciadv.abe6534

This paper is open access.

A couple of comments

I’m glad to see they are looking at phrases in many different languages, although I do experience some hesitation when I consider the two companies involved in this research with the University of Vermont.

Charles River Analytics and MassMutual Data Science would not have been my first guess for corporate involvement but on re-examining the subhead and noting this: “potentially predict political and financial turmoil”, they make perfect sense. Charles River Analytics provides “Solutions to serve the warfighter …”, i.e., soldiers/the military, and MassMutual is an insurance company with a dedicated ‘data science space’ (from the MassMutual Explore Careers Data Science webpage),

What are some key projects that the Data Science team works on?

Data science works with stakeholders throughout the enterprise to automate or support decision making when outcomes are unknown. We help determine the prospective clients that MassMutual should market to, the risk associated with life insurance applicants, and which bonds MassMutual should invest in. [emphases mine]

Of course. The military and financial services. Delightfully, this research is at least partially (mostly?) funded on the public dime by the US National Science Foundation.

Futures exhibition/festival with fish skin fashion and more at the Smithsonian (Washington, DC), Nov. 20, 2021 to July 6, 2022

Fish leather

Before getting to Futures, here’s a brief excerpt from a June 11, 2021 Smithsonian Magazine exhibition preview article by Gia Yetikyel about one of the contributors, Elisa Palomino-Perez (Note: A link has been removed),

Elisa Palomino-Perez sheepishly admits to believing she was a mermaid as a child. Growing up in Cuenca, Spain in the 1970s and ‘80s, she practiced synchronized swimming and was deeply fascinated with fish. Now, the designer’s love for shiny fish scales and majestic oceans has evolved into an empowering mission, to challenge today’s fashion industry to be more sustainable, by using fish skin as a material.

Luxury fashion is no stranger to the artist, who has worked with designers like Christian Dior, John Galliano and Moschino in her 30-year career. For five seasons in the early 2000s, Palomino-Perez had her own fashion brand, inspired by Asian culture and full of color and embroidery. It was while heading a studio for Galliano in 2002 that she first encountered fish leather: a material made when the skin of tuna, cod, carp, catfish, salmon, sturgeon, tilapia or pirarucu gets stretched, dried and tanned.

The history of using fish leather in fashion is a bit murky. The material does not preserve well in the archeological record, and it’s been often overlooked as a “poor person’s” material due to the abundance of fish as a resource. But Indigenous groups living on coasts and rivers from Alaska to Scandinavia to Asia have used fish leather for centuries. Icelandic fishing traditions can even be traced back to the ninth century. While assimilation policies, like banning native fishing rights, forced Indigenous groups to change their lifestyle, the use of fish skin is seeing a resurgence. Its rise in popularity in the world of sustainable fashion has led to an overdue reclamation of tradition for Indigenous peoples.

In 2017, Palomino-Perez embarked on a PhD in Indigenous Arctic fish skin heritage at London College of Fashion, which is a part of the University of the Arts London (UAL), where she received her Master of Arts in 1992. She now teaches at Central Saint Martins at UAL, while researching different ways of crafting with fish skin and working with Indigenous communities to carry on the honored tradition.

Yetikyel’s article is fascinating (apparently Nike has used fish leather in one of its sports shoes) and I encourage you to read her June 11, 2021 article, which also covers the history of fish leather use amongst indigenous peoples of the world.

I did some digging and found a few more stories about fish leather. The earlier one is a Canadian Broadcasting Corporation (CBC) November 16, 2017 online news article by Jane Adey,

Designer Arndis Johannsdottir holds up a stunning purse, decorated with shiny strips of gold and silver leather at Kirsuberjatred, an art and design store in downtown Reykjavik, Iceland.

The purse is one of many in a colourful window display that’s drawing in buyers.

Johannsdottir says customers’ eyes often widen when they discover the metallic material is fish skin. 

Johannsdottir, a fish-skin designing pioneer, first came across the product 35 years ago.

She was working as a saddle smith when a woman came into her shop with samples of fish skin her husband had tanned after the war. Hundreds of pieces had been lying in a warehouse for 40 years.

“Nobody wanted it because plastic came on the market and everybody was fond of plastic,” she said.

“After 40 years, it was still very, very strong and the colours were beautiful and … I fell in love with it immediately.”

Johannsdottir bought all the skins the woman had to offer, gave up saddle making and concentrated on fashionable fish skin.

Adey’s November 16, 2017 article goes on to mention another Icelandic fish leather business looking to make fish leather a fashion staple.

Chloe Williams’s April 28, 2020 article for Hakai Magazine explores the process of making fish leather and the renewed interest in the craft,

Tracy Williams slaps a plastic cutting board onto the dining room table in her home in North Vancouver, British Columbia. Her friend, Janey Chang, has already laid out the materials we will need: spoons, seashells, a stone, and snack-sized ziplock bags filled with semi-frozen fish. Williams says something in Squamish and then translates for me: “You are ready to make fish skin.”

Chang peels a folded salmon skin from one of the bags and flattens it on the table. “You can really have at her,” she says, demonstrating how to use the edge of the stone to rub away every fiber of flesh. The scales on the other side of the skin will have to go, too. On a sockeye skin, they come off easily if scraped from tail to head, she adds, “like rubbing a cat backwards.” The skin must be clean, otherwise it will rot or fail to absorb tannins that will help transform it into leather.

Williams and Chang are two of a scant but growing number of people who are rediscovering the craft of making fish skin leather, and they’ve agreed to teach me their methods. The two artists have spent the past five or six years learning about the craft and tying it back to their distinct cultural perspectives. Williams, a member of the Squamish Nation—her ancestral name is Sesemiya—is exploring the craft through her Indigenous heritage. Chang, an ancestral skills teacher at a Squamish Nation school, who has also begun teaching fish skin tanning in other BC communities, is linking the craft to her Chinese ancestry.

Before the rise of manufactured fabrics, Indigenous peoples from coastal and riverine regions around the world tanned or dried fish skins and sewed them into clothing. The material is strong and water-resistant, and it was essential to survival. In Japan, the Ainu crafted salmon skin into boots, which they strapped to their feet with rope. Along the Amur River in northeastern China and Siberia, Hezhen and Nivkh peoples turned the material into coats and thread. In northern Canada, the Inuit made clothing, and in Alaska, several peoples including the Alutiiq, Athabascan, and Yup’ik used fish skins to fashion boots, mittens, containers, and parkas. In the winter, Yup’ik men never left home without qasperrluk—loose-fitting, hooded fish skin parkas—which could double as shelter in an emergency. The men would prop up the hood with an ice pick and pin down the edges to make a tent-like structure.

On a Saturday morning, I visit Aurora Skala in Saanich on Vancouver Island, British Columbia, to learn about the step after scraping and tanning: softening. Skala, an anthropologist working in language revitalization, has taken an interest in making fish skin leather in her spare time. When I arrive at her house, a salmon skin that she has tanned in an acorn infusion—a cloudy, brown liquid now resting in a jar—is stretched out on the kitchen counter, ready to be worked.

Skala dips her fingers in a jar of sunflower oil and rubs it on her hands before massaging it into the skin. The skin smells only faintly of fish; the scent reminds me of salt and smoke, though the skin has been neither salted nor smoked. “Once you start this process, you can’t stop,” she says. If the skin isn’t worked consistently, it will stiffen as it dries.

Softening the leather with oil takes about four hours, Skala says. She stretches the skin between clenched hands, pulling it in every direction to loosen the fibers while working in small amounts of oil at a time. She’ll also work her skins across other surfaces for extra softening; later, she’ll take this piece outside and rub it back and forth along a metal cable attached to a telephone pole. Her pace is steady, unhurried, soothing. Back in the day, people likely made fish skin leather alongside other chores related to gathering and processing food or fibers, she says. The skin will be done when it’s soft and no longer absorbs oil.

On to the exhibition.

Futures (November 20, 2021 to July 6, 2022 at the Smithsonian)

A February 24, 2021 Smithsonian Magazine article by Meilan Solly serves as an announcement for the Futures exhibition/festival (Note: Links have been removed),

When the Smithsonian’s Arts and Industries Building (AIB) opened to the public in 1881, observers were quick to dub the venue—then known as the National Museum—America’s “Palace of Wonders.” It was a fitting nickname: Over the next century, the site would go on to showcase such pioneering innovations as the incandescent light bulb, the steam locomotive, Charles Lindbergh’s Spirit of St. Louis and space-age rockets.

“Futures,” an ambitious, immersive experience set to open at AIB this November, will act as a “continuation of what the [space] has been meant to do” from its earliest days, says consulting curator Glenn Adamson. “It’s always been this launchpad for the Smithsonian itself,” he adds, paving the way for later museums as “a nexus between all of the different branches of the [Institution].” …

Part exhibition and part festival, “Futures”—timed to coincide with the Smithsonian’s 175th anniversary—takes its cue from the world’s fairs of the 19th and 20th centuries, which introduced attendees to the latest technological and scientific developments in awe-inspiring celebrations of human ingenuity. Sweeping in scale (the building-wide exploration spans a total of 32,000 square feet) and scope, the show is set to feature historic artifacts loaned from numerous Smithsonian museums and other institutions, large-scale installations, artworks, interactive displays and speculative designs. It will “invite all visitors to discover, debate and delight in the many possibilities for our shared future,” explains AIB director Rachel Goslins in a statement.

“Futures” is split into four thematic halls, each with its own unique approach to the coming centuries. “Futures Past” presents visions of the future imagined by prior generations, as told through objects including Alexander Graham Bell’s experimental telephone, an early android and a full-scale Buckminster Fuller geodesic dome. “In hindsight, sometimes [a prediction is] amazing,” says Adamson, who curated the history-centric section. “Sometimes it’s sort of funny. Sometimes it’s a little dismaying.”

“Futures That Work” continues to explore the theme of technological advancement, but with a focus on problem-solving rather than the lessons of the past. Climate change is at the fore of this section, with highlighted solutions ranging from Capsula Mundi’s biodegradable burial urns to sustainable bricks made out of mushrooms and purely molecular artificial spices that cut down on food waste while preserving natural resources.

“Futures That Inspire,” meanwhile, mimics AIB’s original role as a place of wonder and imagination. “If I were bringing a 7-year-old, this is probably where I would take them first,” says Adamson. “This is where you’re going to be encountering things that maybe look a bit more like science fiction”—for instance, flying cars, self-sustaining floating cities and Afrofuturist artworks.

The final exhibition hall, “Futures That Unite,” emphasizes human relationships, discussing how connections between people can produce a more equitable society. Among others, the list of featured projects includes (Im)possible Baby, a speculative design endeavor that imagines what same-sex couples’ children might look like if they shared both parents’ DNA, and Not The Only One (N’TOO), an A.I.-assisted oral history project. [all emphases mine]

I haven’t done justice to Solly’s February 24, 2021 article, which features embedded images and offers a more hopeful view of the future than is currently the fashion.

Futures asks: Would you like to plan the future?

Nate Berg’s November 22, 2021 article for Fast Company features an interactive urban planning game that’s part of the Futures exhibition/festival,

The Smithsonian Institution wants you to imagine the almost ideal city block of the future. Not the perfect block, not utopia, but the kind of urban place where you get most of what you want, and so does everybody else.

Call it urban design by compromise. With a new interactive multiplayer game, the museum is hoping to show that the urban spaces of the future can achieve mutual goals only by being flexible and open to the needs of other stakeholders.

The game is designed for three players, each taking the role of the city’s mayor, a real estate developer, or an ecologist. The roles each have their own primary goals – the mayor wants a well-served populace, the developer wants to build successful projects, and the ecologist wants the urban environment to coexist with the natural environment. Each role takes turns adding to the block, either in discrete projects or by amending what another player has contributed. Options are varied, but include everything from traditional office buildings and parks to community centers and algae farms. The players each try to achieve their own goals on the block, while facing the reality that other players may push the design in unexpected directions. These tradeoffs and their impact on the block are explained by scores on four basic metrics: daylight, carbon footprint, urban density, and access to services. How each player builds onto the block can bring scores up or down.

To create the game, the Smithsonian teamed up with Autodesk, the maker of architectural design tools like AutoCAD, an industry standard. Autodesk developed a tool for AI-based generative design that offers up options for a city block’s design, using computing power to make suggestions on what could go where and how aiming to achieve one goal, like boosting residential density, might detract from or improve another set of goals, like creating open space. “Sometimes you’ll do something that you think is good but it doesn’t really help the overall score,” says Brian Pene, director of emerging technology at Autodesk. “So that’s really showing people to take these tradeoffs and try attributes other than what achieves their own goals.” The tool is meant to show not how AI can generate the perfect design, but how the differing needs of various stakeholders inevitably require some tradeoffs and compromises.
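To make the tradeoff mechanics concrete, here’s a minimal sketch of a multi-metric scoring scheme of the kind the game describes. The projects, metric effects, and scoring below are my own invented stand-ins, not Autodesk’s generative-design model.

```python
# A minimal sketch of multi-metric tradeoff scoring for a city block.
# Every project nudges four metrics up or down (all values invented).
METRICS = ("daylight", "carbon", "density", "services")

PROJECTS = {
    "office_tower":     {"daylight": -2, "carbon": -1, "density": 3, "services": 1},
    "park":             {"daylight": 2, "carbon": 2, "density": -1, "services": 0},
    "algae_farm":       {"daylight": -1, "carbon": 3, "density": 0, "services": 0},
    "community_center": {"daylight": 0, "carbon": 0, "density": 1, "services": 3},
}

def block_scores(chosen):
    """Sum each metric across every project placed on the block."""
    return {m: sum(PROJECTS[p][m] for p in chosen) for m in METRICS}

# The developer's office tower plus the mayor's community center:
print(block_scores(["office_tower", "community_center"]))
# {'daylight': -2, 'carbon': -1, 'density': 4, 'services': 4}
```

Each player would then weight these four numbers differently, which is where the compromise comes in: the park that rescues the daylight score costs the developer density.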

Futures online and in person

Here are links to Futures online and information about visiting in person,

For its 175th anniversary, the Smithsonian is looking forward.

What do you think of when you think of the future? FUTURES is the first building-wide exploration of the future on the National Mall. Designed by the award-winning Rockwell Group, FUTURES spans 32,000 square feet inside the Arts + Industries Building. Now on view until July 6, 2022, FUTURES is your guide to a vast array of interactives, artworks, technologies, and ideas that are glimpses into humanity’s next chapter. You are, after all, only the latest in a long line of future makers.

Smell a molecule. Clean your clothes in a wetland. Meditate with an AI robot. Travel through space and time. Watch water being harvested from air. Become an emoji. The FUTURES is yours to decide, debate, delight. We invite you to dream big, and imagine not just one future, but many possible futures on the horizon—playful, sustainable, inclusive. In moments of great change, we dare to be hopeful. How will you create the future you want to live in?

Happy New Year!

Two (very loud) new species in Australia: the Slender Bleating Tree Frog and the Screaming Tree Frog

Slender Bleating Tree Frog (H.B. Hines) [downloaded from https://www.newcastle.edu.au/newsroom/featured/screaming-for-attention-surprise-discovery-of-two-new-and-very-loud-frog-species]

A November 22, 2021 item on phys.org announces two ‘new to science’ frog species in Australia,

Scientists from the University of Newcastle [Australia], Australian Museum, South Australian Museum, and Queensland National Parks and Wildlife have found and described two new, very loud frog species from eastern Australia: the Slender Bleating Tree Frog, Litoria balatus, and Screaming Tree Frog, Litoria quiritatus.

Published today [November 22, 2021] in Zootaxa, the newly described Slender Bleating Tree Frog is present in Queensland, while the Screaming Tree Frog occurs from around Taree in NSW [New South Wales] to just over the border in Victoria.

Scientifically described with the help of citizen scientists and their recordings through the Australian Museum’s FrogID app, the new frog species were once thought to be one species [emphasis mine], the Bleating Tree Frog, Litoria dentata.

A November 22, 2021 University of Newcastle press release, which originated the news item, has a great headline and more details about the ‘new’ frog species (Note: Links have been removed; Curious about what they sound like? Check out Dr. Jodi Rowley’s Nov. 22, 2021 posting for the Australian Museum blog for embedded video and audio files),

Screaming for attention: Surprise discovery of two new – and very loud – frog species

…

Australian Museum herpetologist and lead scientist on the groundbreaking FrogID project, Dr Jodi Rowley, said that the Bleating Tree Frog is well known to residents along the east coast of Australia for its extremely loud, piercing, almost painful call.

“These noisy frog bachelors are super loud when they are trying to woo their mates,” Rowley said.

The scientists analysed many calls submitted to the FrogID project from across Queensland and NSW to differentiate between the calls.

“Our examination revealed that their calls differ slightly in how long, how high-pitched and how rapid-fire they are. The Slender Bleating Tree Frog has the shortest, most rapid-fire and highest pitched calls,” Rowley explained.

Chief Research Scientist of Evolutionary Biology, South Australian Museum, Professor Steven Donnellan said that genetic work was the first clue that there are actually three species.

“Although similar in appearance, and in their piercing calls, the frogs are genetically very different. I’m still amazed that it’s taken us so long to discover that the loudest frog in Australia is not one but three species,” Professor Donnellan said.

“How many more undescribed species in the ‘quiet achiever’ category are awaiting their scientific debut?”

The three species vary subtly in appearance. The Slender Bleating Tree Frog, as its name suggests, is slender in appearance, and has a white line extending down its side, and males have a distinctly black vocal sac.

The Screaming Tree Frog isn’t nearly as slender, doesn’t have the white line extending down its side, and males have a bright yellow vocal sac. In the breeding season, the entire body of males of the Screaming Tree Frog also tend to turn a lemon yellow.

The Robust Bleating Tree Frog is most similar in appearance to the Screaming Tree Frog, but males have a brownish vocal sac that turns a dull yellow or yellowish brown when fully inflated.

Professor Michael Mahony of the University of Newcastle’s School of Environmental and Life Sciences – who over his long career has developed a cryopreservation method and the first genome bank for Australian frogs – said the three closely-related species are relatively common and widespread.

“They are also all at least somewhat tolerant of modified environments, being recorded as part of the FrogID project relatively often in backyards and paddocks, as well as more natural habitats,” Professor Mahony said.

Dr Rowley noted that these new frog species bring the total number of native frog species known from Australia to 246, including the recently recognised Gurrumul’s Toadlet and the Wollumbin Pouched Frog.

“The research and help from our citizen scientists highlights the valuable contribution that everyone can make to better understand and conserve our frogs,” Rowley said.

Here’s a link to and a citation for the paper,

Two new frog species from the Litoria rubella species group from eastern Australia by J. J. L. Rowley, M. J. Mahony, H. B. Hines, S. Myers, L.C. Price, G.M. Shea, S. C. Donnellan. Zootaxa, 5071(1), 1–41. DOI: https://doi.org/10.11646/zootaxa.5071.1.1 Published November 22, 2021

This paper appears to be open access.

You can find out more about the FrogID project here (I first mentioned it in an August 2, 2021 posting featuring a sadder frog story).

Virgin birth in a Sardinian aquarium and whistled languages could help us understand dolphins

A virgin birth story seems particularly apt at this time of the year (as I was taught the story, Jesus was born of a virgin birth on Christmas Day). As for the whistled language story, that’s pure self-indulgence.

Virgin shark birth

From an August 26, 2021 article by Harry Baker for Live Science (Note: Links have been removed),

A shark’s rare “virgin birth” in an Italian aquarium may be the first of its kind, scientists say.

The female baby smoothhound shark (Mustelus mustelus) — known as Ispera, or “hope” in *Sardinian* — was recently born at the Cala Gonone Aquarium in Sardinia to a mother that has spent the past decade sharing a tank with one other female and no males, Newsweek reported.

This rare phenomenon, known as parthenogenesis, is the result of females’ ability to self-fertilize their own eggs in extreme scenarios. Parthenogenesis has been observed in more than 80 vertebrate species — including sharks, fish and reptiles — but this may be the first documented occurrence in a smoothhound shark, according to Newsweek.

“It has been documented in quite a few species of sharks and rays now,” Demian Chapman, director of the sharks and rays conservation program at Mote Marine Laboratory & Aquarium in Florida, told Live Science. “But it is difficult to detect in the wild, so we really only know about it from captive animals,” said Chapman, who has led several studies on shark parthenogenesis.

A September 2, 2021 article by Louisa Wright for DW.com provides additional details (Note: Links have been removed),

To procreate, most species require an egg to be fertilized by a sperm. That’s the case with sharks, too. But some animals can produce offspring all by themselves. This is called parthenogenesis.

The term comes from the Greek words parthenos, meaning “virgin,” and genesis, meaning “origin.”

The case in Italy could be the first time this “immaculate conception” has occurred in smooth-hound sharks, at least in captivity.

… scientists still don’t know how often it happens, says Kevin Feldheim, a researcher at the Field Museum in Chicago, who researches the mating habits of sharks. “We don’t know how common it is and the handful of cases we have seen have mostly taken place in an aquarium setting,” Feldheim told DW.

One study from the Field Museum discovered parthenogenesis in a wild population of smalltooth sawfish, a type of ray. This was the first time a vertebrate (animals with backbones inside their body), which usually reproduces the conventional way with a mate, was found to reproduce asexually in the wild, Feldheim said.

Whistling could give insight into dolphin communication

A September 21, 2021 news item on phys.org announces research into how whistled languages might help us understand dolphins better,

Whistling while you work isn’t just a distraction for some people. More than 80 cultures employ a whistled form of their native language to communicate over long distances. A multidisciplinary team of scientists believe that some of these whistled languages can serve as a model for elucidating how information may be encoded in dolphin whistle communication. They made their case in a new paper published in the journal Frontiers in Psychology.

A September 21, 2021 Frontiers [open access publishers] news release on EurekAlert explains how whistled languages might provide a key to understanding dolphin communication,

Whistled human speech mostly evolved in places where people live in rugged terrain, such as mountains or dense forest, because the sounds carry much farther than ordinary speech or even shouting. While these whistled languages vary by region and culture, the basic principle is the same: People simplify words, syllable by syllable, into whistled melodies.

Trained whistlers can understand an amazing amount of information. In whistled Turkish, for example, common whistled sentences are understood up to 90 percent of the time. This ability to extract meaning from whistled speech has attracted linguists and other researchers interested in investigating the intricacies of how the human brain processes and even creates language.

The idea that human whistled speech could also be a model for how mammals like bottlenose dolphins communicate first emerged in the 1960s with work by René-Guy Busnel, a French researcher who pioneered the study of whistled languages. More recently, some of Busnel’s former colleagues have teamed up to explore the potential synergy between bottlenose dolphins and humans, which have the largest brains relative to body size on the planet.

While humans and dolphins produce sounds and convey information differently, the structure and attributes found across human whistle languages may provide insights as to how bottlenose dolphins encode complex information, according to coauthor Dr Diana Reiss, a professor of psychology at Hunter College in the United States whose research focuses on understanding cognition and communication in dolphins and other cetaceans.

Lead author Dr Julien Meyer, a linguist in the Gipsa Lab at the French national research center (CNRS), offered this example: The ability of a listener to decode human language or whistled speech relies on the listener’s language competency, such as understanding phonemes, a unit of sound that can distinguish one word from another. However, images of sounds called sonograms are not always segmented by silences between these units in human whistled speech.

“By contrast, scientists trying to decode the whistled communication of dolphins and other whistling species often categorize whistles based on the silent intervals between whistles,” Reiss noted. In other words, researchers may need to rethink how they categorize whistled animal communication based on what the sonograms reveal about how information is conveyed structurally in human whistled speech.

Meyer, Reiss and coauthor Dr Marcelo Magnasco, a biophysicist and professor at Rockefeller University, plan to apply this and other insights discussed in their paper to develop new techniques to analyze dolphin whistles. They will leverage dolphin whistle data compiled by Reiss and Magnasco with a database on whistled speech that Meyer has been collecting since 2003 with the CNRS, the Collegium of Lyon, the Museu Paraense Emílio Goeldi in Brazil and several nonprofit research associations focused on whistled and instrumental speech (The World Whistles, Yo Silbo, Silbo herreño). 

“On these data, for example, we will develop new algorithms and test some hypotheses about combinatorial structure,” Meyer said, referring to the building blocks of language like phonemes that can be combined to impart meaning. 

Magnasco noted that scientists already use machine learning and AI to help track dolphins in videos and even to identify dolphin calls. However, Reiss said, to have an AI algorithm capable of “deciphering” dolphin whistle communication, “we would need to know what the minimum unit of meaningful sound is, how they are organized, and how they function.”
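To illustrate the convention Reiss describes, categorizing whistles by the silent intervals between them, here’s a minimal sketch of silence-based segmentation. The amplitude values and thresholds are invented, and real bioacoustics pipelines work on spectrograms rather than raw samples like these.

```python
# A minimal sketch of silence-based segmentation: group loud samples into
# whistle "units," splitting at runs of near-silent samples (invented data).
def segment_by_silence(samples, threshold=0.1, min_gap=3):
    """Split a sample stream at every run of >= min_gap quiet samples."""
    segments, current, quiet = [], [], 0
    for s in samples:
        if abs(s) < threshold:
            quiet += 1
            if quiet == min_gap and current:
                segments.append(current)
                current = []
        else:
            quiet = 0
            current.append(s)
    if current:
        segments.append(current)
    return segments

signal = [0.0, 0.5, 0.6, 0.4, 0.0, 0.0, 0.0, 0.7, 0.8, 0.0]
print(segment_by_silence(signal))
# [[0.5, 0.6, 0.4], [0.7, 0.8]]
```

The authors’ point, as I understand it, is that human whistled speech carries meaningful structure within such units, so an algorithm that only counts the gaps may be slicing dolphin whistles at the wrong joints.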

Here’s a link to and a citation for the paper,

The Relevance of Human Whistled Languages for the Analysis and Decoding of Dolphin Communication by Julien Meyer, Marcelo O. Magnasco, and Diana Reiss. Front. Psychol., 21 September 2021 DOI: https://doi.org/10.3389/fpsyg.2021.689501

This paper is open access.

*December 30, 2021: “The female baby smoothhound shark (Mustelus mustelus) — known as Ispera, or ‘hope’ in Maltese …” was corrected to “… ‘hope’ in Sardinian ….” When you think about it, it makes a lot more sense than naming a special baby shark in a language not native to where it was born. Thank you to Carla and her partner, who is from Sardinia!*

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

Programme host David Suzuki’s script was well written, and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts, who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko quite moving when he described his relationship, which is not unique; it seems some 4,000 men have ‘wed’ their digital companions. You can read about that and more on the CBC webpage mentioned above.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators is a philosopher or seems inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed; e.g., one woman with an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’, describing a chat where her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted that Akihiko’s wife is described as ‘perfect’; I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over which you have control. But perhaps that’s alright, since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?”, we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post, ‘Finishing Beethoven’s unfinished 10th Symphony,’ for more information on the project from the technical perspective of Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University.

Briefly, Beethoven died before completing his 10th symphony and a number of computer scientists, musicologists, AI, and musicians collaborated to finish the symphony.)

The one listener shown in the hall during a performance, Felix Mayer, a music professor at the Technical University of Munich, doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: a set of probabilities in which an algorithm chooses which note comes next.
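For readers wondering what choosing the next note “based on probability” means in practice, here is a bare-bones sketch of the general idea: a first-order Markov chain with an invented transition table. The actual Beethoven X system was far more elaborate; this only illustrates the principle.

```python
import random

# Invented transition probabilities: given the current note, how likely
# is each possible next note? (Purely illustrative values.)
transitions = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"E": 0.6, "C": 0.4},
    "E": {"G": 0.7, "C": 0.3},
    "G": {"C": 1.0},
}

def next_note(current):
    """Draw the next note according to the transition probabilities."""
    notes, weights = zip(*transitions[current].items())
    return random.choices(notes, weights=weights, k=1)[0]

melody = ["C"]
for _ in range(7):
    melody.append(next_note(melody[-1]))
print(" ".join(melody))  # e.g. "C D E G C E G C"
```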

Another artist was also represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than as a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things episode ‘The Machine That Feels’ puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s words and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
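Here is a minimal sketch of that start-a-sentence-and-let-the-model-finish-it workflow. I’m using the small, publicly available GPT-2 model via the Hugging Face transformers library purely as a stand-in; the source material doesn’t say which model Google’s researchers actually tuned, and the prompt is a hypothetical Coupland-style opener.

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a stand-in here; Google's actual model and tuning are not public.
generator = pipeline("text-generation", model="gpt2")

prompt = "The class of 2030 will"  # hypothetical opener, not from the project
for result in generator(prompt, max_new_tokens=20,
                        num_return_sequences=3, do_sample=True):
    print(result["generated_text"])
```

The human-in-the-loop part, combing through the outputs for “gems,” is exactly what such a script cannot do, which is presumably where Coupland’s claim of authorship rests.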

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
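For what it’s worth, the cut-up itself is so mechanical that it fits in a few lines of code. A toy sketch (my own, not Burroughs-approved):

```python
import random

def cut_up(text, fragment_len=4, seed=None):
    """Chop a text into fixed-length word fragments, shuffle them,
    and paste the pieces back together -- a crude digital cut-up."""
    words = text.split()
    fragments = [words[i:i + fragment_len]
                 for i in range(0, len(words), fragment_len)]
    random.Random(seed).shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

print(cut_up("All writing is in fact cut-ups: a collage of words "
             "read heard overheard", seed=1))
```

Seen this way, the interesting question isn’t whether a machine can rearrange text (it plainly can) but what the newer language-model approach adds beyond scale.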

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI on more black faces so it can learn. Perhaps more representative management and coding teams in technology companies would help?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation: did science have any morality? (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values; e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s artificial intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 2022.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, a large language model developed by OpenAI. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questions about what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

What the story about the xenobots doesn’t say is that we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, as mentioned in the programme, constitutes an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight is a senior writer covering artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
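In the computer-science sense, the technique is easy to illustrate: game developers, for example, sometimes make an AI opponent deliberately blunder so that human players can win. A minimal sketch (the function name and blunder rate are my own inventions):

```python
import random

def dumbed_down_move(best_move, legal_moves, blunder_rate=0.2):
    """Deliberately degrade an AI player: with probability `blunder_rate`,
    throw away the engine's best move and play a random legal one."""
    if random.random() < blunder_rate:
        return random.choice(legal_moves)
    return best_move

# e.g. dumbed_down_move("Qxf7#", ["Qxf7#", "a3", "h4"])
```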

Finally

The episode certainly got me thinking, if not quite in the way its producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.

Lost Women of Science

Both an organization and a podcast series, Lost Women of Science is preparing for its second, third, and fourth podcast seasons thanks to a grant announced in a November 19, 2021 Lost Women of Science news release (on Cision),

 Journalist and author Katie Hafner, and bioethicist Amy Scharf, today announced that the Lost Women of Science podcast series will continue for an additional three seasons thanks to a grant award of $446,760 from the Gordon and Betty Moore Foundation. The podcast series will continue its partnership with public media organization PRX and the award-winning Scientific American magazine.

The first season features multiple in-depth episodes centered on Dr. Dorothy Andersen, a pediatric pathologist who identified and named cystic fibrosis in 1938. Three episodes are now available across all major podcast listening platforms, including Apple Podcasts, Google Podcasts, Spotify, Stitcher, and Amazon Music. The fourth episode [I believe it’s Season 1] will be released on Thanksgiving Day [November 25, 2021].

Genny Biggs, Special Projects Officer of the Gordon and Betty Moore Foundation said, “We have been excited about this project from our initial conversations and have been pleased to see the results. Our history books have unfortunately taught us too little about these women and we support bringing their stories to the forefront. We hope they will inspire the next generation of female scientists.”

Hafner said, “The response to the podcast so far has been overwhelmingly positive.  We could not be more grateful to the Gordon and Betty Moore Foundation, not only for early funding to help us get started, but for continued support and confidence that will allow us to tell more stories.”

Dr. Maria Klawe, President of Harvey Mudd College and Chair of the Lost Women of Science Initiative Advisory Board, said, “It’s wonderful that the Gordon and Betty Moore Foundation recognizes that women have been making great contributions to science for centuries, even though they’re often not recognized. And the rich storytelling approach has deep impact in helping people understand the importance of a scientist’s work.”

Earlier funding for Lost Women of Science has come from the Gordon and Betty Moore Foundation, Schmidt Futures and the John Templeton Foundation. The Initiative is also partnering with Barnard College at Columbia University, one-third of whose graduates are STEM majors. Harvey Mudd College graciously served as an early Fiscal Sponsor.

To learn more about the Lost Women of Science Initiative, or to donate to this important work, please visit: www.lostwomenofscience.org and follow @lostwomenofsci.

About Lost Women of Science:

The Lost Women of Science Initiative is a 501(c)3 nonprofit with two overarching and interrelated missions: to tell the story of female scientists who made groundbreaking achievements in their fields, yet remain largely unknown to the general public, and to inspire girls and young women to pursue education and careers in STEM. The Initiative’s flagship is its Lost Women of Science podcast series. As a full, mission-driven organization, the Lost Women of Science Initiative plans to digitize and archive its research, and to make all primary source material available to students and historians of science.

About the Gordon and Betty Moore Foundation:

The Gordon and Betty Moore Foundation fosters path-breaking scientific discovery, environmental conservation, patient care improvements and preservation of the special character of the Bay Area. Visit Moore.org and follow @MooreFound.

You can listen to this trailer for Season 1,

The four episodes currently available constitute a four-part series on Dorothy Andersen, her work, and how she got ‘lost’. You can find the podcasts here.

Thank you to the publicist who sent the announcement about the grant!

Heritage science at the University of Kentucky (US)

Before launching into the news, there is very interesting terminology coming up (for a Canadian anyway). The University of Kentucky is also referred to as UK, not to be confused with the United Kingdom, which also uses those initials. As well, the reference to ‘commonwealth’ is a reference to the state of Kentucky’s full name. From the Commonwealth (U.S. State) Wikipedia entry, Note: Links have been removed,

Commonwealth is a term used by four of the 50 states of the United States in their full official state names. “Commonwealth” is a traditional English term for a political community founded for the common good.[1] The four states – Kentucky,[2] Massachusetts,[3] Pennsylvania,[4] and Virginia[5] – are all in the Eastern United States, and prior to the formation of the United States in 1776, were British colonial possessions (although Kentucky did not exist as an independent polity under British rule, instead being a part of Virginia). As such, they share a strong influence of English common law in some of their laws and institutions.[6][7]

On to the news. A November 5, 2021 University of Kentucky (UK) news release (also on EurekAlert but published November 8, 2021) by Lindsey Piercy, Alicia Gregory, and Ben Corwin describes what is being planned at the new EduceLab with a $14 million grant (Note 1: Links have been removed; Note 2: A video of the research team discussing EduceLab is embedded with the news release on the University of Kentucky website),

It’s the signature on a bourbon barrel — it’s the ancient footprints in Mammoth Cave.

Heritage science is all around us and has deep roots in the Commonwealth.

Kentucky’s story begins in prehistoric times, when mammoths roamed the Ohio River Valley at Big Bone Lick.

Now, thanks to a $14 million infrastructure grant from the National Science Foundation, the University of Kentucky is poised to tell that story in new, groundbreaking ways through the lens of heritage science.

“We are at a turning point,” Brent Seales, UK Alumni Professor in the Department of Computer Science, said. “Science and technology present a host of exciting opportunities to the heritage sector. They must not be wasted.”

For more than 20 years, Seales has been working to create and use high-tech, non-invasive tools to rescue hidden texts and restore them to humanity. Dubbed “the man who can read the unreadable,” he has garnered international recognition for his “virtual unwrapping” work to read damaged ancient artifacts — such as the Dead Sea Scrolls and Herculaneum papyrus rolls — without ever physically opening them.

Now, Seales is expanding his research.

Using the NSF infrastructure funding, he has gathered a team of experts from the College of Engineering and the College of Arts and Sciences to build EduceLab — UK’s vision for next-generation heritage science. The collaborative facility will focus on developing innovative artificial intelligence (AI) solutions for the unique challenges presented by cultural heritage objects.

Heritage science draws on engineering, the humanities and the sciences to enhance the understanding of our past, inform the present and guide our future. Ultimately, the goal is to enrich people’s lives and celebrate both the commonality and diversity of the human experience.

“The word Educe means ‘to bring out from data’ or ‘to develop something that is latent but not on its own explicit.’ That’s what we’ve been doing with our virtual unwrapping work. And that context has created an opportunity to expand the very focused question of, ‘Can we read what’s inside a scroll?’ to a broader question of, ‘What heritage science questions can we answer right here in Kentucky?’” Seales explained. “My goal is to rally some of the best researchers here around that theme and build a world-class laboratory that allows us to pose and then answer some of those questions.”

And the quest for answers has already begun.

“Here at UK, we are tremendously well positioned to bring in collaborations, because we have all major colleges in one contiguous campus,” Hugo Reyes-Centeno, an assistant professor in the Department of Anthropology, added. “I see tremendous potential to integrate quantitative analysis and new methodologies that will inform the theoretical perspectives that are the hallmark of the social sciences.”

Multimillion Dollar Renovation to Enhance William S. Webb Museum

EduceLab will function as a user facility for the heritage community and have its home base in UK’s William S. Webb Museum of Anthropology, located on Export Street in Lexington, next to the main campus.

Founded in 1931, the museum remains dedicated to enhancing knowledge about and preservation of the nation’s cultural heritage.

The Webb Museum houses a world-renowned archaeological collection from more than 250 properties listed on the National Register of Historic Places — including Native American, Revolutionary War- and Civil War-era sites.

The collections provide a link to the roots of the Commonwealth and its people. Additionally, the immense research archives provide educational services, practical training and research opportunities for the campus community and beyond — making it the ideal location for EduceLab.

“Within Kentucky, it’s probably a well-kept secret that we have some of the best collections that relate to this question of the first agricultural populations in Eastern North America. The Webb Museum, which is primarily a research center, is not your classic bricks and mortar display. We maintain the collections for the state of Kentucky for research purposes,” Crothers [George Crothers, Director, William S. Webb Museum of Anthropology] said. “This is going to significantly impact what we do in the museum, and in archaeology in general, because it’s providing us access to the most sophisticated and high-level equipment, which we didn’t have before.”

EduceLab has four parts: FLEX, BENCH, MOBILE and CYBER.

BENCH

Modern technology is key to understanding how relics of our past were made.

BENCH will work to acquire the instruments needed to conduct leading-edge materials science, which will help establish a comprehensive workflow.

“My role is to bring the perspective of materials characterization. As a materials engineer, I look at what materials are made of. That helps us understand how a specimen was made in the first place and the technology that was used to create it. And I apply that to metals and alloys or ceramics that are used in industry, but we can also apply that to cultural heritage artifacts,” John Balk, William T. Bryan Professor of Materials Engineering and associate dean for research and graduate studies in UK Engineering, said. “It’s definitely a new application space for me, but we can apply these scientific techniques and really learn about the material — the artifact — and put that in the right context of cultural heritage.”

FLEX

In 2016, Seales’ team developed the Volume Cartographer, a revolutionary computer program for locating and mapping 2D surfaces within a 3D object. The software pipeline is used with micro-CT to generate extremely high-resolution images — making it possible to read a document without ever physically opening it. The charred scroll from En Gedi was the first complete text to be revealed using the software.

While the first-of-its-kind software has profoundly impacted history and literature, not all damaged artifacts are created equal.

Seales and his team have often found it difficult to use equipment that is poorly suited for the odd shapes and sizes — so they decided to build their own.

“With the FLEX cluster, we will have a prototype environment where we can envision, build and test custom instrument configurations built around the heritage object under study,” Seales said. “That is truly a novel approach not seen anywhere else at the mid-scale level.”
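As an aside on the ‘virtual unwrapping’ idea behind the Volume Cartographer: the textbook first step for locating a surface inside 3D scan data is iso-surface extraction. The sketch below shows only that generic step, using scikit-image’s marching cubes on a synthetic volume; it is emphatically not the Volume Cartographer pipeline, just an illustration of the underlying principle.

```python
# pip install scikit-image numpy
import numpy as np
from skimage import measure

# Synthetic stand-in for a micro-CT volume: a bright sphere in a dark cube.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (x**2 + y**2 + z**2 < 0.5).astype(float)

# Extract the 2-D surface where intensity crosses the 0.5 level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```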

MOBILE

It’s one thing to bring an object into the lab. It’s another to go to the object in the field.

By setting up in the parking lot of a museum or by collecting data at an archeological site, the MOBILE team will take EduceLab on the road.

Suzanne Smith, along with faculty members Sean Bailey and Mike Sama, will deploy the use of unmanned aerial systems for field campaigns. “And in that field campaign, we do all kinds of measurements from the air over a larger area,” Smith, director of UK Unmanned Systems Research Consortium, explained. “It’s using all different kinds of sensors that can give different perspectives on the shapes that are being measured, and we can even see through some of the materials — giving us the historical context of that whole area. It’s just such a bigger scale of where that history has happened.”

Additionally, the MOBILE team will use external displays for community involvement. “They can actually see this information coming in,” Smith said. “There are going to be exciting discoveries that happen in the moment, and the public will be able to be right there.”

MOBILE TO CYBER

While MOBILE oversees collecting data, CYBER will be tasked with generating and sharing the data.

As the link between MOBILE and CYBER, that’s where Corey Baker’s expertise in wireless communications comes in. CYBER will be critical when helping to further drive advancements in drone fleets.

“There are a lot of devices in use when it comes to the unmanned vehicles component. They will pick up data and transfer data. But many times, they may not have internet connectivity,” Baker, an assistant professor in computer science, said. “My research focuses on the question, when the internet is limited or nonexistent, how do you build applications and systems to disseminate information?”

Additionally, Baker believes technology should be an enabler not just for researchers, but for the entire community. “These types of projects are not just designed to produce something that looks fancy. But it’s designed to make a difference.”
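Baker’s specialty, building systems that still deliver data when connectivity is intermittent, is often described as delay-tolerant or store-and-forward networking. A toy sketch of the core idea (entirely my own illustration, not Baker’s code):

```python
from collections import deque

class StoreAndForwardNode:
    """Toy delay-tolerant node: queue messages while offline,
    transmit the backlog once a connection appears."""

    def __init__(self):
        self.online = False
        self.outbox = deque()

    def send(self, message):
        if self.online:
            self._transmit(message)
        else:
            self.outbox.append(message)  # hold until we reconnect

    def set_online(self, online):
        self.online = online
        while self.online and self.outbox:
            self._transmit(self.outbox.popleft())

    def _transmit(self, message):
        print(f"transmitting: {message}")

node = StoreAndForwardNode()
node.send("drone image 001")  # queued: no connectivity in the field
node.send("drone image 002")  # queued
node.set_online(True)         # back in range: backlog is flushed
```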

Students Remain Key in Unlocking Sealed Secrets

Over the years, this team of UK faculty members has been just as committed to developing students’ talents. By engaging in hands-on research, students are able to identify an area of interest and jump-start their careers.

“I never would have imagined that I would go into academia to pursue some of the questions that always interested me. But if it were not for that undergraduate research experience that ultimately led me to Europe and to the discovery of this field of heritage science, I probably wouldn’t be here now,” Reyes-Centeno said. “The undergraduate research component is certainly something we’ll be continuing to develop for our students. Our students must have those opportunities.”

The Promise Moving Forward

Seales is considered the foremost expert in the digital restoration of cultural antiquities. To this day, his quest to uncover ancient wisdom is ever evolving.

Overcoming damage incurred by time is no small challenge. But Seales, and his dedicated team, are committed to conquering the seemingly impossible.

“We’re in a time now where our cultural heritage is the key to understanding and embracing our diversity,” he said. “Focusing on heritage science can be key to unlocking, in a positive way, how that heritage can help us understand each other, collaborate together and shape our future. We plan to keep showing the world what can be done, right here at UK.”

You can find out more about EduceLab here.