The Storywrangler, a tool exploring billions of social media messages, could predict political & financial turmoil

Being able to analyze Twitter messages (tweets) in real time is amazing given what I wrote in this January 16, 2013 posting, “Researching tweets (the Twitter kind),” about the US Library of Congress and its attempts to make tweets accessible to scholars,

At least one of the reasons no one has received access to the tweets is that a single search of the archived (2006-2010) tweets alone would take 24 hours, [emphases mine] …

So, bravo to the researchers at the University of Vermont (UVM). A July 16, 2021 news item on ScienceDaily makes the announcement,

For thousands of years, people looked into the night sky with their naked eyes — and told stories about the few visible stars. Then we invented telescopes. In 1840, the philosopher Thomas Carlyle claimed that “the history of the world is but the biography of great men.” Then we started posting on Twitter.

Now scientists have invented an instrument to peer deeply into the billions and billions of posts made on Twitter since 2008 — and have begun to uncover the vast galaxy of stories that they contain.

Caption: UVM scientists have invented a new tool: the Storywrangler. It visualizes the use of billions of words, hashtags and emoji posted on Twitter. In this example from the tool’s online viewer, three global events from 2020 are highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The new research was published in the journal Science Advances. Credit: UVM

A July 15, 2021 UVM news release (also on EurekAlert but published on July 16, 2021) by Joshua Brown, which originated the news item, provides more detail about the work,

“We call it the Storywrangler,” says Thayer Alshaabi, a doctoral student at the University of Vermont who co-led the new research. “It’s like a telescope to look — in real time — at all this data that people share on social media. We hope people will use it themselves, in the same way you might look up at the stars and ask your own questions.”

The new tool can give an unprecedented, minute-by-minute view of popularity, from rising political movements to box office flops; from the staggering success of K-pop to signals of emerging new diseases.

The story of the Storywrangler — a curation and analysis of over 150 billion tweets — and some of its key findings were published on July 16 [2021] in the journal Science Advances.

EXPRESSIONS OF THE MANY

The team of eight scientists who invented Storywrangler — from the University of Vermont, Charles River Analytics, and MassMutual Data Science [emphasis mine] — gather about ten percent of all the tweets made every day, around the globe. For each day, they break these tweets into single bits, as well as pairs and triplets, generating frequencies from more than a trillion words, hashtags, handles, symbols and emoji, like “Super Bowl,” “Black Lives Matter,” “gravitational waves,” “#metoo,” “coronavirus,” and “keto diet.”
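To make that n-gram bookkeeping a bit more concrete, here is a minimal Python sketch of my own (not the team’s actual pipeline, which is described in their paper) showing how one-, two-, and three-word sequences can be tallied from a day’s sample of short texts,

from collections import Counter

def ngrams(tokens, n):
    # Return all contiguous n-word sequences from a list of tokens
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def daily_counts(tweets):
    # Tally 1-, 2-, and 3-gram frequencies for one day's sample of tweets;
    # a real pipeline also has to handle emoji, hashtags, handles, and ~150 languages
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        for n in (1, 2, 3):
            counts.update(ngrams(tokens, n))
    return counts

sample = ["Black Lives Matter", "the Super Bowl was wild", "#metoo gravitational waves"]
print(daily_counts(sample).most_common(5))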

“This is the first visualization tool that allows you to look at one-, two-, and three-word phrases, across 150 different languages [emphasis mine], from the inception of Twitter to the present,” says Jane Adams, a co-author on the new study who recently finished a three-year position as a data-visualization artist-in-residence at UVM’s Complex Systems Center.

The online tool, powered by UVM’s supercomputer at the Vermont Advanced Computing Core, provides a powerful lens for viewing and analyzing the rise and fall of words, ideas, and stories each day among people around the world. “It’s important because it shows major discourses as they’re happening,” Adams says. “It’s quantifying collective attention.” Though Twitter does not represent the whole of humanity, it is used by a very large and diverse group of people, which means that it “encodes popularity and spreading,” the scientists write, giving a novel view of discourse not just of famous people, like political figures and celebrities, but also the daily “expressions of the many,” the team notes.

In one striking test of the vast dataset on the Storywrangler, the team showed that it could be used to potentially predict political and financial turmoil. They examined the percent change in the use of the words “rebellion” and “crackdown” in various regions of the world. They found that the rise and fall of these terms was significantly associated with change in a well-established index of geopolitical risk for those same places.
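I don’t know precisely how the team ran that comparison, but as a rough, hypothetical illustration (invented numbers, using the pandas library), the day-over-day calculation for one region might look something like this,

import pandas as pd

# Hypothetical daily data for one region: word counts, total words sampled, and a risk index
df = pd.DataFrame({
    "rebellion_count": [120, 135, 180, 420, 390],
    "total_words":     [1_000_000, 1_100_000, 1_050_000, 1_200_000, 1_150_000],
    "risk_index":      [0.8, 0.9, 1.1, 2.4, 2.2],
})

# Normalize raw counts into a relative usage frequency, then take the percent change
df["rebellion_rate"] = df["rebellion_count"] / df["total_words"]
df["rate_pct_change"] = df["rebellion_rate"].pct_change()
df["risk_pct_change"] = df["risk_index"].pct_change()

# How closely do changes in word usage track changes in the risk index?
print(df[["rate_pct_change", "risk_pct_change"]].corr())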

WHAT’S HAPPENING?

The global story now being written on social media brings billions of voices — commenting and sharing, complaining and attacking — and, in all cases, recording — about world wars, weird cats, political movements, new music, what’s for dinner, deadly diseases, favorite soccer stars, religious hopes and dirty jokes.

“The Storywrangler gives us a data-driven way to index what regular people are talking about in everyday conversations, not just what reporters or authors have chosen; it’s not just the educated or the wealthy or cultural elites,” says applied mathematician Chris Danforth, a professor at the University of Vermont who co-led the creation of the StoryWrangler with his colleague Peter Dodds. Together, they run UVM’s Computational Story Lab.

“This is part of the evolution of science,” says Dodds, an expert on complex systems and professor in UVM’s Department of Computer Science. “This tool can enable new approaches in journalism, powerful ways to look at natural language processing, and the development of computational history.”

How much a few powerful people shape the course of events has been debated for centuries. But, certainly, if we knew what every peasant, soldier, shopkeeper, nurse, and teenager was saying during the French Revolution, we’d have a richly different set of stories about the rise and reign of Napoleon. “Here’s the deep question,” says Dodds, “what happened? Like, what actually happened?”

GLOBAL SENSOR

The UVM team, with support from the National Science Foundation [emphasis mine], is using Twitter to demonstrate how chatter on distributed social media can act as a kind of global sensor system — of what happened, how people reacted, and what might come next. But other social media streams, from Reddit to 4chan to Weibo, could, in theory, also be used to feed Storywrangler or similar devices: tracing the reaction to major news events and natural disasters; following the fame and fate of political leaders and sports stars; and opening a view of casual conversation that can provide insights into dynamics ranging from racism to employment, emerging health threats to new memes.

In the new Science Advances study, the team presents a sample from the Storywrangler’s online viewer, with three global events highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The Storywrangler dataset records a sudden spike of tweets and retweets using the term “Soleimani” on January 3, 2020, when the United States assassinated the general; the strong rise of “coronavirus” and the virus emoji over the spring of 2020 as the disease spread; and a burst of use of the hashtag “#BlackLivesMatter” on and after May 25, 2020, the day George Floyd was murdered.

“There’s a hashtag that’s being invented while I’m talking right now,” says UVM’s Chris Danforth. “We didn’t know to look for that yesterday, but it will show up in the data and become part of the story.”

Here’s a link to and a citation for the paper,

Storywrangler: A massive exploratorium for sociolinguistic, cultural, socioeconomic, and political timelines using Twitter by Thayer Alshaabi, Jane L. Adams, Michael V. Arnold, Joshua R. Minot, David R. Dewhurst, Andrew J. Reagan, Christopher M. Danforth and Peter Sheridan Dodds. Science Advances 16 Jul 2021: Vol. 7, no. 29, eabe6534 DOI: 10.1126/sciadv.abe6534

This paper is open access.

A couple of comments

I’m glad to see they are looking at phrases in many different languages, although I do experience some hesitation when I consider the two companies involved in this research with the University of Vermont.

Charles River Analytics and MassMutual Data Science would not have been my first guess for corporate involvement but on re-examining the subhead and noting this: “potentially predict political and financial turmoil”, they make perfect sense. Charles River Analytics provides “Solutions to serve the warfighter …”, i.e., soldiers/the military, and MassMutual is an insurance company with a dedicated ‘data science space’ (from the MassMutual Explore Careers Data Science webpage),

What are some key projects that the Data Science team works on?

Data science works with stakeholders throughout the enterprise to automate or support decision making when outcomes are unknown. We help determine the prospective clients that MassMutual should market to, the risk associated with life insurance applicants, and which bonds MassMutual should invest in. [emphases mine]

Of course. The military and financial services. Delightfully, this research is at least partially (mostly?) funded on the public dime, via the US National Science Foundation.

Cyborg soil?

Edith Hammer, a lecturer in biology at Lund University (Sweden), has written a July 22, 2021 essay for The Conversation (h/t July 23, 2021 news item on phys.org) that has everything: mystery, cyborgs, unexpected denizens, and a phenomenon explored for the first time (Note: Links have been removed),

Dig a teaspoon into your nearest clump of soil, and what you’ll emerge with will contain more microorganisms than there are people on Earth. We know this from lab studies that analyse samples of earth scooped from the microbial wild to determine which forms of microscopic life exist in the world beneath our feet.

The problem is, such studies can’t actually tell us how this subterranean kingdom of fungi, flagellates and amoebae operates in the ground. Because they entail the removal of soil from its environment, these studies destroy the delicate structures of mud, water and air in which the soil microbes reside.

This prompted my lab to develop a way to spy on these underground workers, who are indispensable in their role as organic matter recycling agents, without disturbing their micro-habitats.

Our study revealed the dark, dank cities in which soil microbes reside [emphasis mine]. We found labyrinths of tiny highways, skyscrapers, bridges and rivers which are navigated by microorganisms to find food, or to avoid becoming someone’s next meal. This new window into what’s happening underground could help us better appreciate and preserve Earth’s increasingly damaged soils.

Here’s how the soil scientists probed the secrets buried in soil (Note: A link has been removed),

In our study, we developed a new kind of “cyborg soil”, which is half natural and half artificial. It consists of microengineered chips that we either buried in the wild, or surrounded with soil in the lab for enough time for the microbial cities to emerge within the mud.

The chips literally act like windows to the underground. A transparent patch in the otherwise opaque soil, the chip is cut to mimic the pore structures of actual soil, which are often strange and counter-intuitive at the scale that microbes experience them.

Different physical laws become dominant at the micro scale compared to what we’re acquainted with in our macro world. Water clings to surfaces, and resting bacteria get pushed around by the movement of water molecules. Air bubbles form insurmountable barriers for many microorganisms, due to the surface tension of the water around them.

Here’s some of what they found,

When we excavated our first chips, we were met with the full variety of single-celled organisms, nematodes, tiny arthropods and species of bacteria that exist in our soils. Fungal hyphae, which burrow like plant roots underground, had quickly grown into the depths of our cyborg soil pores, creating a direct living connection between the real soil and our chips.

This meant we could study a phenomenon known only from lab studies: the “fungal highways” along which bacteria “hitchhike” to disperse through soil. Bacteria usually disperse through water, so by making some of our chips air-filled we could watch how bacteria smuggle themselves into new pores by following the groping arms of fungal hyphae.

Unexpectedly, we also found a high number of protists – enigmatic single-celled organisms which are neither animal, plant or fungus – in the spaces around hyphae. Clearly they too hitch a ride on the fungal highway – a so-far completely unexplored phenomenon.

The essay has a number of embedded videos and images illustrating a fascinating world in a ‘teaspoon of soil’.

Here’s a link to and a citation for the study by the researchers at Lund University,

Microfluidic chips provide visual access to in situ soil ecology by Paola Micaela Mafla-Endara, Carlos Arellano-Caicedo, Kristin Aleklett, Milda Pucetaite, Pelle Ohlsson & Edith C. Hammer. Communications Biology volume 4, Article number: 889 (2021) DOI: https://doi.org/10.1038/s42003-021-02379-5 Published: 20 July 2021

This paper is open access.

Restoring words with a neuroprosthesis

There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark (spoken: “with up to 93% accuracy at 18 words per minute” vs. written: “with median 74% accuracy at 15 words per minute”).

A July 14, 2021 news item on ScienceDaily announces the latest work on a neuroprosthesis from the University of California at San Francisco (UCSF),

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 [2021] in the New England Journal of Medicine.

A July 14, 2021 UCSF news release (also on EurekAlert), which originated the news item, delves further into the topic,

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Translating Brain Signals into Speech

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

The First 50 Words

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Translating Attempted Speech into Text

To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

Chang and Moses found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
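The paper spells out the actual decoding models; purely as a hypothetical sketch of how a statistical language model can “auto-correct” a neural decoder, here is a toy Viterbi-style search in Python that combines per-word decoder probabilities with bigram transition probabilities (every number here is invented for illustration),

import math

def lm_prob(prev, word):
    # Toy bigram "language model": P(word | prev). A real system uses models
    # trained on large text corpora; these numbers are made up.
    table = {("<s>", "i"): 0.5, ("i", "am"): 0.6, ("am", "not"): 0.2,
             ("am", "good"): 0.3, ("not", "good"): 0.4, ("not", "thirsty"): 0.3}
    return table.get((prev, word), 0.01)  # small floor for unseen word pairs

def decode(decoder_probs):
    # Viterbi search: combine the decoder's per-word probabilities (emissions)
    # with bigram transition probabilities to pick the most likely sentence.
    vocab = list(decoder_probs[0])
    best = {w: (math.log(lm_prob("<s>", w)) + math.log(decoder_probs[0][w]), [w])
            for w in vocab}
    for step in decoder_probs[1:]:
        new_best = {}
        for w in vocab:
            score, path = max(
                (best[p][0] + math.log(lm_prob(p, w)) + math.log(step[w]), best[p][1] + [w])
                for p in vocab)
            new_best[w] = (score, path)
        best = new_best
    return max(best.values())[1]

# Hypothetical per-position word probabilities from the neural decoder
decoder_probs = [
    {"i": 0.6, "am": 0.2, "not": 0.1, "good": 0.05, "thirsty": 0.05},
    {"i": 0.1, "am": 0.5, "not": 0.2, "good": 0.1, "thirsty": 0.1},
    {"i": 0.05, "am": 0.05, "not": 0.2, "good": 0.4, "thirsty": 0.3},
]
print(decode(decoder_probs))  # -> ['i', 'am', 'good']

In the real system the decoder is a neural network trained on BRAVO1’s recorded brain activity; the toy example only shows why adding word-to-word statistics can rescue a noisy word-by-word guess.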

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

… all of UCSF. Funding sources [emphasis mine] included National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which completed in early 2021.

UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

Here’s a link to and a citation for the paper,

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021

This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”

Attosecond imaging technology with record high-harmonic generation

This July 21, 2021 news item on Nanowerk is all about laser pulses and tiny timescales.

Cornell researchers have developed nanostructures that enable record-breaking conversion of laser pulses into high-harmonic generation, paving the way for new scientific tools for high-resolution imaging and studying physical processes that occur at the scale of an attosecond – one quintillionth of a second [emphasis mine].

High-harmonic generation has long been used to merge photons from a pulsing laser into one, ultrashort photon with much higher energy, producing extreme ultraviolet light and X-rays used for a variety of scientific purposes. Traditionally, gases have been used as sources of harmonics, but a research team led by Gennady Shvets, professor of applied and engineering physics in the College of Engineering, has shown that engineered nanostructures have a bright future for this application.

Illustration of an infrared laser hitting a gallium-phosphide metasurface, which efficiently produces even and odd high-harmonic generation. Credit: Daniil Shilkin/Provided

A July 21, 2021 Cornell University news release by Syl Kacapyr (also on EurekAlert), which originated the news item, provides more detail about the nanostructures,

The nanostructures created by the team make up an ultrathin resonant gallium-phosphide metasurface that overcomes many of the usual problems associated with high-harmonic generation in gases and other solids. The gallium-phosphide material permits harmonics of all orders without reabsorbing them, and the specialized structure can interact with the laser pulse’s entire light spectrum.

“Achieving this required engineering of the metasurface’s structure using full-wave simulations,” Shcherbakov [Maxim Shcherbakov] said. “We carefully selected the parameters of the gallium-phosphide particles to fulfill this condition, and then it took a custom nanofabrication flow to bring it to light.”

The result is nanostructures capable of generating both even and odd harmonics – a limitation of most other harmonic materials – covering a wide range of photon energies between 1.3 and 3 electron volts. The record-breaking conversion efficiency enables scientists to observe molecular and electronic dynamics within a material with just one laser shot, helping to preserve samples that may otherwise be degraded by multiple high-powered shots.

The study is the first to observe high-harmonic generated radiation from a single laser pulse, which allowed the metasurface to withstand high powers – five to 10 times higher than previously shown in other metasurfaces.

“It opens up new opportunities to study matter at ultrahigh fields, a regime not readily accessible before,” Shcherbakov said. “With our method, we envision that people can study materials beyond metasurfaces, including but not limited to crystals, 2D materials, single atoms, artificial atomic lattices and other quantum systems.”

Now that the research team has demonstrated the advantages of using nanostructures for high-harmonic generation, it hopes to improve high-harmonic devices and facilities by stacking the nanostructures together to replace a solid-state source, such as crystals.

Here’s a link to and a citation for the paper,

Generation of even and odd high harmonics in resonant metasurfaces using single and multiple ultra-intense laser pulses by Maxim R. Shcherbakov, Haizhong Zhang, Michael Tripepi, Giovanni Sartorello, Noah Talisa, Abdallah AlShafey, Zhiyuan Fan, Justin Twardowski, Leonid A. Krivitsky, Arseniy I. Kuznetsov, Enam Chowdhury & Gennady Shvets. Nature Communications volume 12, Article number: 4185 DOI: https://doi.org/10.1038/s41467-021-24450-9 Published: 07 July 2021

This paper is open access.

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].

The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a., computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state-of-the-art algorithms for sound retrieval, segmentation, background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.

As for Freesound Labs, you can find them here.

‘Find the Birds’ mobile game has a British Columbia (Canada) location

Adam Dhalla in a January 5, 2022 posting on the Nature Conservancy Canada blog announced a new location for a ‘Find the Birds’ game,

Since its launch six months ago …, with an initial Arizona simulated birding location, Find the Birds (a free educational mobile game about birds and conservation) now has over 7,000 players in 46 countries on six continents. In the game, players explore realistic habitats, find and take virtual photos of accurately animated local bird species and complete conservation quests. Thanks in a large part to the creative team at Thought Generation Society (the non-profit game production organization I’m working with), Find the Birds is a Canadian-made success story.

Going back nine months to an April 9, 2021 posting and the first ‘Find the Birds’ announcement by Adam Dhalla for the Nature Conservancy Canada blog,

It is not a stretch to say that our planet is in dire need of more conservationists, and environmentally minded people in general. Birds and birdwatching are gateways to introducing conservation and science to a new generation.

… it seems as though younger generations are often unaware of the amazing world in their backyard. They don’t hear the birdsong emanating from the trees during the morning chorus. …

This problem inspired my dad and me to come up with the original concept for Find the Birds, a free educational mobile game about birds and conservation. I was 10 at the time, and I discovered that I was usually the only kid out birdwatching. So we thought, why not bring the birds to them via the digital technology they are already immersed in?

Find the Birds reflects on the birding and conservation experience. Players travel the globe as an animated character on their smartphone or tablet and explore real-life, picturesque environments, finding different bird species. The unique element of this game is its attention to detail; everything in the game is based on science. …

Here’s a trailer for the game featuring its first location, Arizona,

Now back to Dhalla’s January 5, 2022 posting for more about the latest iteration of the game and other doings (Note: Links have been removed),

Recently, the British Columbia location was added, which features Sawmill Lake in the Okanagan Valley, Tofino on the coast and a journey in the Pacific Ocean. Some of the local bird species included are Steller’s jays (BC’s provincial bird), black oystercatchers and western meadowlarks. Conservation quests include placing nest boxes for northern saw-whet owls and cleaning up beach litter.

I’ve always loved Steller’s jays! We get a lot of them in our backyard. It’s a far lesser known bird than the blue jay, so I wanted to give them some attention. That’s the terrific thing about being the co-creator of the game: I get to help choose the species, the quests — everything! So all the birds in the BC locations are some of my favourites.

The black oystercatcher is another underappreciated species. I’ve seen them along the coasts of BC, where they are relatively common. …

To gauge the game’s impact on conservation education, I recently conducted an online player survey. Of the 101 players who completed the survey, 71 per cent were in the 8–15 age group, which means I am reaching my peers. But 21 per cent were late teens and adults, so the game’s appeal is not limited to children. Fifty-one per cent were male and 49 per cent female: this equality is encouraging, as most games in general have a much smaller percentage of female players.

And the game is helping people connect with nature! Ninety-eight per cent of players said the game increased their appreciation of birds. …

As a result of the game’s reputation and the above data, I was invited to present my findings at the 2022 International Ornithological Congress. So, I will be traveling to Durban, South Africa, next August to spread the word on reaching and teaching a new generation of birders, ornithologists and conservationists. …

You can find the game here at FindtheBirds.com and you can find Thought Generation here.

For the curious, here’s a black oystercatcher caught in the act,

Black oystercatcher (Photo by Tracey Chen, CC BY-NC 4.0) [downloaded from https://www.natureconservancy.ca/en/blog/find-the-birds-british-columbia.html#.YdcjWSaIapr]

Science Policy 101 on January 13, 2022

It was a mysterious retweet from the Canadian Light Source (synchrotron) which led me to CAP_SAC (CAP being the Canadian Association of Physicists and SAC being Student Advisory Council) and their Science Policy 101 Panel,

The CAP Student Advisory Council is hosting a science policy 101 panel Thursday, January 13th at 15h00 EST [3 pm EST].  The (free) registration link can be found here.

What is science policy and how does it interact with the frontiers of physics research? What can you do at the undergraduate or graduate level in order to start contributing to the conversation? Our three panelists will talk about their experiences in science policy, how their backgrounds in physics and chemistry have helped or motivated them in their current careers, and give some tips on how to become involved.

Aimee Gunther is the current Deputy Director of the Quantum Sensors Challenge Program at the National Research Council of Canada. She was a Mitacs Canadian Science Policy Fellow and served as a scientific advisor in quantum topics for Canada’s Defense Research and Development, co-authoring and co-developing the Quantum Science and Technology Strategy for the Department of National Defense and the Canadian Armed Forces. Aimee received her PhD from the University of Waterloo in Quantum Information.  Learn more about Aimee on Linkedin.

Anh-Khoi Trinh currently sits on the board of directors of the Montreal-based, non-profit organization Science & Policy Exchange. Science & Policy Exchange aims to foster the student voice in evidence-based decision making and to bring together leading experts from academia, industry, and government to engage and inform students and the public on issues at the interface of science and policy. Anh-Khoi is currently doing a PhD in string theory and quantum gravity at McGill University. Learn more about Anh-Khoi on Linkedin.

Monika Stolar is a co-founder of ElectSTEM, an evidence-based non-profit organization with the goal of engaging more scientists and engineers in politics. She also currently works as Simon Fraser University’s industry and research relations officer. Monika holds a PhD in organophosphorus materials from the University of Calgary and completed postdoctoral positions at York University and the Massachusetts Institute of Technology.  Learn more about Monika on Linkedin.

I haven’t come across Aimee Gunther or Anh-Khoi Trinh before but Monika Stolar has been mentioned here twice, in an August 16, 2021 posting about Elect STEM and their Periodically Political podcast and again in an August 30, 2021 posting about an upcoming federal election.

Science and stories: an online talk January 5, 2022 and a course starting on January 10, 2022

So far this year all I’ve been posting about are events and contests. Continuing on that theme, I have an event and, something new, a course.

Massey Dialogues on January 5, 2022, 1 – 2 pm PST

“The Art of Science-Telling: How Science Education Can Shape Society” is scheduled for today (Wednesday, January 5, 2022 at 1 pm PST or 4 pm EST). You can find the livestream here on YouTube,

Massey College

Join us for the first Massey Dialogues of 2022 from 4:00-5:00pm ET on the Art of Science-Telling: How Science Education Can Shape Society.

Farah Qaiser (Evidence for Democracy), Dr. Bonnie Schmidt (Let’s Talk Science) and Carolyn Tuohy (Senior Fellow) will discuss what nonprofits can do for science education and policy, moderated by Junior Fellow Keshna Sood.

The Dialogues are open to the public – we invite everyone to join and take part in what will be a very informative online discussion. Participants are invited to submit questions to the speakers in real time via the Chat function to the right of the screen.

——-

To ensure you never miss a Massey Event, subscribe to our YouTube channel: https://www.youtube.com/user/masseyco…

We also invite you to visit masseycollege.ca/calendar for upcoming events.

Follow us on social media:

twitter.com/masseycollege
instagram.com/massey_college
linkedin.com/school/massey-college
facebook.com/MasseyCollege

Support our work: masseycollege.ca/support-us

You can find out more about the Massey Dialogues here. As for the college, it’s affiliated with the University of Toronto as per the information on the College’s Governance webpage.

Simon Fraser University (SFU; Vancouver, Canada) and a science communication course

I stumbled across “Telling Science Stories” being offered for SFU’s Spring 2022 semester in my twitter feed. Apparently there’s still space for students in the course.

I was a little surprised by how hard it was to find basic information such as: when does the course start? Yes, I found that and more; here’s what I managed to dig up,

From the PUB 480/877 Telling Science Stories course description webpage,

In this hands-on course, students will learn the value of sharing research knowledge beyond the university walls, along with the skills necessary to become effective science storytellers.

Climate change, vaccines, artificial intelligence, genetic editing — these are just a few examples of the essential role scientific evidence can play in society. But connecting science and society is no simple task: it requires key publishing and communication skills, as well as an understanding of the values, goals, and needs of the publics who stand to benefit from this knowledge.

This course will provide students with core skills and knowledge needed to share compelling science stories with diverse audiences, in a variety of formats. Whether it’s through writing books, podcasting, or creating science art, students will learn why we communicate science, develop an understanding of the core principles of effective audience engagement, and gain skills in publishing professional science content for print, radio, and online formats. The instructor is herself a science writer and communicator; in addition, students will have the opportunity to learn from a wide range of guest lecturers, including authors, artists, podcasters, and more. While priority will be given to students enrolled in the Publishing Minor, this course is open to all students who are interested in the evolving relationship between science and society.

I’m not sure if an outsider (someone who’s not a member of the SFU student body) can attend but it doesn’t hurt to ask.

The course is being given by Alice Fleerackers, here’s more from her profile page on the ScholCommLab (Scholarly Communications Laboratory) website,

Alice Fleerackers is a researcher and lab manager at the ScholCommLab and a doctoral student at Simon Fraser University’s Interdisciplinary Studies program, where she works under the supervision of Dr. Juan Pablo Alperin to explore how health science is communicated online. Her doctoral research is supported by a Joseph-Armand Bombardier Canada Graduate Scholarship from SSHRC and a Michael Stevenson Graduate Scholarship from SFU.

In addition, Alice volunteers with a number of non-profit organizations in an effort to foster greater public understanding and engagement with science. She is a Research Officer at Art the Science, Academic Liaison of Science Borealis, Board Member of the Science Writers and Communicators of Canada (SWCC), and a member of the Scientific Committee for the Public Communication of Science and Technology Network (PCST). She is also a freelance health and science writer whose work has appeared in the Globe and Mail, National Post, and Nautilus, among other outlets. Find her on Twitter at @FleerackersA.

Logistics such as when and where the course is being held (from the course outline webpage),

Telling Science Stories

Class Number: 4706

Delivery Method: In Person

Course Times + Location: Tu, Th 10:30 AM – 12:20 PM
HCC 2540, Vancouver

Instructor: Alice Fleerackers
afleerac@sfu.ca

According to the Spring 2022 Calendar Academic Dates webpage, the course starts on Monday, January 10, 2022, and I believe the room number (HCC 2540) means the course will be held at SFU’s downtown Vancouver site at Harbour Centre, 515 West Hastings Street.

Given that SFU claims to be “Canada’s leading engaged university,” they do a remarkably poor job of actually engaging with anyone who’s not a member of the community, i.e., an outsider.

Science Says 2022 SciArt Contest (Jan. 3 – 31, 2022) for California (US) residents

Science Says is affiliated with the University of California at Davis (UC Davis). Here’s a little more about the UC Davis group from the Science Says homepage,

We are a team of friendly neighborhood scientists passionate about making science accessible to the general public. We aim to cultivate a community of science communicators at UC Davis dedicated to making scientific research accessible, relevant, and interesting to everyone. 

As for the contest, here’s more from the 2022 Science Art Contest webpage,

Jan 3-31, 2022 @ 12:00am – 11:59pm

We want to feature your science art in our second annual science art competition! The intersection of science and art offers a unique opportunity for creative science communication.

To participate in our contest you must:

1. Submit one piece of work considered artistic and creative: beautiful microscopy, field photography, paintings, crafts, etc.

2. The work must be shareable on our social media platforms. We encourage you to include your handle or name in the submitted image.

3. You must live within California to be considered for prizes.

You may compete in one of three categories: UC Davis affiliate (student, staff, faculty), the local Davis/Sacramento area or California. *If out of state, you can submit your work for honorable mention to be featured on our social media and news release, although you can’t be considered for prizes.

Winners will be determined by popular vote via a Google Form offered through our February newsletter, social media and website. Prizes vary depending on the contest selected. For entrants in either the UC Davis affiliate contest or local Davis/Sacramento contest, first prize will receive a cash prize of $75 and second place will receive a cash prize of $50. For entrants in the California contest, first place will receive a cash prize of $50.

Submit Here

Submissions open the first week of January and close on January 31, 2022. Voting begins February 2, 2022 and ends February 16, 2022. Winners will be announced by social media and a special news release on our website and contacted via email on February 23, 2022. Prizes will be awarded by March 4, 2022.

H/t to Raymond K. Nakamura for his retweet of the competition announcement by the Science Says team on Twitter.

Art/Sci or SciArt?

It depends on who’s talking. An artist will say art/sci or art/science and a scientist will say sciart. The focus, or pride of place, of course, goes to the speaker’s primary interest.

Futures exhibition/festival with fish skin fashion and more at the Smithsonian (Washington, DC), Nov. 20, 2021 to July 6, 2022

Fish leather

Before getting to Futures, here’s a brief excerpt from a June 11, 2021 Smithsonian Magazine exhibition preview article by Gia Yetikyel about one of the contributors, Elisa Palomino-Perez (Note: A link has been removed),

Elisa Palomino-Perez sheepishly admits to believing she was a mermaid as a child. Growing up in Cuenca, Spain in the 1970s and ‘80s, she practiced synchronized swimming and was deeply fascinated with fish. Now, the designer’s love for shiny fish scales and majestic oceans has evolved into an empowering mission, to challenge today’s fashion industry to be more sustainable, by using fish skin as a material.

Luxury fashion is no stranger to the artist, who has worked with designers like Christian Dior, John Galliano and Moschino in her 30-year career. For five seasons in the early 2000s, Palomino-Perez had her own fashion brand, inspired by Asian culture and full of color and embroidery. It was while heading a studio for Galliano in 2002 that she first encountered fish leather: a material made when the skin of tuna, cod, carp, catfish, salmon, sturgeon, tilapia or pirarucu gets stretched, dried and tanned.

The history of using fish leather in fashion is a bit murky. The material does not preserve well in the archeological record, and it’s been often overlooked as a “poor person’s” material due to the abundance of fish as a resource. But Indigenous groups living on coasts and rivers from Alaska to Scandinavia to Asia have used fish leather for centuries. Icelandic fishing traditions can even be traced back to the ninth century. While assimilation policies, like banning native fishing rights, forced Indigenous groups to change their lifestyle, the use of fish skin is seeing a resurgence. Its rise in popularity in the world of sustainable fashion has led to an overdue reclamation of tradition for Indigenous peoples.

In 2017, Palomino-Perez embarked on a PhD in Indigenous Arctic fish skin heritage at London College of Fashion, which is a part of the University of the Arts in London (UAL), where she received her Masters of Arts in 1992. She now teaches at Central Saint Martins at UAL, while researching different ways of crafting with fish skin and working with Indigenous communities to carry on the honored tradition.

Yetikyel’s article is fascinating (apparently Nike has used fish leather in one of its sports shoes) and I encourage you to read her June 11, 2021 article in full; it also covers the history of fish leather use amongst indigenous peoples of the world.

I did some digging and found a few more stories about fish leather. The earlier one is a Canadian Broadcasting Corporation (CBC) November 16, 2017 online news article by Jane Adey,

Designer Arndis Johannsdottir holds up a stunning purse, decorated with shiny strips of gold and silver leather at Kirsuberjatred, an art and design store in downtown Reykjavik, Iceland.

The purse is one of many in a colourful window display that’s drawing in buyers.

Johannsdottir says customers’ eyes often widen when they discover the metallic material is fish skin. 

Johannsdottir, a fish-skin designing pioneer, first came across the product 35 years ago.

She was working as a saddle smith when a woman came into her shop with samples of fish skin her husband had tanned after the war. Hundreds of pieces had been lying in a warehouse for 40 years.

“Nobody wanted it because plastic came on the market and everybody was fond of plastic,” she said.

“After 40 years, it was still very, very strong and the colours were beautiful and … I fell in love with it immediately.”

Johannsdottir bought all the skins the woman had to offer, gave up saddle making and concentrated on fashionable fish skin.

Adey’s November 16, 2017 article goes on to mention another Icelandic fish leather business looking to make fish leather a fashion staple.

Chloe Williams’s April 28, 2020 article for Hakai Magazine explores the process of making fish leather and the new interest in making it,

Tracy Williams slaps a plastic cutting board onto the dining room table in her home in North Vancouver, British Columbia. Her friend, Janey Chang, has already laid out the materials we will need: spoons, seashells, a stone, and snack-sized ziplock bags filled with semi-frozen fish. Williams says something in Squamish and then translates for me: “You are ready to make fish skin.”

Chang peels a folded salmon skin from one of the bags and flattens it on the table. “You can really have at her,” she says, demonstrating how to use the edge of the stone to rub away every fiber of flesh. The scales on the other side of the skin will have to go, too. On a sockeye skin, they come off easily if scraped from tail to head, she adds, “like rubbing a cat backwards.” The skin must be clean, otherwise it will rot or fail to absorb tannins that will help transform it into leather.

Williams and Chang are two of a scant but growing number of people who are rediscovering the craft of making fish skin leather, and they’ve agreed to teach me their methods. The two artists have spent the past five or six years learning about the craft and tying it back to their distinct cultural perspectives. Williams, a member of the Squamish Nation—her ancestral name is Sesemiya—is exploring the craft through her Indigenous heritage. Chang, an ancestral skills teacher at a Squamish Nation school, who has also begun teaching fish skin tanning in other BC communities, is linking the craft to her Chinese ancestry.

Before the rise of manufactured fabrics, Indigenous peoples from coastal and riverine regions around the world tanned or dried fish skins and sewed them into clothing. The material is strong and water-resistant, and it was essential to survival. In Japan, the Ainu crafted salmon skin into boots, which they strapped to their feet with rope. Along the Amur River in northeastern China and Siberia, Hezhen and Nivkh peoples turned the material into coats and thread. In northern Canada, the Inuit made clothing, and in Alaska, several peoples including the Alutiiq, Athabascan, and Yup’ik used fish skins to fashion boots, mittens, containers, and parkas. In the winter, Yup’ik men never left home without qasperrluk—loose-fitting, hooded fish skin parkas—which could double as shelter in an emergency. The men would prop up the hood with an ice pick and pin down the edges to make a tent-like structure.

On a Saturday morning, I visit Aurora Skala in Saanich on Vancouver Island, British Columbia, to learn about the step after scraping and tanning: softening. Skala, an anthropologist working in language revitalization, has taken an interest in making fish skin leather in her spare time. When I arrive at her house, a salmon skin that she has tanned in an acorn infusion—a cloudy, brown liquid now resting in a jar—is stretched out on the kitchen counter, ready to be worked.

Skala dips her fingers in a jar of sunflower oil and rubs it on her hands before massaging it into the skin. The skin smells only faintly of fish; the scent reminds me of salt and smoke, though the skin has been neither salted nor smoked. “Once you start this process, you can’t stop,” she says. If the skin isn’t worked consistently, it will stiffen as it dries.

Softening the leather with oil takes about four hours, Skala says. She stretches the skin between clenched hands, pulling it in every direction to loosen the fibers while working in small amounts of oil at a time. She’ll also work her skins across other surfaces for extra softening; later, she’ll take this piece outside and rub it back and forth along a metal cable attached to a telephone pole. Her pace is steady, unhurried, soothing. Back in the day, people likely made fish skin leather alongside other chores related to gathering and processing food or fibers, she says. The skin will be done when it’s soft and no longer absorbs oil.

Onto the exhibition.

Futures (November 20, 2021 to July 6, 2022 at the Smithsonian)

A February 24, 2021 Smithsonian Magazine article by Meilan Solly serves as an announcement for the Futures exhibition/festival (Note: Links have been removed),

When the Smithsonian’s Arts and Industries Building (AIB) opened to the public in 1881, observers were quick to dub the venue—then known as the National Museum—America’s “Palace of Wonders.” It was a fitting nickname: Over the next century, the site would go on to showcase such pioneering innovations as the incandescent light bulb, the steam locomotive, Charles Lindbergh’s Spirit of St. Louis and space-age rockets.

“Futures,” an ambitious, immersive experience set to open at AIB this November, will act as a “continuation of what the [space] has been meant to do” from its earliest days, says consulting curator Glenn Adamson. “It’s always been this launchpad for the Smithsonian itself,” he adds, paving the way for later museums as “a nexus between all of the different branches of the [Institution].” …

Part exhibition and part festival, “Futures”—timed to coincide with the Smithsonian’s 175th anniversary—takes its cue from the world’s fairs of the 19th and 20th centuries, which introduced attendees to the latest technological and scientific developments in awe-inspiring celebrations of human ingenuity. Sweeping in scale (the building-wide exploration spans a total of 32,000 square feet) and scope, the show is set to feature historic artifacts loaned from numerous Smithsonian museums and other institutions, large-scale installations, artworks, interactive displays and speculative designs. It will “invite all visitors to discover, debate and delight in the many possibilities for our shared future,” explains AIB director Rachel Goslins in a statement.

“Futures” is split into four thematic halls, each with its own unique approach to the coming centuries. “Futures Past” presents visions of the future imagined by prior generations, as told through objects including Alexander Graham Bell’s experimental telephone, an early android and a full-scale Buckminster Fuller geodesic dome. “In hindsight, sometimes [a prediction is] amazing,” says Adamson, who curated the history-centric section. “Sometimes it’s sort of funny. Sometimes it’s a little dismaying.”

“Futures That Work” continues to explore the theme of technological advancement, but with a focus on problem-solving rather than the lessons of the past. Climate change is at the fore of this section, with highlighted solutions ranging from Capsula Mundi’s biodegradable burial urns to sustainable bricks made out of mushrooms and purely molecular artificial spices that cut down on food waste while preserving natural resources.

“Futures That Inspire,” meanwhile, mimics AIB’s original role as a place of wonder and imagination. “If I were bringing a 7-year-old, this is probably where I would take them first,” says Adamson. “This is where you’re going to be encountering things that maybe look a bit more like science fiction”—for instance, flying cars, self-sustaining floating cities and Afrofuturist artworks.

The final exhibition hall, “Futures That Unite,” emphasizes human relationships, discussing how connections between people can produce a more equitable society. Among others, the list of featured projects includes (Im)possible Baby, a speculative design endeavor that imagines what same-sex couples’ children might look like if they shared both parents’ DNA, and Not The Only One (N’TOO), an A.I.-assisted oral history project. [all emphases mine]

I haven’t done justice to Solly’s February 24, 2021 article, which features embedded images and offers a more hopeful view of the future than is currently the fashion.

Futures asks: Would you like to plan the future?

Nate Berg’s November 22, 2021 article for Fast Company features an interactive urban planning game that’s part of the Futures exhibition/festival,

The Smithsonian Institution wants you to imagine the almost ideal city block of the future. Not the perfect block, not utopia, but the kind of urban place where you get most of what you want, and so does everybody else.

Call it urban design by compromise. With a new interactive multiplayer game, the museum is hoping to show that the urban spaces of the future can achieve mutual goals only by being flexible and open to the needs of other stakeholders.

The game is designed for three players, each in the role of the city’s mayor, a real estate developer, or an ecologist. The roles each have their own primary goals – the mayor wants a well-served populace, the developer wants to build successful projects, and the ecologist wants the urban environment to coexist with the natural environment. Each role takes turns adding to the block, either in discrete projects or by amending what another player has contributed. Options are varied, but include everything from traditional office buildings and parks to community centers and algae farms. The players each try to achieve their own goals on the block, while facing the reality that other players may push the design in unexpected directions. These tradeoffs and their impact on the block are explained by scores on four basic metrics: daylight, carbon footprint, urban density, and access to services. How each player builds onto the block can bring scores up or down.

To create the game, the Smithsonian teamed up with Autodesk, the maker of architectural design tools like AutoCAD, an industry standard. Autodesk developed a tool for AI-based generative design that offers up options for a city block’s design, using computing power to make suggestions on what could go where and how aiming to achieve one goal, like boosting residential density, might detract from or improve another set of goals, like creating open space. “Sometimes you’ll do something that you think is good but it doesn’t really help the overall score,” says Brian Pene, director of emerging technology at Autodesk. “So that’s really showing people to take these tradeoffs and try attributes other than what achieves their own goals.” The tool is meant to show not how AI can generate the perfect design, but how the differing needs of various stakeholders inevitably require some tradeoffs and compromises.

Futures online and in person

Here are links to Futures online and information about visiting in person,

For its 175th anniversary, the Smithsonian is looking forward.

What do you think of when you think of the future? FUTURES is the first building-wide exploration of the future on the National Mall. Designed by the award-winning Rockwell Group, FUTURES spans 32,000 square feet inside the Arts + Industries Building. Now on view until July 6, 2022, FUTURES is your guide to a vast array of interactives, artworks, technologies, and ideas that are glimpses into humanity’s next chapter. You are, after all, only the latest in a long line of future makers.

Smell a molecule. Clean your clothes in a wetland. Meditate with an AI robot. Travel through space and time. Watch water being harvested from air. Become an emoji. The FUTURES is yours to decide, debate, delight. We invite you to dream big, and imagine not just one future, but many possible futures on the horizon—playful, sustainable, inclusive. In moments of great change, we dare to be hopeful. How will you create the future you want to live in?

Happy New Year!