Monthly Archives: January 2022

Tamarind shells turned into carbon nanosheets for supercapacitors

For anyone who needs a shot of happiness, this is a very happy scientist,

Caption: Assistant Professor (Steve) Cuong Dang, from NTU’s School of Electrical and Electronic Engineering, who led the study, displaying pieces of tamarind shell, which were integral to the study. Credit to NTU Singapore

A July 14, 2021 news item on ScienceDaily describes the source of assistant professor (Steve) Cuong Dang’s happiness,

Shells of tamarind, a tropical fruit consumed worldwide, are discarded during food production. As they are bulky, tamarind shells take up a considerable amount of space in landfills where they are disposed as agricultural waste.

However, a team of international scientists led by Nanyang Technological University, Singapore (NTU Singapore) has found a way to deal with the problem. By processing the tamarind shells which are rich in carbon, the scientists converted the waste material into carbon nanosheets, which are a key component of supercapacitors – energy storage devices that are used in automobiles, buses, electric vehicles, trains, and elevators.

The study reflects NTU’s commitment to address humanity’s grand challenges on sustainability as part of its 2025 strategic plan, which seeks to accelerate the translation of research discoveries into innovations that mitigate our impact on the environment.

A July 14, 2021 NTU press release (also here [scroll down to click on the link to the full press release] and on EurekAlert but published July 13, 2021), which originated the news item, delves further into the topic,

The team, made up of researchers from NTU Singapore, the Western Norway University of Applied Sciences in Norway, and Alagappa University in India, believes that these nanosheets, when scaled up, could be an eco-friendly alternative to their industrially produced counterparts, and cut down on waste at the same time.

Assistant Professor (Steve) Cuong Dang, from NTU’s School of Electrical and Electronic Engineering, who led the study, said: “Through a series of analysis, we found that the performance of our tamarind shell-derived nanosheets was comparable to their industrially made counterparts in terms of porous structure and electrochemical properties. The process to make the nanosheets is also the standard method to produce active carbon nanosheets.”

Professor G. Ravi, Head, Department of Physics, who co-authored the study with Asst Prof Dr R. Yuvakkumar, who are both from Alagappa University, said: “The use of tamarind shells may reduce the amount of space required for landfills, especially in regions in Asia such as India, one of the world’s largest producers of tamarind, which is also grappling with waste disposal issues.”

The study was published in the peer-reviewed scientific journal Chemosphere in June [2021].

The step-by-step recipe for carbon nanosheets

To manufacture the carbon nanosheets, the researchers first washed tamarind fruit shells and dried them at 100°C for around six hours, before grinding them into powder.

The scientists then baked the powder in a furnace for 150 minutes at 700–900°C in the absence of oxygen to convert it into ultrathin sheets of carbon known as nanosheets.

Tamarind shells are rich in carbon and porous in nature, making them an ideal material from which to manufacture carbon nanosheets.

A common material used to produce carbon nanosheets is industrial hemp fibre. However, hemp must be heated at over 180°C for 24 hours – four times longer than the tamarind shells’ six-hour drying step, and at a higher temperature – before it is subjected to further intense heat to convert it into carbon nanosheets.
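For the curious, here's a minimal Python sketch laying the two processes side by side (the parameter values are taken from the press release above; the "four times longer" comparison is between hemp's 24-hour pre-treatment and the tamarind shells' six-hour drying step):

# Illustrative comparison of the published process parameters
# (figures from the NTU press release; not a lab protocol).
tamarind = {"dry_temp_C": 100, "dry_hours": 6,
            "pyrolysis_temp_C": (700, 900), "pyrolysis_hours": 150 / 60}
hemp = {"pretreat_temp_C": 180, "pretreat_hours": 24}

ratio = hemp["pretreat_hours"] / tamarind["dry_hours"]
print(f"Hemp pre-treatment runs {ratio:.0f}x longer than tamarind drying")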

Professor Dhayalan Velauthapillai, Head of the research group for Advanced Nanomaterials for Clean Energy and Health Applications at Western Norway University of Applied Sciences, who participated in the study, said: “Carbon nanosheets comprise of layers of carbon atoms arranged in interconnecting hexagons, like a honeycomb. The secret behind their energy storing capabilities lies in their porous structure leading to large surface area which help the material to store large amounts of electric charges.”

The tamarind shell-derived nanosheets also showed good thermal stability and electric conductivity, making them promising options for energy storage.
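To make "energy storing capabilities" a little more concrete: the energy a capacitor holds follows E = ½CV², and a more porous electrode means more surface area, higher capacitance C, and therefore more stored energy. A quick illustrative calculation (ballpark supercapacitor figures, not values from the paper):

def stored_energy_joules(capacitance_farads, voltage_volts):
    # E = 1/2 * C * V^2, the standard capacitor energy relation.
    return 0.5 * capacitance_farads * voltage_volts ** 2

# 2.7 V is a typical single-cell supercapacitor rating; the capacitances
# are hypothetical round numbers spanning the commercial range.
for C in (100, 500, 3000):
    print(f"C = {C:4d} F at 2.7 V -> {stored_energy_joules(C, 2.7):,.0f} J")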

The researchers hope to explore larger scale production of the carbon nanosheets with agricultural partners. They are also working on reducing the energy needed for the production process, making it more environmentally friendly, and are seeking to improve the electrochemical properties of the nanosheets.

The team also hopes to explore the possibility of using different types of fruit skins or shells to produce carbon nanosheets.

Here’s a link to and a citation for the paper,

Cleaner production of tamarind fruit shell into bio-mass derived porous 3D-activated carbon nanosheets by CVD technique for supercapacitor applications by V. Thirumal, K. Dhamodharan, R. Yuvakkumar, G. Ravi, B. Saravanakumar, M. Thambidurai, Cuong Dang, Dhayalan Velauthapillai. Chemosphere Volume 282, November 2021, 131033 DOI: https://doi.org/10.1016/j.chemosphere.2021.131033 Available online 2 June 2021.

This paper is behind a paywall.

Because we could all do with a little more happiness these days,

Caption: (L-R) Senior Research Fellow Dr Thambidurai Mariyappan, also from NTU’s School of Electrical and Electronic Engineering, who was part of the study, and Asst Prof Dang, holding up tamarind pods. Credit to NTU Singapore

The Storywrangler, a tool exploring billions of social media messages, could predict political & financial turmoil

Being able to analyze Twitter messages (tweets) in real time is amazing given what I wrote in this January 16, 2013 posting titled “Researching tweets (the Twitter kind)” about the US Library of Congress and its attempts to make tweets accessible to scholars,

At least one of the reasons no one has received access to the tweets is that a single search of the archived (2006–2010) tweets alone would take 24 hours, [emphases mine] …

So, bravo to the researchers at the University of Vermont (UVM). A July 16, 2021 news item on ScienceDaily makes the announcement,

For thousands of years, people looked into the night sky with their naked eyes — and told stories about the few visible stars. Then we invented telescopes. In 1840, the philosopher Thomas Carlyle claimed that “the history of the world is but the biography of great men.” Then we started posting on Twitter.

Now scientists have invented an instrument to peer deeply into the billions and billions of posts made on Twitter since 2008 — and have begun to uncover the vast galaxy of stories that they contain.

Caption: UVM scientists have invented a new tool: the Storywrangler. It visualizes the use of billions of words, hashtags and emoji posted on Twitter. In this example from the tool’s online viewer, three global events from 2020 are highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The new research was published in the journal Science Advances. Credit: UVM

A July 15, 2021 UVM news release (also on EurekAlert but published on July 16, 2021) by Joshua Brown, which originated the news item, provides more detail about the work,

“We call it the Storywrangler,” says Thayer Alshaabi, a doctoral student at the University of Vermont who co-led the new research. “It’s like a telescope to look — in real time — at all this data that people share on social media. We hope people will use it themselves, in the same way you might look up at the stars and ask your own questions.”

The new tool can give an unprecedented, minute-by-minute view of popularity, from rising political movements to box office flops; from the staggering success of K-pop to signals of emerging new diseases.

The story of the Storywrangler — a curation and analysis of over 150 billion tweets — and some of its key findings were published on July 16 [2021] in the journal Science Advances.

EXPRESSIONS OF THE MANY

The team of eight scientists who invented Storywrangler — from the University of Vermont, Charles River Analytics, and MassMutual Data Science [emphasis mine]– gather about ten percent of all the tweets made every day, around the globe. For each day, they break these tweets into single bits, as well as pairs and triplets, generating frequencies from more than a trillion words, hashtags, handles, symbols and emoji, like “Super Bowl,” “Black Lives Matter,” “gravitational waves,” “#metoo,” “coronavirus,” and “keto diet.”
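The release doesn't include the pipeline itself, but the core operation it describes (splitting each tweet into 1-, 2-, and 3-grams and tallying daily frequencies) can be sketched in a few lines of Python. This is a toy illustration, not the Storywrangler's actual implementation:

from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-word sequences in a token list.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def daily_frequencies(tweets):
    # Tally 1-, 2-, and 3-grams across one day's tweets.
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        for n in (1, 2, 3):
            counts.update(ngrams(tokens, n))
    return counts

day = ["black lives matter protests grow", "gravitational waves detected again"]
print(daily_frequencies(day).most_common(3))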

“This is the first visualization tool that allows you to look at one-, two-, and three-word phrases, across 150 different languages [emphasis mine], from the inception of Twitter to the present,” says Jane Adams, a co-author on the new study who recently finished a three-year position as a data-visualization artist-in-residence at UVM’s Complex Systems Center.

The online tool, powered by UVM’s supercomputer at the Vermont Advanced Computing Core, provides a powerful lens for viewing and analyzing the rise and fall of words, ideas, and stories each day among people around the world. “It’s important because it shows major discourses as they’re happening,” Adams says. “It’s quantifying collective attention.” Though Twitter does not represent the whole of humanity, it is used by a very large and diverse group of people, which means that it “encodes popularity and spreading,” the scientists write, giving a novel view of discourse not just of famous people, like political figures and celebrities, but also the daily “expressions of the many,” the team notes.

In one striking test of the vast dataset on the Storywrangler, the team showed that it could be used to potentially predict political and financial turmoil. They examined the percent change in the use of the words “rebellion” and “crackdown” in various regions of the world. They found that the rise and fall of these terms was significantly associated with change in a well-established index of geopolitical risk for those same places.
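In outline, that test compares two time series: the day-over-day percent change in a word's frequency, and a geopolitical risk index for the same region. Here's a toy version of the comparison (hypothetical numbers, and far simpler than the paper's statistics; statistics.correlation needs Python 3.10+):

import statistics

def pct_change(series):
    # Day-over-day percent change.
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

rebellion_freq = [120, 118, 130, 410, 390, 205]  # hypothetical daily counts
risk_index = [1.0, 1.0, 1.1, 2.4, 2.2, 1.5]      # hypothetical index values

r = statistics.correlation(pct_change(rebellion_freq), pct_change(risk_index))
print(f"Pearson r between the two change series: {r:.2f}")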

WHAT’S HAPPENING?

The global story now being written on social media brings billions of voices — commenting and sharing, complaining and attacking — and, in all cases, recording — about world wars, weird cats, political movements, new music, what’s for dinner, deadly diseases, favorite soccer stars, religious hopes and dirty jokes.

“The Storywrangler gives us a data-driven way to index what regular people are talking about in everyday conversations, not just what reporters or authors have chosen; it’s not just the educated or the wealthy or cultural elites,” says applied mathematician Chris Danforth, a professor at the University of Vermont who co-led the creation of the Storywrangler with his colleague Peter Dodds. Together, they run UVM’s Computational Story Lab.

“This is part of the evolution of science,” says Dodds, an expert on complex systems and professor in UVM’s Department of Computer Science. “This tool can enable new approaches in journalism, powerful ways to look at natural language processing, and the development of computational history.”

How much a few powerful people shape the course of events has been debated for centuries. But, certainly, if we knew what every peasant, soldier, shopkeeper, nurse, and teenager was saying during the French Revolution, we’d have a richly different set of stories about the rise and reign of Napoleon. “Here’s the deep question,” says Dodds, “what happened? Like, what actually happened?”

GLOBAL SENSOR

The UVM team, with support from the National Science Foundation [emphasis mine], is using Twitter to demonstrate how chatter on distributed social media can act as a kind of global sensor system — of what happened, how people reacted, and what might come next. But other social media streams, from Reddit to 4chan to Weibo, could, in theory, also be used to feed Storywrangler or similar devices: tracing the reaction to major news events and natural disasters; following the fame and fate of political leaders and sports stars; and opening a view of casual conversation that can provide insights into dynamics ranging from racism to employment, emerging health threats to new memes.

In the new Science Advances study, the team presents a sample from the Storywrangler’s online viewer, with three global events highlighted: the death of Iranian general Qasem Soleimani; the beginning of the COVID-19 pandemic; and the Black Lives Matter protests following the murder of George Floyd by Minneapolis police. The Storywrangler dataset records a sudden spike of tweets and retweets using the term “Soleimani” on January 3, 2020, when the United States assassinated the general; the strong rise of “coronavirus” and the virus emoji over the spring of 2020 as the disease spread; and a burst of use of the hashtag “#BlackLivesMatter” on and after May 25, 2020, the day George Floyd was murdered.

“There’s a hashtag that’s being invented while I’m talking right now,” says UVM’s Chris Danforth. “We didn’t know to look for that yesterday, but it will show up in the data and become part of the story.”

Here’s a link to and a citation for the paper,

Storywrangler: A massive exploratorium for sociolinguistic, cultural, socioeconomic, and political timelines using Twitter by Thayer Alshaabi, Jane L. Adams, Michael V. Arnold, Joshua R. Minot, David R. Dewhurst, Andrew J. Reagan, Christopher M. Danforth and Peter Sheridan Dodds. Science Advances 16 Jul 2021: Vol. 7, no. 29, eabe6534 DOI: 10.1126/sciadv.abe6534

This paper is open access.

A couple of comments

I’m glad to see they are looking at phrases in many different languages, although I do experience some hesitation when I consider the two companies involved in this research with the University of Vermont.

Charles River Analytics and MassMutual Data Science would not have been my first guess for corporate involvement but on re-examining the subhead and noting this: “potentially predict political and financial turmoil”, they make perfect sense. Charles River Analytics provides “Solutions to serve the warfighter …”, i.e., soldiers/the military, and MassMutual is an insurance company with a dedicated ‘data science space’ (from the MassMutual Explore Careers Data Science webpage),

What are some key projects that the Data Science team works on?

Data science works with stakeholders throughout the enterprise to automate or support decision making when outcomes are unknown. We help determine the prospective clients that MassMutual should market to, the risk associated with life insurance applicants, and which bonds MassMutual should invest in. [emphases mine]

Of course. The military and financial services. Delightfully, this research is at least partially (mostly?) funded on the public dime, the US National Science Foundation.

Cyborg soil?

Edith Hammer, lecturer (Biology) at Lund University (Sweden) has written a July 22, 2021 essay for The Conversation (h/t July 23, 2021 news item on phys.org) that has everything: mystery, cyborgs, unexpected denizens, and a phenomenon explored for the first time (Note: Links have been removed),

Dig a teaspoon into your nearest clump of soil, and what you’ll emerge with will contain more microorganisms than there are people on Earth. We know this from lab studies that analyse samples of earth scooped from the microbial wild to determine which forms of microscopic life exist in the world beneath our feet.

The problem is, such studies can’t actually tell us how this subterranean kingdom of fungi, flagellates and amoebae operates in the ground. Because they entail the removal of soil from its environment, these studies destroy the delicate structures of mud, water and air in which the soil microbes reside.

This prompted my lab to develop a way to spy on these underground workers, who are indispensable in their role as organic matter recycling agents, without disturbing their micro-habitats.

Our study revealed the dark, dank cities in which soil microbes reside [emphasis mine]. We found labyrinths of tiny highways, skyscrapers, bridges and rivers which are navigated by microorganisms to find food, or to avoid becoming someone’s next meal. This new window into what’s happening underground could help us better appreciate and preserve Earth’s increasingly damaged soils.

Here’s how the soil scientists probed the secrets buried in soil (Note: A link has been removed),

In our study, we developed a new kind of “cyborg soil”, which is half natural and half artificial. It consists of microengineered chips that we either buried in the wild, or surrounded with soil in the lab for enough time for the microbial cities to emerge within the mud.

The chips literally act like windows to the underground. A transparent patch in the otherwise opaque soil, the chip is cut to mimic the pore structures of actual soil, which are often strange and counter-intuitive at the scale that microbes experience them.

Different physical laws become dominant at the micro scale compared to those we’re acquainted with in our macro world. Water clings to surfaces, and resting bacteria get pushed around by the movement of water molecules. Air bubbles form insurmountable barriers for many microorganisms, due to the surface tension of the water around them.
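One way to see why different physical laws take over at that scale is the Reynolds number, Re = ρvL/μ, which compares inertial to viscous forces. For a swimming bacterium it is vanishingly small, so water effectively behaves like syrup. A back-of-envelope calculation with textbook ballpark values (not figures from the study):

# Reynolds number Re = rho * v * L / mu (inertia vs. viscosity).
rho = 1000.0  # density of water, kg/m^3
mu = 1.0e-3   # viscosity of water, Pa*s
v = 30e-6     # bacterial swimming speed, ~30 micrometres/s
L = 1e-6      # cell size, ~1 micrometre

Re = rho * v * L / mu
print(f"Re = {Re:.0e}")  # ~3e-05: viscosity utterly dominates inertia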

Here’s some of what they found,

When we excavated our first chips, we were met with the full variety of single-celled organisms, nematodes, tiny arthropods and species of bacteria that exist in our soils. Fungal hyphae, which burrow like plant roots underground, had quickly grown into the depths of our cyborg soil pores, creating a direct living connection between the real soil and our chips.

This meant we could study a phenomenon known only from lab studies: the “fungal highways” along which bacteria “hitchhike” to disperse through soil. Bacteria usually disperse through water, so by making some of our chips air-filled we could watch how bacteria smuggle themselves into new pores by following the groping arms of fungal hyphae.

Unexpectedly, we also found a high number of protists – enigmatic single-celled organisms which are neither animal, plant or fungus – in the spaces around hyphae. Clearly they too hitch a ride on the fungal highway – a so-far completely unexplored phenomenon.

The essay has a number of embedded videos and images illustrating a fascinating world in a ‘teaspoon of soil’.

Here’s a link to and a citation for the study by the researchers at Lund University,

Microfluidic chips provide visual access to in situ soil ecology by Paola Micaela Mafla-Endara, Carlos Arellano-Caicedo, Kristin Aleklett, Milda Pucetaite, Pelle Ohlsson & Edith C. Hammer. Communications Biology volume 4, Article number: 889 (2021) DOI: https://doi.org/10.1038/s42003-021-02379-5 Published: 20 July 2021

This paper is open access.

Restoring words with a neuroprosthesis

There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark (spoken: “with up to 93% accuracy at 18 words per minute” vs. written: “with median 74% accuracy at 15 words per minute”).

A July 14, 2021 news item on ScienceDaily announces the latest work on a neuroprosthetic from the University of California at San Francisco (UCSF),

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 [2021] in the New England Journal of Medicine.

A July 14, 2021 UCSF news release (also on EurekAlert), which originated the news item, delves further into the topic,

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Translating Brain Signals into Speech

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

The First 50 Words

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Translating Attempted Speech into Text

To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

Chang and Moses found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
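The news release doesn't spell out the algorithm, but the "auto-correct" idea, weighting the classifier's per-word probabilities by a language model's expectations about word sequences, can be sketched like this (a hypothetical toy, not the study's decoder):

def decode_word(classifier_probs, bigram_probs, prev_word):
    # Pick the word maximizing P(word | brain signals) * P(word | previous word).
    return max(classifier_probs,
               key=lambda w: classifier_probs[w] * bigram_probs.get((prev_word, w), 1e-6))

# Hypothetical outputs for one attempted word following "not":
classifier_probs = {"thirsty": 0.40, "thursday": 0.38, "family": 0.22}
bigram_probs = {("not", "thirsty"): 0.30, ("not", "thursday"): 0.01}

print(decode_word(classifier_probs, bigram_probs, prev_word="not"))  # thirsty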

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

… all of UCSF. Funding sources [emphasis mine] included National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which completed in early 2021.

UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

Here’s a link to and a citation for the paper,

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021

This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”

*Sept. 4, 2023 I have made a few minor corrections (a) removed an extra space (b) removed an extra ‘a’.

Attosecond imaging technology with record high-harmonic generation

This July 21, 2021 news item on Nanowerk is all about laser pulses and tiny timescales.

Cornell researchers have developed nanostructures that enable record-breaking conversion of laser pulses into high-harmonic generation, paving the way for new scientific tools for high-resolution imaging and studying physical processes that occur at the scale of an attosecond – one quintillionth of a second [emphasis mine].

High-harmonic generation has long been used to merge photons from a pulsing laser into one, ultrashort photon with much higher energy, producing extreme ultraviolet light and X-rays used for a variety of scientific purposes. Traditionally, gases have been used as sources of harmonics, but a research team led by Gennady Shvets, professor of applied and engineering physics in the College of Engineering, has shown that engineered nanostructures have a bright future for this application.

Illustration of an infrared laser hitting a gallium-phosphide metasurface, which efficiently produces even and odd high-harmonic generation. Credit: Daniil Shilkin/Provided

A July 21, 2021 Cornell University news release by Syl Kacapyr (also on EurekAlert), which originated the news item, provides more detail about the nanostructures,

The nanostructures created by the team make up an ultrathin resonant gallium-phosphide metasurface that overcomes many of the usual problems associated with high-harmonic generation in gases and other solids. The gallium-phosphide material permits harmonics of all orders without reabsorbing them, and the specialized structure can interact with the laser pulse’s entire light spectrum.

“Achieving this required engineering of the metasurface’s structure using full-wave simulations,” Shcherbakov [Maxim Shcherbakov] said. “We carefully selected the parameters of the gallium-phosphide particles to fulfill this condition, and then it took a custom nanofabrication flow to bring it to light.”

The result is nanostructures capable of generating both even and odd harmonics – a limitation of most other harmonic materials – covering a wide range of photon energies between 1.3 and 3 electron volts. The record-breaking conversion efficiency enables scientists to observe molecular and electronic dynamics within a material with just one laser shot, helping to preserve samples that may otherwise be degraded by multiple high-powered shots.
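For scale, photon energy converts to wavelength via E = hc/λ, or handily λ(nm) ≈ 1240/E(eV), so the quoted 1.3 to 3 electron-volt range runs from the near-infrared to the violet:

# lambda(nm) ~ 1240 / E(eV), from E = h*c / lambda.
for energy_eV in (1.3, 3.0):
    print(f"{energy_eV} eV -> {1240 / energy_eV:.0f} nm")
# 1.3 eV ~ 954 nm (near-infrared); 3.0 eV ~ 413 nm (violet)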

The study is the first to observe high-harmonic generated radiation from a single laser pulse, which allowed the metasurface to withstand high powers – five to 10 times higher than previously shown in other metasurfaces.

“It opens up new opportunities to study matter at ultrahigh fields, a regime not readily accessible before,” Shcherbakov said. “With our method, we envision that people can study materials beyond metasurfaces, including but not limited to crystals, 2D materials, single atoms, artificial atomic lattices and other quantum systems.”

Now that the research team has demonstrated the advantages of using nanostructures for high-harmonic generation, it hopes to improve high-harmonic devices and facilities by stacking the nanostructures together to replace a solid-state source, such as crystals.

Here’s a link to and a citation for the paper,

Generation of even and odd high harmonics in resonant metasurfaces using single and multiple ultra-intense laser pulses by Maxim R. Shcherbakov, Haizhong Zhang, Michael Tripepi, Giovanni Sartorello, Noah Talisa, Abdallah AlShafey, Zhiyuan Fan, Justin Twardowski, Leonid A. Krivitsky, Arseniy I. Kuznetsov, Enam Chowdhury & Gennady Shvets. Nature Communications volume 12, Article number: 4185 DOI: https://doi.org/10.1038/s41467-021-24450-9 Published: 07 July 2021

This paper is open access.

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will
be providing a seminar titled “The rise of Creative AI & its ethics”
[Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and
Machine [sic].

The Centre for Humans and Machines invites interested attendees to
our public seminars, which feature scientists from our institute and
experts from all over the world. Their seminars usually take 1 hour and
provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and
will be a great opportunity to connect with colleagues and friends of
the Lab on European and East Coast time. For more information and the
link, head to the Centre for Humans and Machines’ Seminars page linked
below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged, with implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes that interpret them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available from the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.
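For anyone who wants to poke at the retrieval step themselves, Freesound's public API will return candidate sounds for a text query. Here's a minimal sketch; it approximates only the first stage AuMe describes (none of its segmentation or classification), you need your own free API token, and the endpoint shape is taken from Freesound's v2 API:

import requests

API_TOKEN = "your-freesound-api-token"  # placeholder; register at freesound.org

def search_sounds(query, limit=5):
    # Text search against Freesound's v2 API; returns (id, name) pairs.
    resp = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={"query": query, "token": API_TOKEN, "page_size": limit},
    )
    resp.raise_for_status()
    return [(s["id"], s["name"]) for s in resp.json()["results"]]

print(search_sounds("rain on a tin roof"))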

As for Freesound Labs, you can find them here.

‘Find the Birds’ mobile game has a British Columbia (Canada) location

Adam Dhalla in a January 5, 2022 posting on the Nature Conservancy Canada blog announced a new location for a ‘Find the Birds’ game,

Since its launch six months ago …, with an initial Arizona simulated birding location, Find the Birds (a free educational mobile game about birds and conservation) now has over 7,000 players in 46 countries on six continents. In the game, players explore realistic habitats, find and take virtual photos of accurately animated local bird species and complete conservation quests. Thanks in a large part to the creative team at Thought Generation Society (the non-profit game production organization I’m working with), Find the Birds is a Canadian-made success story.

Going back nine months to an April 9, 2021 posting and the first ‘Find the Birds’ announcement by Adam Dhalla for the Nature Conservancy Canada blog,

It is not a stretch to say that our planet is in dire need of more conservationists, and environmentally minded people in general. Birds and birdwatching are gateways to introducing conservation and science to a new generation.

… it seems as though younger generations are often unaware of the amazing world in their backyard. They don’t hear the birdsong emanating from the trees during the morning chorus. …

This problem inspired my dad and me to come up with the original concept for Find the Birds, a free educational mobile game about birds and conservation. I was 10 at the time, and I discovered that I was usually the only kid out birdwatching. So we thought, why not bring the birds to them via the digital technology they are already immersed in?

Find the Birds reflects on the birding and conservation experience. Players travel the globe as an animated character on their smartphone or tablet and explore real-life, picturesque environments, finding different bird species. The unique element of this game is its attention to detail; everything in the game is based on science. …

Here’s a trailer for the game featuring its first location, Arizona,

Now back to Dhalla’s January 5, 2022 posting for more about the latest iteration of the game and other doings (Note: Links have been removed),

Recently, the British Columbia location was added, which features Sawmill Lake in the Okanagan Valley, Tofino on the coast and a journey in the Pacific Ocean. Some of the local bird species included are Steller’s jays (BC’s provincial bird), black oystercatchers and western meadowlarks. Conservation quests include placing nest boxes for northern saw-whet owls and cleaning up beach litter.

I’ve always loved Steller’s jays! We get a lot of them in our backyard. It’s a far lesser-known bird than the blue jay, so I wanted to give them some attention. That’s the terrific thing about being the co-creator of the game: I get to help choose the species, the quests — everything! So all the birds in the BC locations are some of my favourites.

The black oystercatcher is another underappreciated species. I’ve seen them along the coasts of BC, where they are relatively common. …

To gauge the game’s impact on conservation education, I recently conducted an online player survey. Of the 101 players who completed the survey, 71 per cent were in the 8–15 age group, which means I am reaching my peers. But 21 per cent were late teens and adults, so the game’s appeal is not limited to children. Fifty-one per cent were male and 49 per cent female: this equality is encouraging, as most games in general have a much smaller percentage of female players.

And the game is helping people connect with nature! Ninety-eight per cent of players said the game increased their appreciation of birds. …

As a result of the game’s reputation and the above data, I was invited to present my findings at the 2022 International Ornithological Congress. So, I will be traveling to Durban, South Africa, next August to spread the word on reaching and teaching a new generation of birders, ornithologists and conservationists. …

You can find the game here at FindtheBirds.com and you can find Thought Generation here.

For the curious, here’s a black oystercatcher caught in the act,

Black oystercatcher (Photo by Tracey Chen, CC BY-NC 4.0) [downloaded from https://www.natureconservancy.ca/en/blog/find-the-birds-british-columbia.html#.YdcjWSaIapr]

Science Policy 101 on January 13, 2022

It was a mysterious retweet from the Canadian Light Source (synchrotron) which led me to CAP_SAC (CAP being the Canadian Association of Physicists and SAC being Student Advisory Council) and their Science Policy 101 Panel,

The CAP Student Advisory Council is hosting a science policy 101 panel Thursday, January 13th at 15h00 EST [3 pm EST].  The (free) registration link can be found here.

What is science policy and how does it interact with the frontiers of physics research? What can you do at the undergraduate or graduate level in order to start contributing to the conversation? Our three panelists will talk about their experiences in science policy, how their backgrounds in physics and chemistry have helped or motivated them in their current careers, and give some tips on how to become involved.

Aimee Gunther is the current Deputy Director of the Quantum Sensors Challenge Program at the National Research Council of Canada. She was a Mitacs Canadian Science Policy Fellow and served as a scientific advisor on quantum topics for Defence Research and Development Canada, co-authoring and co-developing the Quantum Science and Technology Strategy for the Department of National Defence and the Canadian Armed Forces. Aimee received her PhD from the University of Waterloo in Quantum Information. Learn more about Aimee on LinkedIn.

Anh-Khoi Trinh currently sits on the board of directors of the Montreal-based, non-profit organization Science & Policy Exchange. Science & Policy Exchange aims to foster the student voice in evidence-based decision making and to bring together leading experts from academia, industry, and government to engage and inform students and the public on issues at the interface of science and policy. Anh-Khoi is currently doing a PhD in string theory and quantum gravity at McGill University. Learn more about Anh-Khoi on LinkedIn.

Monika Stolar is a co-founder of ElectSTEM, an evidence-based non-profit organization with the goal of engaging more scientists and engineers in politics. She also currently works as Simon Fraser University’s industry and research relations officer. Monika holds a PhD in organophosphorus materials from the University of Calgary and completed postdoctoral positions at York University and the Massachusetts Institute of Technology. Learn more about Monika on LinkedIn.

I haven’t come across Aimee Gunther or Anh-Khoi Trinh before but Monika Stolar has been mentioned here twice, in an August 16, 2021 posting about Elect STEM and their Periodically Political podcast and again in an August 30, 2021 posting about an upcoming federal election.

Science and stories: an online talk January 5, 2022 and a course starting on January 10, 2022

So far this year all I’ve been posting about are events and contests. Continuing on that theme, I have an event and, something new, a course.

Massey Dialogues on January 5, 2022, 1 – 2 pm PST

“The Art of Science-Telling: How Science Education Can Shape Society” is scheduled for today (Wednesday, January 5, 2022 at 1 pm PST or 4 pm EST). You can find the livestream here on YouTube,

Massey College

Join us for the first Massey Dialogues of 2022 from 4:00-5:00pm ET on the Art of Science-Telling: How Science Education Can Shape Society.

Farah Qaiser (Evidence for Democracy), Dr. Bonnie Schmidt (Let’s Talk Science) and Carolyn Tuohy (Senior Fellow) will discuss what nonprofits can do for science education and policy, moderated by Junior Fellow Keshna Sood.

The Dialogues are open to the public – we invite everyone to join and take part in what will be a very informative online discussion. Participants are invited to submit questions to the speakers in real time via the Chat function to the right of the screen.

——-

To ensure you never miss a Massey Event, subscribe to our YouTube channel: https://www.youtube.com/user/masseyco…

We also invite you to visit masseycollege.ca/calendar for upcoming events.

Follow us on social media:

twitter.com/masseycollege
instagram.com/massey_college
linkedin.com/school/massey-college
facebook.com/MasseyCollege

Support our work: masseycollege.ca/support-us

You can find out more about the Massey Dialogues here. As for the college, it’s affiliated with the University of Toronto as per the information on the College’s Governance webpage.

Simon Fraser University (SFU; Vancouver, Canada) and a science communication course

I stumbled across “Telling Science Stories” being offered for SFU’s Spring 2022 semester in my twitter feed. Apparently there’s still space for students in the course.

I was a little surprised by how hard it was to find basic information such as: when does the course start? Yes, I found that and more, here’s what I managed to dig up,

From the PUB 480/877 Telling Science Stories course description webpage,

In this hands-on course, students will learn the value of sharing research knowledge beyond the university walls, along with the skills necessary to become effective science storytellers.

Climate change, vaccines, artificial intelligence, genetic editing — these are just a few examples of the essential role scientific evidence can play in society. But connecting science and society is no simple task: it requires key publishing and communication skills, as well as an understanding of the values, goals, and needs of the publics who stand to benefit from this knowledge.

This course will provide students with core skills and knowledge needed to share compelling science stories with diverse audiences, in a variety of formats. Whether it’s through writing books, podcasting, or creating science art, students will learn why we communicate science, develop an understanding of the core principles of effective audience engagement, and gain skills in publishing professional science content for print, radio, and online formats. The instructor is herself a science writer and communicator; in addition, students will have the opportunity to learn from a wide range of guest lecturers, including authors, artists, podcasters, and more. While priority will be given to students enrolled in the Publishing Minor, this course is open to all students who are interested in the evolving relationship between science and society.

I’m not sure if an outsider (someone who’s not a member of the SFU student body) can attend but it doesn’t hurt to ask.

The course is being given by Alice Fleerackers, here’s more from her profile page on the ScholCommLab (Scholarly Communications Laboratory) website,

Alice Fleerackers is a researcher and lab manager at the ScholCommLab and a doctoral student at Simon Fraser University’s Interdisciplinary Studies program, where she works under the supervision of Dr. Juan Pablo Alperin to explore how health science is communicated online. Her doctoral research is supported by a Joseph-Armand Bombardier Canada Graduate Scholarship from SSHRC and a Michael Stevenson Graduate Scholarship from SFU.

In addition, Alice volunteers with a number of non-profit organizations in an effort to foster greater public understanding and engagement with science. She is a Research Officer at Art the Science, Academic Liaison of Science Borealis, Board Member of the Science Writers and Communicators of Canada (SWCC), and a member of the Scientific Committee for the Public Communication of Science and Technology Network (PCST). She is also a freelance health and science writer whose work has appeared in the Globe and Mail, National Post, and Nautilus, among other outlets. Find her on Twitter at @FleerackersA.

Logistics such as when and where the course is being held (from the course outline webpage),

Telling Science Stories

Class Number: 4706

Delivery Method: In Person

Course Times + Location: Tu, Th 10:30 AM – 12:20 PM
HCC 2540, Vancouver

Instructor: Alice Fleerackers
afleerac@sfu.ca

According to the Spring 2022 Calendar Academic Dates webpage, the course starts on Monday, January 10, 2022 and I believe the room number (HCC2540) means the course will be held at SFU’s downtown Vancouver site at Harbour Centre, 515 West Hastings Street.

Given that SFU claims to be “Canada’s leading engaged university,” they do a remarkably poor job of actually engaging with anyone who’s not a member of the community, i.e., an outsider.

Science Says 2022 SciArt Contest (Jan. 3 – 31, 2022) for California (US) residents

Science Says is affiliated with the University of California at Davis (UC Davis). Here’s a little more about the UC Davis group from the Science Says homepage,

We are a team of friendly neighborhood scientists passionate about making science accessible to the general public. We aim to cultivate a community of science communicators at UC Davis dedicated to making scientific research accessible, relevant, and interesting to everyone. 

As for the contest, here’s more from the 2022 Science Art Contest webpage,

Jan 3-31, 2022 @ 12:00am – 11:59pm

We want to feature your science art in our second annual science art competition! The intersection of science and art offers a unique opportunity for creative science communication.

To participate in our contest you must:

1. Submit one piece of work considered artistic and creative: beautiful microscopy, field photography, paintings, crafts, etc.

2. The work must be shareable on our social media platforms. We encourage you to include your handle or name in the submitted image.

3. You must live within California to be considered for prizes.

You may compete in one of three categories: UC Davis affiliate (student, staff, faculty), the local Davis/Sacramento area or California. *If out of state, you can submit your work for honorable mention to be featured on our social media and news release, although you can’t be considered for prizes.

Winners will be determined by popular vote via a Google Form offered through our February newsletter, social media and website. Prizes vary depending on the contest selected. For entrants in either the UC Davis affiliate contest or local Davis/Sacramento contest, first prize will receive a cash prize of $75 and second place will receive a cash prize of $50. For entrants in the California contest, first place will receive a cash prize of $50.

Submit Here

Submissions open the first week of January and close on January 31, 2022. Voting begins February 2, 2022 and ends February 16, 2022. Winners will be announced by social media and a special news release on our website and contacted via email on February 23, 2022. Prizes will be awarded by March 4, 2022.

H/t to Raymond K. Nakamura for his retweet of the competition announcement by the Science Says team on Twitter.

Art/Sci or SciArt?

It depends on who’s talking. An artist will say art/sci or art/science and a scientist will say sciart. The focus, or pride of place, of course, goes to the speaker’s primary interest.