Mixing the unmixable for all new nanoparticles

This news comes out of the University of Maryland, and the discovery could lead to nanoparticles that have never before been imagined. From a March 29, 2018 news item on ScienceDaily,

Making a giant leap in the ‘tiny’ field of nanoscience, a multi-institutional team of researchers is the first to create nanoscale particles composed of up to eight distinct elements generally known to be immiscible, or incapable of being mixed or blended together. The blending of multiple, unmixable elements into a unified, homogenous nanostructure, called a high entropy alloy nanoparticle, greatly expands the landscape of nanomaterials — and what we can do with them.

This research makes a significant advance on previous efforts, which have typically produced nanoparticles limited to only three different elements and to structures that do not mix evenly. Essentially, it is extremely difficult to squeeze and blend different elements into individual particles at the nanoscale. The team, which includes lead researchers at University of Maryland, College Park (UMD)’s A. James Clark School of Engineering, published a peer-reviewed paper based on the research; the paper was featured on the March 30 [2018] cover of Science.

A March 29, 2018 University of Maryland press release (also on EurekAlert), which originated the news item, delves further (Note: Links have been removed),

“Imagine the elements that combine to make nanoparticles as Lego building blocks. If you have only one to three colors and sizes, then you are limited by what combinations you can use and what structures you can assemble,” explains Liangbing Hu, associate professor of materials science and engineering at UMD and one of the corresponding authors of the paper. “What our team has done is essentially enlarged the toy chest in nanoparticle synthesis; now, we are able to build nanomaterials with nearly all metallic and semiconductor elements.”

The researchers say this advance in nanoscience opens vast opportunities for a wide range of applications that includes catalysis (the acceleration of a chemical reaction by a catalyst), energy storage (batteries or supercapacitors), and bio/plasmonic imaging, among others.

To create the high entropy alloy nanoparticles, the researchers employed a two-step method of flash heating followed by flash cooling. Metallic elements such as platinum, nickel, iron, cobalt, gold, copper, and others were exposed to a rapid thermal shock of approximately 3,000 degrees Fahrenheit, or about half the temperature of the sun, for 0.055 seconds. The extremely high temperature resulted in uniform mixtures of the multiple elements. The subsequent rapid cooling (more than 100,000 degrees Fahrenheit per second) stabilized the newly mixed elements into the uniform nanomaterial.
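For readers who want the quoted figures in SI units, here is a quick back-of-the-envelope conversion. This is only a sketch: the 300 K final temperature and the constant-rate cooling assumption are mine, not from the paper.

```python
def f_to_k(temp_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

shock_temp_k = f_to_k(3000.0)                 # quoted thermal-shock temperature
cooling_rate_k_per_s = 100_000.0 * 5.0 / 9.0  # quoted cooling rate, in K/s

# Rough time to cool back to room temperature (~300 K), assuming a constant rate
cooling_time_s = (shock_temp_k - 300.0) / cooling_rate_k_per_s

print(round(shock_temp_k), round(cooling_time_s, 3))  # ~1922 K, ~0.029 s
```

So the 0.055-second heat pulse and the estimated sub-0.03-second quench are on comparable timescales, which is consistent with the press release’s description of a rapid shock-and-freeze process.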

“Our method is simple, but one that nobody else has applied to the creation of nanoparticles. By using a physical science approach, rather than a traditional chemistry approach, we have achieved something unprecedented,” says Yonggang Yao, a Ph.D. student at UMD and one of the lead authors of the paper.

To demonstrate one potential use of the nanoparticles, the research team used them as advanced catalysts for ammonia oxidation, which is a key step in the production of nitric acid (a liquid acid that is used in the production of ammonium nitrate for fertilizers, making plastics, and in the manufacturing of dyes). They were able to achieve 100 percent oxidation of ammonia and 99 percent selectivity toward desired products with the high entropy alloy nanoparticles, proving their ability as highly efficient catalysts.
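The two performance numbers quoted are standard catalysis metrics. As a hedged illustration only (the mole quantities below are made up, not taken from the paper), they are computed like this:

```python
def conversion(moles_fed, moles_unreacted):
    """Fraction of the reactant (here, ammonia) consumed by the reaction."""
    return (moles_fed - moles_unreacted) / moles_fed

def selectivity(moles_desired_product, moles_all_products):
    """Fraction of converted material that ends up as the desired product."""
    return moles_desired_product / moles_all_products

# Hypothetical run mirroring the quoted figures: all ammonia consumed,
# with 99% of it ending up in the desired oxidation products.
print(conversion(1.00, 0.00), selectivity(0.99, 1.00))  # 1.0 0.99
```

High conversion with low selectivity (lots of unwanted by-products) is common; achieving both at once is what marks these nanoparticles as efficient catalysts.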

Yao says another potential use of the nanoparticles as catalysts could be the generation of chemicals or fuels from carbon dioxide.

“The potential applications for high entropy alloy nanoparticles are not limited to the field of catalysis. With cross-discipline curiosity, the demonstrated applications of these particles will become even more widespread,” says Steven D. Lacey, a Ph.D. student at UMD and also one of the lead authors of the paper.

This research was performed through a multi-institutional collaboration of Prof. Liangbing Hu’s group at the University of Maryland, College Park; Prof. Reza Shahbazian-Yassar’s group at University of Illinois at Chicago; Prof. Ju Li’s group at the Massachusetts Institute of Technology; Prof. Chao Wang’s group at Johns Hopkins University; and Prof. Michael Zachariah’s group at the University of Maryland, College Park.

What outside experts are saying about this research:

“This is quite amazing; Dr. Hu creatively came up with this powerful technique, carbo-thermal shock synthesis, to produce high entropy alloys of up to eight different elements in a single nanoparticle. This is indeed unthinkable for bulk materials synthesis. This is yet another beautiful example of nanoscience!” says Peidong Yang, the S.K. and Angela Chan Distinguished Professor of Energy and professor of chemistry at the University of California, Berkeley and member of the American Academy of Arts and Sciences.

“This discovery opens many new directions. There are simulation opportunities to understand the electronic structure of the various compositions and phases that are important for the next generation of catalyst design. Also, finding correlations among synthesis routes, composition, and phase structure and performance enables a paradigm shift toward guided synthesis,” says George Crabtree, Argonne Distinguished Fellow and director of the Joint Center for Energy Storage Research at Argonne National Laboratory.

More from the research coauthors:

“Understanding the atomic order and crystalline structure in these multi-element nanoparticles reveals how the synthesis can be tuned to optimize their performance. It would be quite interesting to further explore the underlying atomistic mechanisms of the nucleation and growth of high entropy alloy nanoparticle,” says Reza Shahbazian-Yassar, associate professor at the University of Illinois at Chicago and a corresponding author of the paper.

“Carbon metabolism drives ‘living’ metal catalysts that frequently move around, split, or merge, resulting in a nanoparticle size distribution that’s far from the ordinary, and highly tunable,” says Ju Li, professor at the Massachusetts Institute of Technology and a corresponding author of the paper.

“This method enables new combinations of metals that do not exist in nature and do not otherwise go together. It enables robust tuning of the composition of catalytic materials to optimize the activity, selectivity, and stability, and the application will be very broad in energy conversions and chemical transformations,” says Chao Wang, assistant professor of chemical and biomolecular engineering at Johns Hopkins University and one of the study’s authors.

Here’s a link to and a citation for the paper,

Carbothermal shock synthesis of high-entropy-alloy nanoparticles by Yonggang Yao, Zhennan Huang, Pengfei Xie, Steven D. Lacey, Rohit Jiji Jacob, Hua Xie, Fengjuan Chen, Anmin Nie, Tiancheng Pu, Miles Rehwoldt, Daiwei Yu, Michael R. Zachariah, Chao Wang, Reza Shahbazian-Yassar, Ju Li, Liangbing Hu. Science 30 Mar 2018: Vol. 359, Issue 6383, pp. 1489-1494. DOI: 10.1126/science.aan5412

This paper is behind a paywall.

Removing more than 99% of crude oil from ‘produced’ water (well water)

Should you have an oil well nearby (see The Urban Oil Fields of Los Angeles, an August 28, 2014 photo essay by Alan Taylor for The Atlantic, for examples of oil wells in various municipalities and cities associated with LA), this news from Texas may interest you.

From an August 15, 2018 news item on Nanowerk,

Oil and water tend to separate, but in produced water from oil reservoirs they mix well enough to form stable oil-in-water emulsions that become a problem. Rice University scientists have developed a nanoparticle-based solution that reliably removes more than 99 percent of the emulsified oil that remains after other processing is done.
The Rice lab of chemical engineer Sibani Lisa Biswal made a magnetic nanoparticle compound that efficiently separates from produced water the crude oil droplets that have proven difficult to remove with current methods.

An August 15, 2018 Rice University news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Produced water [emphasis mine] comes from production wells along with oil. It often includes chemicals and surfactants pumped into a reservoir to push oil to the surface from tiny pores or cracks, either natural or fractured, deep underground. Under pressure and the presence of soapy surfactants, some of the oil and water form stable emulsions that cling together all the way back to the surface.

While methods exist to separate most of the oil from the production flow, engineers at Shell Global Solutions, which sponsored the project, told Biswal and her team that the last 5 percent of oil tends to remain stubbornly emulsified with little chance to be recovered.

“Injected chemicals and natural surfactants in crude oil can oftentimes chemically stabilize the oil-water interface, leading to small droplets of oil in water which are challenging to break up,” said Biswal, an associate professor of chemical and biomolecular engineering and of materials science and nanoengineering.

The Rice lab’s experience with magnetic particles and expertise in amines, courtesy of former postdoctoral researcher and lead author Qing Wang, led it to combine techniques. The researchers added amines to magnetic iron nanoparticles. Amines carry a positive charge that helps the nanoparticles find negatively charged oil droplets. Once they do, the nanoparticles bind the oil. Magnets are then able to pull the droplets and nanoparticles out of the solution.

“It’s often hard to design nanoparticles that don’t simply aggregate in the high salinities that are typically found in reservoir fluids, but these are quite stable in the produced water,” Biswal said.

The enhanced nanoparticles were tested on emulsions made in the lab with model oil as well as crude oil.

In both cases, researchers inserted nanoparticles into the emulsions, which they simply shook by hand and machine to break the oil-water bonds and create oil-nanoparticle bonds within minutes. Some of the oil floated to the top, while placing the test tube on a magnet pulled the infused nanoparticles to the bottom, leaving clear water in between.

Best of all, Biswal said, the nanoparticles can be washed with a solvent and reused while the oil can be recovered. The researchers detailed six successful charge-discharge cycles of their compound and suspect it will remain effective for many more.

She said her lab is designing a flow-through reactor to process produced water in bulk and automatically recycle the nanoparticles. That would be valuable for industry and for sites like offshore oil rigs, where treated water could be returned to the ocean.

It seems to me that ‘produced water’ is another term for polluted water. I guess it’s the reverse of Shakespeare’s “a rose by any other name would smell as sweet,” with polluted water by any other name seeming more palatable.

Here’s a link to and a citation for the paper,

Recyclable amine-functionalized magnetic nanoparticles for efficient demulsification of crude oil-in-water emulsions by Qing Wang, Maura C. Puerto, Sumedh Warudkar, Jack Buehler, and Sibani L. Biswal. Environ. Sci.: Water Res. Technol., 2018, Advance Article DOI: 10.1039/C8EW00188J First published on 15 Aug 2018

This paper is behind a paywall.

Rice has included this image amongst others in their news release,

Rice University engineers have developed magnetic nanoparticles that separate the last droplets of oil from produced water at wells. The particles draw in the bulk of the oil and are then attracted to the magnet, as demonstrated here. Photo by Jeff Fitlow

There’s also this video, which, in my book, borders on magical,

Killing bacteria on contact with dragonfly-inspired nanocoating

Scientists in Singapore were inspired by dragonflies and cicadas according to a March 28, 2018 news item on Nanowerk (Note: A link has been removed),

Studies have shown that the wings of dragonflies and cicadas prevent bacterial growth due to their natural structure. The surfaces of their wings are covered in nanopillars making them look like a bed of nails. When bacteria come into contact with these surfaces, their cell membranes get ripped apart immediately and they are killed. This inspired researchers from the Institute of Bioengineering and Nanotechnology (IBN) of A*STAR to invent an anti-bacterial nano coating for disinfecting frequently touched surfaces such as door handles, tables and lift buttons.

This technology will prove particularly useful in creating bacteria-free surfaces in places like hospitals and clinics, where sterilization is important to help control the spread of infections. Their new research was recently published in the journal Small (“ZnO Nanopillar Coated Surfaces with Substrate-Dependent Superbactericidal Property”).

Image 1: Zinc oxide nanopillars that looked like a bed of nails can kill a broad range of germs when used as a coating on frequently-touched surfaces. Courtesy: A*STAR

A March 28, 2018 Agency for Science Technology and Research (A*STAR) press release, which originated the news item, describes the work further,

80% of common infections are spread by hands, according to the B.C. [province of Canada] Centre for Disease Control1. Disinfecting commonly touched surfaces helps to reduce the spread of harmful germs by our hands, but would require manual and repeated disinfection because germs grow rapidly. Current disinfectants may also contain chemicals like triclosan, which are not recognized as safe and effective2, and may lead to bacterial resistance and environmental contamination if used extensively.

“There is an urgent need for a better way to disinfect surfaces without causing bacterial resistance or harm to the environment. This will help us to prevent the transmission of infectious diseases from contact with surfaces,” said IBN Executive Director Professor Jackie Y. Ying.

To tackle this problem, a team of researchers led by IBN Group Leader Dr Yugen Zhang created a novel nano coating that can spontaneously kill bacteria upon contact. Inspired by studies on dragonflies and cicadas, the IBN scientists grew nanopillars of zinc oxide, a compound known for its anti-bacterial and non-toxic properties. The zinc oxide nanopillars can kill a broad range of germs like E. coli and S. aureus that are commonly transmitted from surface contact.

Tests on ceramic, glass, titanium and zinc surfaces showed that the coating effectively killed up to 99.9% of germs found on the surfaces. As the bacteria are killed mechanically rather than chemically, the use of the nano coating would not contribute to environmental pollution. Also, the bacteria will not be able to develop resistance as they are completely destroyed when their cell walls are pierced by the nanopillars upon contact.
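The “up to 99.9% of germs” figure maps onto the log-reduction scale commonly used in disinfection testing. A minimal sketch of that relationship, using hypothetical colony counts (the CFU numbers below are mine, not from the study):

```python
import math

def log_reduction(initial_cfu, surviving_cfu):
    """Log10 drop in colony-forming units (CFU) after treatment."""
    return math.log10(initial_cfu / surviving_cfu)

# Killing 99.9% of bacteria leaves 1 in 1,000 alive: a 3-log reduction.
print(log_reduction(1_000_000, 1_000))  # -> 3.0
```

Each additional “log” is a tenfold drop in survivors, which is why disinfection claims are often quoted as 99.9% (3-log) or 99.99% (4-log) rather than as raw percentages.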

Further studies revealed that the nano coating demonstrated the best bacteria killing power when it is applied on zinc surfaces, compared with other surfaces. This is because the zinc oxide nanopillars catalyzed the release of superoxides (or reactive oxygen species), which could even kill nearby free floating bacteria that were not in direct contact with the surface. This super bacteria killing power from the combination of nanopillars and zinc broadens the scope of applications of the coating beyond hard surfaces.

Subsequently, the researchers studied the effect of placing a piece of zinc that had been coated with zinc oxide nanopillars into water containing E. coli. All the bacteria were killed, suggesting that this material could potentially be used for water purification.

Dr Zhang said, “Our nano coating is designed to disinfect surfaces in a novel yet practical way. This study demonstrated that our coating can effectively kill germs on different types of surfaces, and also in water. We were also able to achieve super bacteria killing power when the coating was used on zinc surfaces because of its dual mechanism of action. We hope to use this technology to create bacteria-free surfaces in a safe, inexpensive and effective manner, especially in places where germs tend to accumulate.”

IBN has recently received a grant from the National Research Foundation, Prime Minister’s Office, Singapore, under its Competitive Research Programme to further develop this coating technology in collaboration with Tan Tock Seng Hospital for commercial application over the next 5 years.

1 B.C. Centre for Disease Control

2 U.S. Food & Drug Administration

(I wasn’t expecting to see a reference to my home province [BC Centre for Disease Control].) Back to the usual, here’s a link to and a citation for the paper,

ZnO Nanopillar Coated Surfaces with Substrate‐Dependent Superbactericidal Property by Guangshun Yi, Yuan Yuan, Xiukai Li, Yugen Zhang. Small https://doi.org/10.1002/smll.201703159 First published: 22 February 2018

This paper is behind a paywall.

One final comment: this research reminds me of research into simulating shark skin because that too has bacteria-killing nanostructures. My latest about the sharkskin research is a September 18, 2014 posting.

Dalhousie University’s (Halifax, Nova Scotia, Canada) 200th anniversary with Axel Becke whose discoveries apply to nanotechnology and pharmaceuticals

To celebrate its 200th, Dalhousie University has developed the Dalhousie Originals 200th anniversary storytelling project featuring a number of prominent intellectuals and scientists associated with the university. Axel Becke, whose work has had an impact on nanotechnology and more, is one of them (from the Dalhousie Originals Axel Becke webpage),

Though he didn’t know it at the time, Axel Becke’s (1953 – present) career took a turn for the stratosphere during a 1991 lunch on the French Riviera with Dr. John Pople.

Over the previous decade, Dr. Becke had developed a formula to vastly improve the accuracy of chemical calculations using Density Functional Theory (DFT). But few were listening to him. Now, at a conference lunch, he had the ear of a true titan of theoretical chemistry and future Nobel Prize winner. And it didn’t take long for Dr. Pople to be convinced — certainly before the cheque arrived.

That conversation “turned the tide,” says Dr. Becke, and a year later Dr. Pople, who had developed the most ubiquitous computational chemistry code in the world, was using Dr. Becke’s ideas.

Today those ideas have made DFT the most-used computational method in electronic structure theory. Its applications allow us to do everything from developing nanotechnology to designing better drugs to making stronger concrete. “At a fundamental level, DFT can be used to describe all of chemistry, biochemistry, biology, nanosystems and materials,” Dr. Becke told Nature in 2014. “Everything in our terrestrial world depends on the motions of electrons — therefore, DFT literally underlies everything.”

No wonder, then, Dr. Becke is one of the most cited scientists in the world. Two of his papers landed on Nature’s 2014 list of the top 100 most-referenced science articles ever — one at number 25, the other at number eight, both with Becke as the sole author.

A big credit for his success goes to Russell Boyd, he says, a mentor and his supervisor during his postdoctoral fellowship at Dal from 1981 to 1984. Dr. Boyd was a young, talented theoretical chemist in his own right, and he was smart enough to let a 28-year-old Dr. Becke explore. “The three years that I was here, he basically just left me alone. And that’s where I came up with my ideas, and those ideas have served me for the rest of my career, and they serve me now.”

After a couple of decades as a chemistry professor at Queen’s University, Becke returned to Dal in 2006 to serve as the Killam Chair in Computational Science. From then until he retired from teaching and became Professor Emeritus in 2015, the accolades started pouring in: Fellow of the Royal Society of London (2006), Theoretical Chemistry Award of the American Chemical Society (2014), Medal of the Chemical Institute of Canada (2015), the Canada Council Killam Prize (2016) and Canada’s most prestigious science prize: the $1 million NSERC Herzberg Gold Medal (2015).

And to think it all hinged on a lunch beside the Mediterranean.

“When I look back on things, I’m enjoying the ride,” says Dr. Becke. “But if it hadn’t been for that conversation with Sir John Pople in 1991, it might not have happened. Of course we don’t know, but it might not have happened.”

There is a very short video,

You are seeing Axel Becke in the still, but it’s actor Brandon Liddard (BA’17 Theatre, Fountain School of Performing Arts, Dalhousie) in a re-enactment.


See Nobel prize winner’s (Kostya Novoselov) collaborative art/science video project on August 17, 2018 (Manchester, UK)

Dr. Konstantin (Kostya) Novoselov, one of the two scientists at the University of Manchester (UK) who were awarded Nobel prizes for their work with graphene, has embarked on an artistic career of sorts. From an August 8, 2018 news item on Nanowerk,

Nobel prize-winning physicist Sir Kostya Novoselov worked with artist Mary Griffiths to create Prospect Planes – a video artwork resulting from months of scientific and artistic research and experimentation using graphene.

Prospect Planes will be unveiled as part of The Hexagon Experiment series of events at the Great Exhibition of the North 2018, Newcastle, on August 17 [2018].

An August 9, 2018 University of Manchester press release, which originated the news item (differences in the dates are likely due to timezones), describes the art/science project in some detail,

The fascinating video art project aims to shed light on graphene’s unique qualities and potential.

Providing a fascinating insight into scientific research into graphene, Prospect Planes began with a graphite drawing by Griffiths, symbolising the chemical element carbon.

This was replicated in graphene by Sir Kostya Novoselov, creating a microscopic 2D graphene version of Griffiths’ drawing just one atom thick and invisible to the naked eye.

They then used Raman spectroscopy to record a molecular fingerprint of the graphene image, using that fingerprint to map a digital visual representation of graphene’s unique qualities.

The six-part Hexagon Experiment series was inspired by the creativity of the Friday evening sessions that led to the isolation of graphene at The University of Manchester by Novoselov and Sir Andre Geim.

Mary Griffiths has previously worked on other graphene artworks, including From Seathwaite, an installation in the National Graphene Institute, which depicts the story of graphite and graphene – its geography, geology and development in the North West of England.

Mary Griffiths, who is also Senior Curator at The Whitworth said: “Having previously worked alongside Kostya on other projects, I was aware of his passion for art. This has been a tremendously exciting and rewarding project, which will help people to better understand the unique qualities of graphene, while bringing Manchester’s passion for collaboration and creativity across the arts, industry and science to life.

“In many ways, the story of the scientific research which led to the creation of Prospect Planes is as exciting as the artwork itself. By taking my pencil drawing and patterning it in 2D with a single layer of graphene atoms, then creating an animated digital work of art from the graphene data, we hope to provoke further conversations about the nature of the first 2D material and the potential benefits and purposes of graphene.”

Sir Kostya Novoselov said: “In this particular collaboration with Mary, we merged two existing concepts to develop a new platform, which can result in multiple art projects. I really hope that we will continue working together to develop this platform even further.”

The Hexagon Experiment is taking place just a few months before the official launch of the £60m Graphene Engineering Innovation Centre, part of a major investment in 2D materials infrastructure across Manchester, cementing its reputation as Graphene City.

Prospect Planes was commissioned by Manchester-based creative music charity Brighter Sound.

The Hexagon Experiment is part of Both Sides Now – a three-year initiative to support, inspire and showcase women in music across the North of England, supported through Arts Council England’s Ambition for Excellence fund.

It took some searching but I’ve found the specific Hexagon event featuring Novoselov’s and Mary Griffiths’ work. From ‘The Hexagon Experiment #3: Adventures in Flatland’ webpage,

Lauren Laverne is joined by composer Sara Lowes and visual artist Mary Griffiths to discuss their experiments with music, art and science. Followed by a performance of Sara Lowes’ graphene-inspired composition Graphene Suite, and the unveiling of new graphene art by Mary Griffiths and Professor Kostya Novoselov. Alongside Andre Geim, Novoselov was awarded the Nobel Prize in Physics in 2010 for his groundbreaking experiments with graphene.

About The Hexagon Experiment

Music, art and science collide in an explosive celebration of women’s creativity

A six-part series of ‘Friday night experiments’ featuring live music, conversations and original commissions from pioneering women at the forefront of music, art and science.

Inspired by the creativity that led to the discovery of the Nobel-Prize winning ‘wonder material’ graphene, The Hexagon Experiment brings together the North’s most exciting musicians and scientists for six free events – from music made by robots to a spectacular tribute to an unsung heroine.

Presented by Brighter Sound and the National Graphene Institute at The University of Manchester, as part of the Great Exhibition of the North.

Buy tickets here.

One final comment, the title for the evening appears to have been inspired by a novella, from the Flatland Wikipedia entry (Note: Links have been removed),

Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London.

Written pseudonymously by “A Square”,[1] the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella’s more enduring contribution is its examination of dimensions.[2]

That’s all folks.

ETA August 14, 2018: Not quite all. Hopefully this attempt to add a few details for people not familiar with graphene won’t lead to increased confusion. The Hexagon event ‘Adventures in Flatland’, which includes Novoselov’s and Griffiths’ video project, features some wordplay based on graphene’s two-dimensional nature.

Extinction of Experience (EOE)

‘Extinction of experience’ is a bit of an attention getter isn’t it? Well, it worked for me when I first saw it and it seems particularly apt after putting together my August 9, 2018 posting about the 2018 SIGGRAPH conference, in particular, the ‘Previews’ where I featured a synthetic sound project. Here’s a little more about EOE from a July 3, 2018 news item on phys.org,

Opportunities for people to interact with nature have declined over the past century, as most people now live in urban areas and spend much of their time indoors. And while adults are not only experiencing nature less, they are also less likely to take their children outdoors and shape their attitudes toward nature, creating a negative cycle. In 1978, ecologist Robert Pyle coined the phrase “extinction of experience” (EOE) to describe this alienation from nature, and argued that this process is one of the greatest causes of the biodiversity crisis. Four decades later, the question arises: How can we break the cycle and begin to reverse EOE?

A July 3, 2018 North Carolina Museum of Natural Sciences news release, which originated the news item, delves further,

In citizen science programs, people participate in real research, helping scientists conduct studies on local, regional and even global scales. In a study released today, researchers from the North Carolina Museum of Natural Sciences, North Carolina State University, Rutgers University, and the Technion-Israel Institute of Technology propose nature-based citizen science as a means to reconnect people to nature. For people to take the next step and develop a desire to preserve nature, they need to not only go outdoors or learn about nature, but to develop emotional connections to and empathy for nature. Because citizen science programs usually involve data collection, they encourage participants to search for, observe and investigate natural elements around them. According to co-author Caren Cooper, assistant head of the Biodiversity Lab at the N.C. Museum of Natural Sciences, “Nature-based citizen science provides a structure and purpose that might help people notice nature around them and appreciate it in their daily lives.”

To search for evidence of these patterns across programs and the ability of citizen science to reach non-scientific audiences, the researchers studied the participants of citizen science programs. They reviewed 975 papers, analyzed results from studies that included participants’ motivations and/or outcomes in nature-oriented programs, and found that nature-based citizen science fosters cognitive and emotional aspects of experiences in nature, giving it the potential to reverse EOE.

The eMammal citizen science programs offer children opportunities to use technology to observe nature in new ways. Photo: Matt Zeher.

The N.C. Museum of Natural Sciences’ Stephanie Schuttler, lead author on the study and scientist on the eMammal citizen science camera trapping program, saw anecdotal evidence of this reversal through her work incorporating camera trap research into K-12 classrooms. “Teachers would tell me how excited and surprised students were about the wildlife in their school yards,” Schuttler says. “They had no idea their campus flourished with coyotes, foxes and deer.” The study Schuttler headed shows citizen science increased participants’ knowledge, skills, interest in and curiosity about nature, and even produced positive behavioral changes. For example, one study revealed that participants in the Garden Butterfly Watch program changed gardening practices to make their yards more hospitable to wildlife. Another study found that participants in the Coastal Observation and Seabird Survey Team program started cleaning up beaches during surveys, even though this was never suggested by the facilitators.

While these results are promising, the EOE study also revealed that this work has only just begun and that most programs do not reach audiences who are not already engaged in science or nature. Only 26 of the 975 papers evaluated participants’ motivations and/or outcomes, and only one of these papers studied children, the most important demographic in reversing EOE. “Many studies were full of amazing stories on how citizen science awakened participants to the nature around them, however, most did not study outcomes,” Schuttler notes. “To fully evaluate the ability for nature-based citizen science to affect people, we encourage citizen science programs to formally study their participants and not just study the system in question.”

Additionally, most citizen science programs attracted or even recruited environmentally mindful participants who likely already spend more time outside than the average person. “If we really want to reconnect people to nature, we need to preach beyond the choir, and attract people who are not already interested in science and/or nature,” Schuttler adds. And as co-author Assaf Shwartz of Technion-Israel Institute of Technology asserts, “The best way to avert the extinction of experience is to create meaningful experiences of nature in the places where we all live and work – cities. Participating in citizen science is an excellent way to achieve this goal, as participation can enhance the sense of commitment people have to protect nature.”

Luckily, some other factors appear to influence participants’ involvement in citizen science. Desire for wellbeing, stewardship and community may provide a gateway for people to participate, an important first step in connecting people to nature. Though nature-based citizen science programs provide opportunities for people to interact with nature, further research on the mechanisms that drive this relationship is needed to strengthen our understanding of various outcomes of citizen science.

And, because I love dragonflies,

Nature-based citizen science programs, like Dragonfly Pond Watch, offer participants opportunities to observe nature more closely. Credit: Lea Shell.

Here’s a link to and a citation for the paper,

Bridging the nature gap: can citizen science reverse the extinction of experience? by Stephanie G Schuttler, Amanda E Sorensen, Rebecca C Jordan, Caren Cooper, Assaf Shwartz. Frontiers in Ecology and the Environment. DOI: https://doi.org/10.1002/fee.1826 First published: 03 July 2018

This paper is behind a paywall.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.


For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.


I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
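The per-blink budget described above can be sketched as a simple clamp: however much correction the path planner wants, only a few degrees of rotation and a few centimetres of translation can be hidden in any single blink. This is a toy illustration, not the authors’ published algorithm; the thresholds are the figures quoted in the release.

```python
# Hypothetical per-blink limits, taken from the figures quoted above:
# roughly 5 degrees of rotation and 9 cm of translation go unnoticed.
MAX_ROT_DEG = 5.0
MAX_TRANS_M = 0.09

def redirect_on_blink(desired_rot_deg, desired_trans_m, blink_detected):
    """Return the (rotation, translation) to inject this frame.

    A sketch only: the remaining redirection is clamped to the per-blink
    thresholds and applied only while the user is functionally blind.
    """
    if not blink_detected:
        return 0.0, 0.0
    rot = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, desired_rot_deg))
    trans = max(-MAX_TRANS_M, min(MAX_TRANS_M, desired_trans_m))
    return rot, trans

# Example: the planner wants 12 degrees of correction; only 5 can be
# hidden in this blink, so the remainder waits for later blinks.
print(redirect_on_blink(12.0, 0.2, True))   # (5.0, 0.09)
print(redirect_on_blink(12.0, 0.2, False))  # (0.0, 0.0)
```

At 10 to 20 blinks per minute, even these small increments accumulate quickly, which is why the authors estimate a roughly 50 percent improvement over continuous-rotation techniques.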

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.



ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling as connected to the characters as to the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association of Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
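The blending step just described can be illustrated with a toy sketch (this is an illustrative assumption, not Google’s actual renderer, which blends per-ray using depth maps): for a given viewing direction, find the nearest captured camera positions on the sphere and weight them by angular proximity.

```python
import numpy as np

def blend_weights(view_dirs, query_dir, k=4):
    """Toy sketch of light field view blending, not Google's renderer:
    pick the k captured camera directions nearest to the query direction
    on the capture sphere and weight them by inverse angular distance."""
    dirs = np.asarray(view_dirs, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    q = np.asarray(query_dir, dtype=float)
    q = q / np.linalg.norm(q)
    ang = np.arccos(np.clip(dirs @ q, -1.0, 1.0))  # angle to each captured view
    nearest = np.argsort(ang)[:k]                  # k closest views
    w = 1.0 / (ang[nearest] + 1e-6)                # closer views dominate
    return nearest, w / w.sum()                    # normalized blend weights
```

In the real system this kind of interpolation happens per ray, guided by the depth maps, which is what makes the result look solid as the viewer moves their head.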

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
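The paragraphs above describe solving the acoustic wave equation in 3D with vibrating geometry as the source. A minimal 1D finite-difference version of that PDE (purely illustrative; the Stanford system couples object vibrations to a full 3D acoustic solver) looks like this:

```python
import numpy as np

def simulate_wave_1d(n=200, steps=300, c=1.0, dx=1.0, dt=0.5):
    """Toy 1D acoustic simulation via leapfrog finite differences.
    Illustrates the core PDE only: p_tt = c^2 * p_xx, with a single
    initial pressure pulse and fixed (p = 0) boundaries."""
    assert c * dt / dx <= 1.0, "CFL stability condition"
    p = np.zeros(n)
    p[n // 2] = 1.0          # initial pressure pulse at the midpoint
    p_prev = p.copy()        # zero initial velocity
    r2 = (c * dt / dx) ** 2
    for _ in range(steps):
        p_next = np.zeros(n)
        p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                        + r2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
        p_prev, p = p, p_next
    return p

wave = simulate_wave_1d()    # pressure field after 300 time steps
```

The pulse splits into two travelling waves that reflect off the boundaries, a small-scale analogue of the waves bending and bouncing between objects in the scene described above.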

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery, from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com (also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L'Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Scientists, outreach and Twitter research plus some tips from a tweeting scientist

I have two bits today and both concern science and Twitter.

Twitter science research

A doodle by Isabelle Côté to illustrate her recent study on the effectiveness of scientists using Twitter to share their research with the public. Credit: Isabelle Côté

I was quite curious about this research on scientists and their Twitter audiences coming from Simon Fraser University (SFU; Vancouver, Canada). From a July 11, 2018 SFU news release (also on EurekAlert),

Isabelle Côté is an SFU professor of marine ecology and conservation and an active science communicator whose prime social media platform is Twitter.

Côté, who has cultivated more than 5,800 followers since she began tweeting in 2012, recently became curious about who her followers are.

“I wanted to know if my followers are mainly scientists or non-scientists – in other words was I preaching to the choir or singing from the rooftops?” she says.

Côté and collaborator Emily Darling set out to find the answer by analyzing the active Twitter accounts of more than 100 ecology and evolutionary biology faculty members at 85 institutions across 11 countries.

Their methodology included categorizing followers as either “inreach” if they were academics, scientists and conservation agencies and donors; or “outreach” if they were science educators, journalists, the general public, politicians and government agencies.

Côté found that scientists with fewer than 1,000 followers primarily reach other scientists. However, scientists with more than 1,000 followers have more types of followers, including those in the “outreach” category.

Twitter and other forms of social media provide scientists with a potential way to share their research with the general public and, importantly, decision- and policy-makers. Côté says public pressure can be a pathway to drive change at a higher level. However, she notes that while social media is an asset, it is “not likely an effective replacement for the more direct science-to-policy outreach that many scientists are now engaging in, such as testifying in front of special governmental committees, directly contacting decision-makers, etc.”

Further, even with greater diversity and reach of followers, the authors concede there are still no guarantees that Twitter messages will be read or understood. Côté cites evidence that people selectively read what fits with their perception of the world, and that changing followers’ minds about deeply held beliefs is challenging.

“While Twitter is emerging as a medium of choice for scientists, studies have shown that less than 40 per cent of academic scientists use the platform,” says Côté.

“There’s clearly a lot of room for scientists to build a social media presence and increase their scientific outreach. Our results provide scientists with clear evidence that social media can be used as a first step to disseminate scientific messages well beyond the ivory tower.”
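The “inreach”/“outreach” split described in the news release can be sketched as a small follower classifier. This is a hypothetical illustration only; the category lists and function names are my own assumptions, not Côté and Darling’s actual coding scheme.

```python
# Hypothetical sketch of the inreach/outreach categorization described
# in the news release. The keyword lists are illustrative assumptions,
# not the authors' actual coding scheme.

INREACH = {"academic", "scientist", "conservation agency", "donor"}
OUTREACH = {"science educator", "journalist", "general public",
            "politician", "government agency"}

def categorize(follower_type: str) -> str:
    """Label a follower type as 'inreach', 'outreach', or 'unknown'."""
    t = follower_type.lower()
    if t in INREACH:
        return "inreach"
    if t in OUTREACH:
        return "outreach"
    return "unknown"

def audience_breakdown(follower_types):
    """Tally how many of a scientist's followers fall in each category."""
    counts = {"inreach": 0, "outreach": 0, "unknown": 0}
    for ft in follower_types:
        counts[categorize(ft)] += 1
    return counts

followers = ["scientist", "journalist", "academic", "general public",
             "politician", "scientist"]
print(audience_breakdown(followers))
# {'inreach': 3, 'outreach': 3, 'unknown': 0}
```

In this toy version, any follower type outside both lists falls into an “unknown” bucket, something the real study would have had to resolve case by case when coding follower accounts.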

Here’s a link to and a citation for the paper (my thoughts on the matter are after),

Scientists on Twitter: Preaching to the choir or singing from the rooftops? by Isabelle M. Côté and Emily S. Darling. Facets DOI: https://doi.org/10.1139/facets-2018-0002 Published Online 28 June 2018

This paper is in an open access journal.

Thoughts on the research

Neither of the researchers, Côté and Darling, appears to have any social science training, so where I’d ordinarily laud the researchers for their good work, I have to include extra kudos for taking on a type of research outside their usual domain of expertise.

If this sort of thing interests you and you have the time, I definitely recommend reading the paper (from the paper’s introduction; Note: Links have been removed),

Communication has always been an integral part of the scientific endeavour. In Victorian times, for example, prominent scientists such as Thomas H. Huxley and Louis Agassiz delivered public lectures that were printed, often verbatim, in newspapers and magazines (Weigold 2001), and Charles Darwin wrote his seminal book “On the origin of species” for a popular, non-specialist audience (Desmond and Moore 1991). In modern times, the pace of science communication has become immensely faster, information is conveyed in smaller units, and the modes of delivery are far more numerous. These three trends have culminated in the use of social media by scientists to share their research in accessible and relevant ways to potential audiences beyond their peers. The emphasis on accessibility and relevance aligns with calls for scientists to abandon jargon and to frame and share their science, especially in a “post-truth” world that can emphasize emotion over factual information (Nisbet and Mooney 2007; Bubela et al. 2009; Wilcox 2012; Lubchenco 2017).

The microblogging platform Twitter is emerging as a medium of choice for scientists (Collins et al. 2016), although it is still used by a minority (<40%) of academic faculty (Bart 2009; Noorden 2014). Twitter allows users to post short messages (originally up to 140 characters, increased to 280 characters since November 2017) that can be read by any other user. Users can elect to follow other users whose posts they are interested in, in which case they automatically see their followees’ tweets; conversely, users can be followed by other users, in which case their tweets can be seen by their followers. No permission is needed to follow a user, and reciprocation of following is not mandatory. Tweets can be categorized (with hashtags), repeated (retweeted), and shared via other social media platforms, which can exponentially amplify their spread and can offer links to websites, blogs, or scientific papers (Shiffman 2012).

There are scientific advantages to using digital communication technologies such as Twitter. Scientific users describe it as a means to stay abreast of new scientific literature, grant opportunities, and science policy, to promote their own published papers and exchange ideas, and to participate in conferences they cannot attend in person as “virtual delegates” (Bonetta 2009; Bik and Goldstein 2013; Parsons et al. 2014; Bombaci et al. 2016). Twitter can play a role in most parts of the life cycle of a scientific publication, from making connections with potential collaborators, to collecting data or finding data sources, to dissemination of the finished product (Darling et al. 2013; Choo et al. 2015). There are also some quantifiable benefits for scientists using social media. For example, papers that are tweeted about more often also accumulate more citations (Eysenbach 2011; Thelwall et al. 2013; Peoples et al. 2016), and the volume of tweets in the first week following publication correlates with the likelihood of a paper becoming highly cited (Eysenbach 2011), although such relationships are not always present (e.g., Haustein et al. 2014).

In addition to any academic benefits, scientists might adopt social media, and Twitter in particular, because of the potential to increase the reach of scientific messages and direct engagement with non-scientific audiences (Choo et al. 2015). This potential comes from the fact that Twitter leverages the power of weak ties, defined as low-investment social interactions that are not based on personal relationships (Granovetter 1973). On Twitter, follower–followee relationships are weak: users generally do not personally know the people they follow or the people who follow them, as their interactions are based mainly on message content. Nevertheless, by retweeting and sharing messages, weak ties can act as bridges across social, geographic, or cultural groups and contribute to a wide and rapid spread of information (Zhao et al. 2010; Ugander et al. 2012). The extent to which the messages of tweeting scientists benefit from the power of weak ties is unknown. Does Twitter provide a platform that allows scientists to simply promote their findings to other scientists within the ivory tower (i.e., “inreach”), or are tweeting scientists truly exploiting social media to potentially reach new audiences (“outreach”) (Bik et al. 2015; McClain and Neeley 2015; Fig. 1)?

Fig. 1. Conceptual depiction of inreach and outreach for Twitter communication by academic faculty. Left: If Twitter functions as an inreach tool, tweeting scientists might primarily reach only other scientists and perhaps, over time (arrow), some applied conservation and management science organizations. Right: If Twitter functions as an outreach tool, tweeting scientists might first reach other scientists, but over time (arrow) they will eventually attract members of the media, members of the public who are not scientists, and decision-makers (not necessarily in that order) as followers.

I’m glad to see this work but its use of language is not as precise in some places as it could be. They use the term ‘scientists’ throughout but their sample is made up of scientists identified as ecology and/or evolutionary biology (EEMB) researchers, as they briefly note in their Abstract and in the Methods section. With the constant use of the generic term, scientist, throughout most of the paper, taken in tandem with its use in the title, it’s easy to forget that this was a sample of a very specific population.

That the researchers’ sample of EEMB scientists is made up of those working at universities (academic scientists) is clear, and it presents an interesting problem. How much does it matter that these are academic scientists, both with regard to the research itself and with regard to perceptions about scientists? A sentence stating that the question is beyond the scope of their research might have been a good idea.

Impressively, Darling and Côté have reached past the English language community to include other language groups, “We considered as many non-English Twitter profiles as possible by including common translations of languages we were familiar with (i.e., French and Spanish: biologista, professeur, profesora, etc.) in our search strings; …”

I cannot emphasize enough how rare it is to see this attempt to reach out beyond the English language community. Yes!

Getting back to my concern about language, I would have used ‘suspect’ rather than ‘assume’ in this sentence from the paper’s Discussion, “We assume [emphasis mine] that the patterns we have uncovered for a sample of ecologists and evolutionary biologists in faculty positions can apply broadly across other academic disciplines.” I agree it’s quite likely but it’s a hypothesis/supposition and needs to be tested. For example, will this hold true if you examine social scientists (such as economists, linguists, political scientists, psychologists, …) or physicists or mathematicians or …?

Is this evidence of unconscious bias regarding what the researchers term ‘non-scientists’? From the paper’s Discussion (Note: Links have been removed),

Of course, high numbers, diversity, and reach of followers offer no guarantee that messages will be read or understood. There is evidence that people selectively read what fits with their perception of the world (e.g., Sears and Freedman 1967; McPherson et al. 2001; Sunstein 2001; Himelboim et al. 2013). Thus, non-scientists [emphases mine] who follow scientists on Twitter might already be positively inclined to consume scientific information. If this is true, then one could argue that Twitter therefore remains an echo chamber, but it is a much larger one than the usual readership of scientific publications. Moreover, it is difficult to gauge the level of understanding of scientific tweets. The brevity and fragmented nature of science tweets can lead to shallow processing and comprehension of the message (Jiang et al. 2016). One metric of the influence of tweets is the extent to which they are shared (i.e., retweeted). Twitter users retweet posts when they find them interesting (hence the posts were at least read, if not understood) and when they deem the source credible (Metaxas et al. 2015). To our knowledge, there are no data on how often tweets by scientists are reposted by different types of followers. Such information would provide further evidence for an outreach function of Twitter in science communication.

Yes, it’s true that high numbers, etc. do not guarantee your messages will be read or understood and that people do selectively choose what fits their perception of the world. However, that applies equally to scientists and non-scientists despite what the authors appear to be claiming. Also, their use of the term non-scientist is not clear to me. Is this a synonym for ‘general public’ or is it being applied to anyone who may not have an educational background in science but is designated in another category such as policy makers, science communicators, etc. in the research paper?

In any event, ‘policy makers’ absorb a great deal of the researchers’ attention, from the paper’s Discussion (Note: Links have been removed),

Under most theories of change that describe how science ultimately affects evidence-based policies, decision-makers are a crucial group that should be engaged by scientists (Smith et al. 2013). Policy changes can be effected either through direct application of research to policy or, more often, via pressure from public awareness, which can drive or be driven by research (Baron 2010; Phillis et al. 2013). Either pathway requires active engagement by scientists with society (Lubchenco 2017). It is arguably easier than ever for scientists to have access to decision- and policy-makers, as officials at all levels of government are increasingly using social media to connect with the public (e.g., Grant et al. 2010; Kapp et al. 2015). However, we found that decision-makers accounted for only ∼0.3% (n = 191 out of 64 666) of the followers of academic scientists (see also Bombaci et al. 2016 in relation to the audiences of conference tweeting). Moreover, decision-makers begin to follow scientists in greater numbers only once the latter have reached a certain level of “popularity” (i.e., ∼2200 followers; Table 2). The general concern about whether scientific tweets are actually read by followers applies even more strongly to decision-makers, as they are known to use Twitter largely as a broadcasting tool rather than for dialogue (Grant et al. 2010). Thus, social media is not likely an effective replacement for more direct science-to-policy outreach that many scientists are now engaging in, such as testifying in front of special governmental committees, directly contacting decision-makers, etc. However, by actively engaging a large Twitter following of non-scientists, scientists increase the odds of being followed by a decision-maker who might see their messages, as well as the odds of being identified as a potential expert for further contributions.

It may be due to the types of materials I tend to stumble across, but science outreach has usually been presented as largely an educational effort with the long-term goal of ensuring the public will continue to support science funding. This passage in the research paper suggests more immediate political and career interests.

Should scientists be on Twitter?

This paper might discourage someone whose primary goal is to reach policy makers via this social media platform, but the researchers seem to feel there is value in reaching out to a larger audience. While I’m not comfortable with how the researchers have generalized their results to the entire population of scientists, those results are intriguing.

This next bit features a scientist who, as it turns out, could be described as an EEMB (ecology and/or evolutionary biology) researcher.

How to tweet science

Stephen Heard wrote a July 31, 2018 posting on his Scientist Sees Squirrel blog about his Twitter feed,

At the 2018 conference of the Canadian Society for Ecology and Evolution, I was part of a lunchtime workshop, “The How and Why of Tweeting Science” – along with 5 friends.  Here I’ll share my slides and commentary.  I hope the other presenters will do the same, and I’ll link to them here as they become available.


I’ve been active on Twitter for about 4 years, but I’m very far from an expert, so my contribution to #CSEETweetShop was more to raise questions than to answer them.  What does it mean to “tweet to the science community”?  Here I’ll share some thoughts about Twitter audience, content, and voice.  These are, of course, my own (roughly formed) opinions, not some kind of wisdom on stone tablets, so take them with the requisite grain of salt!



Just as we do with blogging, we can draw a distinction between two audiences we might intend to reach via Twitter.  We might use Twitter for outreach, to talk to the general public – we could call this “science-communication tweeting”.  Or we could use Twitter for “inreach”, to talk to other scientists – which is what I’d call “science-community tweeting”.  But: for a couple of reasons, this distinction is not as clear as you might think.  Or at least, your intent to reach one audience or the other may not match the outcome.

There are some data on the topic of scientists’ Twitter audiences.  The data in the slide above come from a recent paper by Isabelle Coté and Emily Darling.  They’re for a sample of 110 faculty members in ecology and evolution, for whom audiences are broken down by their relationship (if any) to science.  The key result: most ecology and evolution faculty on Twitter have audiences dominated by other scientists (light blue), with the general public (dark blue) a significant but more modest chunk. There’s variation, some of which may well relate to the tweeters’ intended audiences – but we can draw two fairly clear conclusions:

  • Nearly all of us tweet mostly to the science community; but
  • Almost none of us tweets only to the science community (or for that matter only to the general public).

The same paper analyzes follower composition as a function of audience size, and these data suggest that one’s audience is likely to change as it builds.  Notice how the dark-blue “general public” line lags behind, then catches, the light-blue “other scientists” line*.  Earlier in your Twitter career, it’s likely that your audience will be even more strongly dominated by the science community – whether or not that’s what you intend.

In short: you probably can’t pick the audience you’re talking to; but you can pick the audience you’re talking for.  Given that, how might you use Twitter to talk for the science community?

I particularly like his constant questions about audience. He discusses other issues, such as content, but he always returns to the audience. Having worked in communication(s) and marketing, I have to applaud his focus on the audience. I can’t tell you how many times we’d answer the question of who our audience was and then never revisit it. (mea culpa) Heard’s insistence on constantly checking in and questioning your assumptions is excellent.

Seeing Côté’s and Darling’s paper cited in his presentation gives some idea of how closely he follows the thinking about science outreach in his field.

Both Côté’s and Darling’s academic paper and Heard’s posting make for accessible reading while offering valuable information.

The new knitting: electronics and batteries

Researchers from China have developed a new type of yarn for flexible electronics. A March 28, 2018 news item on Nanowerk announces the work (Note: A link has been removed),

When someone thinks about knitting, they usually don’t conjure up an image of sweaters and scarves made of yarn that can power watches and lights. But that’s just what one group is reporting in ACS Nano (“Waterproof and Tailorable Elastic Rechargeable Yarn Zinc Ion Batteries by a Cross-Linked Polyacrylamide Electrolyte”). They have developed a rechargeable yarn battery that is waterproof and flexible. It also can be cut into pieces and still work.

A March 28, 2018 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

Most people are familiar with smartwatches, but for wearable electronics to progress, scientists will need to overcome the challenge of creating a device that is deformable, durable, versatile and wearable while still holding and maintaining a charge. One dimensional fiber or yarn has shown promise, since it is tiny, flexible and lightweight. Previous studies have had some success combining one-dimensional fibers with flexible Zn-MnO2 batteries, but many of these lose charge capacity and are not rechargeable. So, Chunyi Zhi and colleagues wanted to develop a rechargeable yarn zinc-ion battery that would maintain its charge capacity, while being waterproof and flexible.

The group twisted carbon nanotube fibers into a yarn, then coated one piece of yarn with zinc to form an anode, and another with manganese dioxide (MnO2) to form a cathode. These two pieces were then twisted like a double helix and coated with a polyacrylamide electrolyte and encased in silicone. Upon testing, the yarn zinc-ion battery was stable, had a high charge capacity and was rechargeable and waterproof. In addition, the material could be knitted and stretched. It also could be cut into several pieces, each of which could power a watch. In a proof-of-concept demonstration, eight pieces of the cut yarn battery were woven into a long piece that could power a belt containing 100 light emitting diodes (known as LEDs) and an electroluminescent panel.

The authors acknowledge funding from the National Natural Science Foundation of China and the Research Grants Council of Hong Kong Joint Research Scheme, City University of Hong Kong and the Sichuan Provincial Department of Science & Technology.

Here’s an image the researchers have used to illustrate their work,


Courtesy: American Chemical Society

Here’s a link to and a citation for the paper,

Waterproof and Tailorable Elastic Rechargeable Yarn Zinc Ion Batteries by a Cross-Linked Polyacrylamide Electrolyte by Hongfei Li, Zhuoxin Liu, Guojin Liang, Yang Huang, Yan Huang, Minshen Zhu, Zengxia Pei, Qi Xue, Zijie Tang, Yukun Wang, Baohua Li, and Chunyi Zhi. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b09003 Publication Date (Web): March 28, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

3D-printed all-liquid electronics

Even after watching the video, I still don’t quite believe it. A March 28, 2018 news item on ScienceDaily announces the work,

Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab [or LBNL]) have developed a way to print 3-D structures composed entirely of liquids. Using a modified 3-D printer, they injected threads of water into silicone oil — sculpting tubes made of one liquid within another liquid.

They envision their all-liquid material could be used to construct liquid electronics that power flexible, stretchable devices. The scientists also foresee chemically tuning the tubes and flowing molecules through them, leading to new ways to separate molecules or precisely deliver nanoscale building blocks to under-construction compounds.

A March 26, 2018 Berkeley Lab news release (also on EurekAlert), which originated the news item, describes the work in more detail,

The researchers have printed threads of water between 10 microns and 1 millimeter in diameter, and in a variety of spiraling and branching shapes up to several meters in length. What’s more, the material can conform to its surroundings and repeatedly change shape.

“It’s a new class of material that can reconfigure itself, and it has the potential to be customized into liquid reaction vessels for many uses, from chemical synthesis to ion transport to catalysis,” said Tom Russell, a visiting faculty scientist in Berkeley Lab’s Materials Sciences Division. He developed the material with Joe Forth, a postdoctoral researcher in the Materials Sciences Division, as well as other scientists from Berkeley Lab and several other institutions. They report their research March 24 [2018] in the journal Advanced Materials.

The material owes its origins to two advances: learning how to create liquid tubes inside another liquid, and then automating the process.

For the first step, the scientists developed a way to sheathe tubes of water in a special nanoparticle-derived surfactant that locks the water in place. The surfactant, essentially soap, prevents the tubes from breaking up into droplets. Their surfactant is so good at its job, the scientists call it a nanoparticle supersoap.

The supersoap was achieved by dispersing gold nanoparticles into water and polymer ligands into oil. The gold nanoparticles and polymer ligands want to attach to each other, but they also want to remain in their respective water and oil mediums. The ligands were developed with help from Brett Helms at the Molecular Foundry, a DOE Office of Science User Facility located at Berkeley Lab.

In practice, soon after the water is injected into the oil, dozens of ligands in the oil attach to individual nanoparticles in the water, forming a nanoparticle supersoap. These supersoaps jam together and vitrify, like glass, which stabilizes the interface between oil and water and locks the liquid structures in position.

“This stability means we can stretch water into a tube, and it remains a tube. Or we can shape water into an ellipsoid, and it remains an ellipsoid,” said Russell. “We’ve used these nanoparticle supersoaps to print tubes of water that last for several months.”

Next came automation. Forth modified an off-the-shelf 3-D printer by removing the components designed to print plastic and replacing them with a syringe pump and needle that extrudes liquid. He then programmed the printer to insert the needle into the oil substrate and inject water in a predetermined pattern.

“We can squeeze liquid from a needle, and place threads of water anywhere we want in three dimensions,” said Forth. “We can also ping the material with an external force, which momentarily breaks the supersoap’s stability and changes the shape of the water threads. The structures are endlessly reconfigurable.”

This image illustrates how the water is printed,

These schematics show the printing of water in oil using a nanoparticle supersoap. Gold nanoparticles in the water combine with polymer ligands in the oil to form an elastic film (nanoparticle supersoap) at the interface, locking the structure in place. (Credit: Berkeley Lab)

Here’s a link to and a citation for the paper,

Reconfigurable Printed Liquids by Joe Forth, Xubo Liu, Jaffar Hasnain, Anju Toor, Karol Miszta, Shaowei Shi, Phillip L. Geissler, Todd Emrick, Brett A. Helms, Thomas P. Russell. Advanced Materials https://doi.org/10.1002/adma.201707603 First published: 24 March 2018

This paper is behind a paywall.