
Removing more than 99% of crude oil from ‘produced’ water (well water)

Should you have an oil well nearby (see “The Urban Oil Fields of Los Angeles,” an August 28, 2014 photo essay by Alan Taylor for The Atlantic, for examples of oil wells in various municipalities and cities associated with LA), this news from Texas may interest you.

From an August 15, 2018 news item on Nanowerk,

Oil and water tend to separate, but in produced water from oil reservoirs they mix well enough to form stable oil-in-water emulsions that become a problem. Rice University scientists have developed a nanoparticle-based solution that reliably removes more than 99 percent of the emulsified oil that remains after other processing is done.
The Rice lab of chemical engineer Sibani Lisa Biswal made a magnetic nanoparticle compound that efficiently separates from produced water the crude oil droplets that have proven difficult to remove with current methods.

An August 15, 2018 Rice University news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Produced water [emphasis mine] comes from production wells along with oil. It often includes chemicals and surfactants pumped into a reservoir to push oil to the surface from tiny pores or cracks, either natural or fractured, deep underground. Under pressure and in the presence of soapy surfactants, some of the oil and water form stable emulsions that cling together all the way back to the surface.

While methods exist to separate most of the oil from the production flow, engineers at Shell Global Solutions, which sponsored the project, told Biswal and her team that the last 5 percent of oil tends to remain stubbornly emulsified with little chance to be recovered.

“Injected chemicals and natural surfactants in crude oil can oftentimes chemically stabilize the oil-water interface, leading to small droplets of oil in water which are challenging to break up,” said Biswal, an associate professor of chemical and biomolecular engineering and of materials science and nanoengineering.

The Rice lab’s experience with magnetic particles and expertise in amines, courtesy of former postdoctoral researcher and lead author Qing Wang, led it to combine techniques. The researchers added amines to magnetic iron nanoparticles. Amines carry a positive charge that helps the nanoparticles find negatively charged oil droplets. Once they do, the nanoparticles bind the oil. Magnets are then able to pull the droplets and nanoparticles out of the solution.

“It’s often hard to design nanoparticles that don’t simply aggregate in the high salinities that are typically found in reservoir fluids, but these are quite stable in the produced water,” Biswal said.

The enhanced nanoparticles were tested on emulsions made in the lab with model oil as well as crude oil.

In both cases, researchers inserted nanoparticles into the emulsions, which they simply shook by hand and machine to break the oil-water bonds and create oil-nanoparticle bonds within minutes. Some of the oil floated to the top, while placing the test tube on a magnet pulled the oil-infused nanoparticles to the bottom, leaving clear water in between.

Best of all, Biswal said, the nanoparticles can be washed with a solvent and reused while the oil can be recovered. The researchers detailed six successful charge-discharge cycles of their compound and suspect it will remain effective for many more.

She said her lab is designing a flow-through reactor to process produced water in bulk and automatically recycle the nanoparticles. That would be valuable for industry and for sites like offshore oil rigs, where treated water could be returned to the ocean.
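Before moving on, here’s a quick back-of-the-envelope calculation (mine, not Rice’s or Shell’s) that puts the two figures quoted above together: if conventional separation leaves the last 5 percent of the oil emulsified and the nanoparticles capture 99 percent of that residue, very little oil is left behind,

# Back-of-the-envelope arithmetic using the figures quoted above
# (illustrative only; real produced-water compositions vary widely).
residual_fraction = 0.05    # oil still emulsified after conventional separation
removal_efficiency = 0.99   # fraction of that residue the nanoparticles capture

remaining = residual_fraction * (1 - removal_efficiency)
print(f"Oil left after nanoparticle treatment: {remaining:.4%} of the original oil")
# prints: Oil left after nanoparticle treatment: 0.0500% of the original oil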

It seems to me that ‘produced water’ is another term for polluted water. I guess it’s the reverse of Shakespeare’s “a rose by any other name would smell as sweet,” with polluted water by any other name seeming more palatable.

Here’s a link to and a citation for the paper,

Recyclable amine-functionalized magnetic nanoparticles for efficient demulsification of crude oil-in-water emulsions by Qing Wang, Maura C. Puerto, Sumedh Warudkar, Jack Buehler, and Sibani L. Biswal. Environ. Sci.: Water Res. Technol., 2018, Advance Article DOI: 10.1039/C8EW00188J First published on 15 Aug 2018

This paper is behind a paywall.

Rice has included this image amongst others in their news release,

Rice University engineers have developed magnetic nanoparticles that separate the last droplets of oil from produced water at wells. The particles draw in the bulk of the oil and are then attracted to the magnet, as demonstrated here. Photo by Jeff Fitlow

There’s also this video, which, in my book, borders on magical,

Extinction of Experience (EOE)

‘Extinction of experience’ is a bit of an attention getter, isn’t it? Well, it worked for me when I first saw it, and it seems particularly apt after putting together my August 9, 2018 posting about the 2018 SIGGRAPH conference, in particular the ‘Previews’, where I featured a synthetic sound project. Here’s a little more about EOE from a July 3, 2018 news item on phys.org,

Opportunities for people to interact with nature have declined over the past century, as most people now live in urban areas and spend much of their time indoors. And while adults are not only experiencing nature less, they are also less likely to take their children outdoors and shape their attitudes toward nature, creating a negative cycle. In 1978, ecologist Robert Pyle coined the phrase “extinction of experience” (EOE) to describe this alienation from nature, and argued that this process is one of the greatest causes of the biodiversity crisis. Four decades later, the question arises: How can we break the cycle and begin to reverse EOE?

A July 3, 2018 North Carolina Museum of Natural Sciences news release, which originated the news item, delves further,

In citizen science programs, people participate in real research, helping scientists conduct studies on local, regional and even global scales. In a study released today, researchers from the North Carolina Museum of Natural Sciences, North Carolina State University, Rutgers University, and the Technion-Israel Institute of Technology propose nature-based citizen science as a means to reconnect people to nature. For people to take the next step and develop a desire to preserve nature, they need to not only go outdoors or learn about nature, but to develop emotional connections to and empathy for nature. Because citizen science programs usually involve data collection, they encourage participants to search for, observe and investigate natural elements around them. According to co-author Caren Cooper, assistant head of the Biodiversity Lab at the N.C. Museum of Natural Sciences, “Nature-based citizen science provides a structure and purpose that might help people notice nature around them and appreciate it in their daily lives.”

To search for evidence of these patterns across programs and the ability of citizen science to reach non-scientific audiences, the researchers studied the participants of citizen science programs. They reviewed 975 papers, analyzed results from studies that included participants’ motivations and/or outcomes in nature-oriented programs, and found that nature-based citizen science fosters cognitive and emotional aspects of experiences in nature, giving it the potential to reverse EOE.

The eMammal citizen science programs offer children opportunities to use technology to observe nature in new ways. Photo: Matt Zeher.

The N.C. Museum of Natural Sciences’ Stephanie Schuttler, lead author on the study and scientist on the eMammal citizen science camera trapping program, saw anecdotal evidence of this reversal through her work incorporating camera trap research into K-12 classrooms. “Teachers would tell me how excited and surprised students were about the wildlife in their school yards,” Schuttler says. “They had no idea their campus flourished with coyotes, foxes and deer.” The study Schuttler headed shows citizen science increased participants’ knowledge, skills, interest in and curiosity about nature, and even produced positive behavioral changes. For example, one study revealed that participants in the Garden Butterfly Watch program changed gardening practices to make their yards more hospitable to wildlife. Another study found that participants in the Coastal Observation and Seabird Survey Team program started cleaning up beaches during surveys, even though this was never suggested by the facilitators.

While these results are promising, the EOE study also revealed that this work has only just begun and that most programs do not reach audiences who are not already engaged in science or nature. Only 26 of the 975 papers evaluated participants’ motivations and/or outcomes, and only one of these papers studied children, the most important demographic in reversing EOE. “Many studies were full of amazing stories on how citizen science awakened participants to the nature around them, however, most did not study outcomes,” Schuttler notes. “To fully evaluate the ability for nature-based citizen science to affect people, we encourage citizen science programs to formally study their participants and not just study the system in question.”

Additionally, most citizen science programs attracted or even recruited environmentally mindful participants who likely already spend more time outside than the average person. “If we really want to reconnect people to nature, we need to preach beyond the choir, and attract people who are not already interested in science and/or nature,” Schuttler adds. And as co-author Assaf Shwartz of Technion-Israel Institute of Technology asserts, “The best way to avert the extinction of experience is to create meaningful experiences of nature in the places where we all live and work – cities. Participating in citizen science is an excellent way to achieve this goal, as participation can enhance the sense of commitment people have to protect nature.”

Luckily, some other factors appear to influence participants’ involvement in citizen science. Desire for wellbeing, stewardship and community may provide a gateway for people to participate, an important first step in connecting people to nature. Though nature-based citizen science programs provide opportunities for people to interact with nature, further research on the mechanisms that drive this relationship is needed to strengthen our understanding of various outcomes of citizen science.

And, because I love dragonflies,

Nature-based citizen science programs, like Dragonfly Pond Watch, offer participants opportunities to observe nature more closely. Credit: Lea Shell.

Here’s a link to and a citation for the paper,

Bridging the nature gap: can citizen science reverse the extinction of experience? by Stephanie G Schuttler, Amanda E Sorensen, Rebecca C Jordan, Caren Cooper, Assaf Shwartz. Frontiers in Ecology and the Environment. DOI: https://doi.org/10.1002/fee.1826 First published: 03 July 2018

This paper is behind a paywall.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.


About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
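Before moving on to the image, here’s a rough sense of how blink-triggered redirection could look in code. This is a minimal sketch of my own, not the researchers’ implementation; the camera object and eye-tracker flag are hypothetical stand-ins, and the limits come from the rotation and translation figures reported above.

# Minimal sketch of blink-triggered redirected walking (not the authors' code).
# Limits follow the imperceptible ranges reported in the study.
MAX_ROTATION_DEG = 5.0    # up to 5 degrees of rotation per blink
MAX_TRANSLATION_M = 0.09  # up to 9 cm of translation per blink

def clamp(value, limit):
    return max(-limit, min(limit, value))

def on_eye_tracker_sample(eye_closed, wanted_rotation_deg, wanted_translation_m, camera):
    # `camera` is a hypothetical object with rotate()/translate() methods;
    # `eye_closed` would come from the eye tracker built into the headset.
    if not eye_closed:
        return  # redirect only while visual perception is suppressed
    camera.rotate(clamp(wanted_rotation_deg, MAX_ROTATION_DEG))
    camera.translate(clamp(wanted_translation_m, MAX_TRANSLATION_M))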

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
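For readers wondering what “blending between the thousands of light field images” might mean in practice, here is a toy sketch of one plausible approach (my own illustration, not Google’s pipeline): weight each captured image by how closely its capture direction matches the desired viewing ray, then mix the sampled colours with those weights.

import numpy as np

def blend_weights(view_dir, camera_dirs, sharpness=32.0):
    # view_dir: (3,) unit vector for the desired ray direction.
    # camera_dirs: (N, 3) unit vectors for the N captured images.
    # Higher `sharpness` favours the best-aligned cameras more strongly.
    cosines = camera_dirs @ view_dir               # alignment of each camera
    weights = np.exp(sharpness * (cosines - 1.0))  # peaked near the best match
    return weights / weights.sum()                 # normalized to sum to 1

# Example with three captured directions; a renderer would then compute
# blended_color = (weights[:, None] * sampled_colors).sum(axis=0)
cams = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7071, 0.7071, 0.0]])
weights = blend_weights(np.array([0.7071, 0.7071, 0.0]), cams)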

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
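The release mentions the partial differential equations that govern how sound propagates without spelling them out. For the curious, the core model is the acoustic wave equation, and a toy one-dimensional finite-difference version (my own illustration, nowhere near the scale or sophistication of the Stanford system) looks like this,

import numpy as np

# Toy 1D acoustic wave equation, d2p/dt2 = c^2 * d2p/dx2,
# stepped forward with a leapfrog finite-difference scheme.
c = 343.0           # speed of sound in air (m/s)
dx = 0.01           # grid spacing (m)
dt = 0.5 * dx / c   # time step chosen to satisfy the CFL stability condition
n = 400             # number of grid points

p_prev = np.zeros(n)
p = np.zeros(n)
p[n // 2] = 1.0     # initial pressure impulse (a tiny "clap") mid-domain

for _ in range(500):
    laplacian = np.zeros(n)
    laplacian[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]  # discrete second derivative
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * laplacian
    p_prev, p = p, p_next
# p now holds two pulses travelling outward from the centre of the domain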

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery, from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com (also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

The mystifying physics of paint-on semiconductors

I was not expecting a Canadian connection but it seems we are heavily invested in this research at the Georgia Institute of Technology (Georgia Tech), from a March 19, 2018 news item on ScienceDaily,

Some novel materials that sound too good to be true turn out to be true and good. An emergent class of semiconductors, which could affordably light up our future with nuanced colors emanating from lasers, lamps, and even window glass, could be the latest example.

These materials are very radiant, easy to process from solution, and energy-efficient. The nagging question of whether hybrid organic-inorganic perovskites (HOIPs) could really work just received a very affirmative answer in a new international study led by physical chemists at the Georgia Institute of Technology.

A March 19, 2018 Georgia Tech news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers observed in an HOIP a “richness” of semiconducting physics created by what could be described as electrons dancing on chemical underpinnings that wobble like a funhouse floor in an earthquake. That bucks conventional wisdom because established semiconductors rely upon rigidly stable chemical foundations, that is to say, quieter molecular frameworks, to produce the desired quantum properties.

“We don’t know yet how it works to have these stable quantum properties in this intense molecular motion,” said first author Felix Thouin, a graduate research assistant at Georgia Tech. “It defies physics models we have to try to explain it. It’s like we need some new physics.”

Quantum properties surprise

Their gyrating jumbles have made HOIPs challenging to examine, but the team of researchers from a total of five research institutes in four countries succeeded in measuring a prototypical HOIP and found its quantum properties on par with those of established, molecularly rigid semiconductors, many of which are graphene-based.

“The properties were at least as good as in those materials and may be even better,” said Carlos Silva, a professor in Georgia Tech’s School of Chemistry and Biochemistry. Not all semiconductors also absorb and emit light well, but HOIPs do, making them optoelectronic and thus potentially useful in lasers, LEDs, other lighting applications, and also in photovoltaics.

The lack of molecular-level rigidity in HOIPs also plays into them being more flexibly produced and applied.

Silva co-led the study with physicist Ajay Ram Srimath Kandada. Their team published the results of their study on two-dimensional HOIPs on March 8, 2018, in the journal Physical Review Materials. Their research was funded by EU Horizon 2020, the Natural Sciences and Engineering Research Council of Canada, the Fond Québécois pour la Recherche, the [National] Research Council of Canada, and the National Research Foundation of Singapore. [emphases mine]

The ‘solution solution’

Commonly, semiconducting properties arise from static crystalline lattices of neatly interconnected atoms. In silicon, for example, which is used in most commercial solar cells, they are interconnected silicon atoms. The same principle applies to graphene-like semiconductors.

“These lattices are structurally not very complex,” Silva said. “They’re only one atom thin, and they have strict two-dimensional properties, so they’re much more rigid.”

“You forcefully limit these systems to two dimensions,” said Srimath Kandada, who is a Marie Curie International Fellow at Georgia Tech and the Italian Institute of Technology. “The atoms are arranged in infinitely expansive, flat sheets, and then these very interesting and desirable optoelectronic properties emerge.”

These proven materials impress. So, why pursue HOIPs, except to explore their baffling physics? Because they may be more practical in important ways.

“One of the compelling advantages is that they’re all made using low-temperature processing from solutions,” Silva said. “It takes much less energy to make them.”

By contrast, graphene-based materials are produced at high temperatures in small amounts that can be tedious to work with. “With this stuff (HOIPs), you can make big batches in solution and coat a whole window with it if you want to,” Silva said.

Funhouse in an earthquake

For all an HOIP’s wobbling, it’s also a very ordered lattice with its own kind of rigidity, though less limiting than in the customary two-dimensional materials.

“It’s not just a single layer,” Srimath Kandada said. “There is a very specific perovskite-like geometry.” Perovskite refers to the shape of an HOIP’s crystal lattice, which is a layered scaffolding.

“The lattice self-assembles,” Srimath Kandada said, “and it does so in a three-dimensional stack made of layers of two-dimensional sheets. But HOIPs still preserve those desirable 2D quantum properties.”

Those sheets are held together by interspersed layers of another molecular structure that is a bit like a sheet of rubber bands. That makes the scaffolding wiggle like a funhouse floor.

“At room temperature, the molecules wiggle all over the place. That disrupts the lattice, which is where the electrons live. It’s really intense,” Silva said. “But surprisingly, the quantum properties are still really stable.”

Having quantum properties work at room temperature without requiring ultra-cooling is important for practical use as a semiconductor.

Going back to what HOIP stands for — hybrid organic-inorganic perovskites – this is how the experimental material fit into the HOIP chemical class: It was a hybrid of inorganic layers of lead iodide (the rigid part) separated by organic layers (the rubber band-like parts) of phenylethylammonium (chemical formula (PEA)2PbI4).

The lead in this prototypical material could be swapped out for a metal safer for humans to handle before the development of an applicable material.

Electron choreography

HOIPs are great semiconductors because their electrons do an acrobatic square dance.

Usually, electrons live in an orbit around the nucleus of an atom or are shared by atoms in a chemical bond. But HOIP chemical lattices, like all semiconductors, are configured to share electrons more broadly.

Energy levels in a system can free the electrons to run around and participate in things like the flow of electricity and heat. The orbits, which are then empty, are called electron holes, and they want the electrons back.

“The hole is thought of as a positive charge, and of course, the electron has a negative charge,” Silva said. “So, hole and electron attract each other.”

The electrons and holes race around each other like dance partners pairing up to form what physicists call an “exciton.” Excitons act and look a lot like particles themselves, though they’re not really particles.

Hopping biexciton light

In semiconductors, millions of excitons are correlated, or choreographed, with each other, which makes for desirable properties, when an energy source like electricity or laser light is applied. Additionally, excitons can pair up to form biexcitons, boosting the semiconductor’s energetic properties.

“In this material, we found that the biexciton binding energies were high,” Silva said. “That’s why we want to put this into lasers because the energy you input ends up to 80 or 90 percent as biexcitons.”

Biexcitons bump up energetically to absorb input energy. Then they contract energetically and pump out light. That would work not only in lasers but also in LEDs or other surfaces using the optoelectronic material.

“You can adjust the chemistry (of HOIPs) to control the width between biexciton states, and that controls the wavelength of the light given off,” Silva said. “And the adjustment can be very fine to give you any wavelength of light.”

That translates into any color of light the heart desires.
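For readers who want the conversion, the relationship between a photon’s energy E and its wavelength λ is standard physics (general background, not something from the news release): λ = hc/E, which works out to roughly 1240 eV·nm divided by E in electron volts. An emission at 2 eV therefore corresponds to red light near 620 nm, while 2.7 eV lands in the blue near 460 nm.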


Coauthors of this paper were Stefanie Neutzner and Annamaria Petrozza from the Italian Institute of Technology (IIT); Daniele Cortecchia from IIT and Nanyang Technological University (NTU), Singapore; Cesare Soci from the Centre for Disruptive Photonic Technologies, Singapore; Teddy Salim and Yeng Ming Lam from NTU; and Vlad Dragomir and Richard Leonelli from the University of Montreal. …

Three Canadian science funding agencies plus European and Singaporean science funding agencies but not one from the US? That’s a bit unusual for research undertaken at a US educational institution.

In any event, here’s a link to and a citation for the paper,

Stable biexcitons in two-dimensional metal-halide perovskites with strong dynamic lattice disorder by Félix Thouin, Stefanie Neutzner, Daniele Cortecchia, Vlad Alexandru Dragomir, Cesare Soci, Teddy Salim, Yeng Ming Lam, Richard Leonelli, Annamaria Petrozza, Ajay Ram Srimath Kandada, and Carlos Silva. Phys. Rev. Materials 2, 034001 – Published 8 March 2018

This paper is behind a paywall.

Better motor control for prosthetic hands (the illusion of feeling) and a discussion of superprostheses and reality

I have two bits about prosthetics, one which focuses on how most of us think of them and another about science fiction fantasies.

Better motor control

This new technology comes via a collaboration between the University of Alberta, the University of New Brunswick (UNB) and Ohio’s Cleveland Clinic, from a March 18, 2018 article by Nicole Ireland for the Canadian Broadcasting Corporation’s (CBC) news online,

Rob Anderson was fighting wildfires in Alberta when the helicopter he was in crashed into the side of a mountain. He survived, but lost his left arm and left leg.

More than 10 years after that accident, Anderson, now 39, says prosthetic limb technology has come a long way, and he feels fortunate to be using “top of the line stuff” to help him function as normally as possible. In fact, he continues to work for the Alberta government’s wildfire fighting service.

His powered prosthetic hand can do basic functions like opening and closing, but he doesn’t feel connected to it — and has limited ability to perform more intricate movements with it, such as shaking hands or holding a glass.

Anderson, who lives in Grande Prairie, Alta., compares its function to “doing things with a long pair of pliers.”

“There’s a disconnect between what you’re physically touching and what your body is doing,” he told CBC News.

Anderson is one of four Canadian participants in a study that suggests there’s a way to change that. …

Six people, all of whom had arm amputations from below the elbow or higher, took part in the research. It found that strategically placed vibrating “robots” made them “feel” the movements of their prosthetic hands, allowing them to grasp and grip objects with much more control and accuracy.

All of the participants had previously undergone a specialized surgical procedure called “targeted re-innervation.” The nerves that had connected to their hands before they were amputated were rewired to link instead to muscles (including the biceps and triceps) in their remaining upper arms and in their chests.

For the study, researchers placed the robotic devices on the skin over those re-innervated muscles and vibrated them as the participants opened, closed, grasped or pinched with their prosthetic hands.

While the vibration was turned on, the participants “felt” their artificial hands moving and could adjust their grip based on the sensation. …

I have an April 24, 2017 posting about a tetraplegic patient who had a number of electrodes implanted in his arms and hands, linked to a brain-machine interface, which allowed him to move his hands and arms; the implants were later removed. It is a different problem with a correspondingly different technological solution, but there does seem to be increased interest in implanting sensors and electrodes into the human body to increase mobility and/or sensation.

Anderson describes how it ‘feels’,

“It was kind of surreal,” Anderson said. “I could visually see the hand go out, I would touch something, I would squeeze it and my phantom hand felt like it was being closed and squeezing on something and it was sending the message back to my brain.

“It was a very strange sensation to actually be able to feel that feedback because I hadn’t in 10 years.”

The feeling of movement in the prosthetic hand is an illusion, the researchers say, since the vibration is actually happening to a muscle elsewhere in the body. But the sensation appeared to have a real effect on the participants.

“They were able to control their grasp function and how much they were opening the hand, to the same degree that someone with an intact hand would,” said study co-author Dr. Jacqueline Hebert, an associate professor in the Faculty of Rehabilitation Medicine at the University of Alberta.

Although the researchers are encouraged by the study findings, they acknowledge that there was a small number of participants, who all had access to the specialized re-innervation surgery to redirect the nerves from their amputated hands to other parts of their body.

The next step, they say, is to see if they can also simulate the feeling of movement in a broader range of people who have had other types of amputations, including legs, and have not had the re-innervation surgery.
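The articles describe the setup only at a high level, but the basic feedback loop is easy to picture. Here is a minimal sketch of one way such a loop could be wired up; it is entirely my own illustration with hypothetical device interfaces, and the speed-to-amplitude mapping is my guess rather than the study’s protocol.

import time

class VibrationActuator:
    # Hypothetical driver for a vibrotactile device placed on the skin
    # over a re-innervated muscle.
    def set_amplitude(self, value):
        pass  # a real driver would command the hardware here

def feedback_loop(read_grip_aperture, actuator, hz=100.0):
    # read_grip_aperture: hypothetical callable returning the prosthetic
    # hand's aperture, from 0.0 (closed) to 1.0 (open).
    period = 1.0 / hz
    previous = read_grip_aperture()
    while True:
        current = read_grip_aperture()
        speed = abs(current - previous) / period  # rate of opening or closing
        # Guessed mapping: vibrate harder the faster the hand moves, so the
        # wearer "feels" the motion of the artificial hand.
        actuator.set_amplitude(min(1.0, speed))
        previous = current
        time.sleep(period)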

Here’s a March 15, 2018  CBC New Brunswick radio interview about the work,

This is a bit longer than most of the embedded audio pieces that I have here but it’s worth it. Sadly, I can’t identify the interviewer, who did a very good job with Jon Sensinger, associate director of UNB’s Institute of Biomedical Engineering. One more thing: I noticed that the interviewer made no mention of the University of Alberta in her introduction or in the subsequent interview. I gather regionalism reigns supreme everywhere in Canada. Or, maybe she and Sensinger just forgot. It happens when you’re excited. Also, there were US institutions in Ohio and Virginia that participated in this work.

Here’s a link to and a citation for the team’s paper,

Illusory movement perception improves motor control for prosthetic hands by Paul D. Marasco, Jacqueline S. Hebert, Jon W. Sensinger, Courtney E. Shell, Jonathon S. Schofield, Zachary C. Thumser, Raviraj Nataraj, Dylan T. Beckler, Michael R. Dawson, Dan H. Blustein, Satinder Gill, Brett D. Mensh, Rafael Granja-Vazquez, Madeline D. Newcomb, Jason P. Carey, and Beth M. Orzell. Science Translational Medicine 14 Mar 2018: Vol. 10, Issue 432, eaao6990 DOI: 10.1126/scitranslmed.aao6990

This paper is open access.

Superprostheses and our science fiction future

A March 20, 2018 news item on phys.org features an essay about superprostheses and/or assistive devices,

Assistive devices may soon allow people to perform virtually superhuman feats. According to Robert Riener, however, there are more pressing goals than developing superhumans.

What had until recently been described as a futuristic vision has become a reality: the first self-declared “cyborgs” have had chips implanted in their bodies so that they can open doors and make cashless payments. The latest robotic hand prostheses succeed in performing all kinds of grips and tasks requiring dexterity. Parathletes fitted with running and spring prostheses compete – and win – against the best, non-impaired athletes. Then there are robotic pets and talking humanoid robots adding a bit of excitement to nursing homes.

Some media are even predicting that these high-tech creations will bring about forms of physiological augmentation overshadowing humans’ physical capabilities in ways never seen before. For instance, hearing aids are eventually expected to offer the ultimate in hearing; retinal implants will enable vision with a sharpness rivalling that of any eagle; motorised exoskeletons will transform soldiers into tireless fighting machines.

Visions of the future: the video game Deus Ex: Human Revolution highlights the emergence of physiological augmentation. (Visualisations: Square Enix) Courtesy: ETH Zurich

Professor Robert Riener uses the image above to illustrate the notion of superprostheses in his March 20, 2018 essay on the ETH Zurich website,

All of these prophecies notwithstanding, our robotic transformation into superheroes will not be happening in the immediate future and can still be filed under Hollywood hero myths. Compared to the technology available today, our bodies are a true marvel whose complexity and performance allow us to perform an extremely wide spectrum of tasks. Hundreds of efficient muscles, thousands of independently operating motor units along with millions of sensory receptors and billions of nerve cells allow us to perform delicate and detailed tasks with tweezers or lift heavy loads. Added to this, our musculoskeletal system is highly adaptable, can partly repair itself and requires only minimal energy in the form of relatively small amounts of food.

Machines will not be able to match this any time soon. Today’s assistive devices are still laboratory experiments or niche products designed for very specific tasks. Markus Rehm, an athlete with a disability, does not use his innovative spring prosthesis to go for walks or drive a car. Nor can today’s conventional arm prostheses help a person tie their shoes or button up their shirt. Lifting devices used for nursing care are not suitable for helping with personal hygiene tasks or in psychotherapy. And robotic pets quickly lose their charm the moment their batteries die.

Solving real problems

There is no denying that advances continue to be made. Since the scientific and industrial revolutions, we have become dependent on relentless progress and growth, and we can no longer separate today’s world from this development. There are, however, more pressing issues to be solved than creating superhumans.

On the one hand, engineers need to dedicate their efforts to solving the real problems of patients, the elderly and people with disabilities. Better technical solutions are needed to help them lead normal lives and assist them in their work. We need motorised prostheses that also work in the rain and wheelchairs that can manoeuvre even with snow on the ground. Talking robotic nurses also need to be understood by hard-of-hearing pensioners as well as offer simple and dependable interactivity. Their batteries need to last at least one full day to be recharged overnight.

In addition, financial resources need to be available so that all people have access to the latest technologies, such as a high-quality household prosthesis for the family man, an extra prosthesis for the avid athlete or a prosthesis for the pensioner. [emphasis mine]

Breaking down barriers

What is just as important as the ongoing development of prostheses and assistive devices is the ability to minimise or eliminate physical barriers. Where there are no stairs, there is no need for elaborate special solutions like stair lifts or stairclimbing wheelchairs – or, presumably, fully motorised exoskeletons.

Efforts also need to be made to transform the way society thinks about people with disabilities. More acknowledgement of the day-to-day challenges facing patients with disabilities is needed, which requires that people be confronted with the topic of disability when they are still children. Such projects must be promoted at home and in schools so that living with impairments can also attain a state of normality and all people can partake in society. It is therefore also necessary to break down mental barriers.

The road to a virtually superhuman existence is still long; anyone reading this text will not live to see it. In the meantime, the task at hand is to tackle the mundane challenges in order to simplify people’s daily lives in ways that do not require technology, that allow people to be active participants and improve their quality of life – instead of wasting our time getting caught up in cyborg euphoria and digital mania.

I’m struck by Riener’s reference to financial resources and access. Sensinger mentions financial resources in his CBC radio interview although his concern is with convincing funders that prostheses that mimic ‘feeling’ are needed.

I’m also struck by Riener’s discussion about nontechnological solutions for including people with all kinds of abilities and disabilities.

There was no grand plan for combining these two news bits; I just thought they were interesting together.

My name is Steve and I’m a sub auroral ion drift

Photo: The Aurora Named STEVE. Courtesy: NASA Goddard

That stunning image is one of a series, many of which were taken by amateur photographers, as noted in a March 14, 2018 US National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center news release (also on EurekAlert) by Kasha Patel about how STEVE was discovered,

Notanee Bourassa knew that what he was seeing in the night sky was not normal. Bourassa, an IT technician in Regina, Canada, trekked outside of his home on July 25, 2016, around midnight with his two younger children to show them a beautiful moving light display in the sky — an aurora borealis. He often sky gazes until the early hours of the morning to photograph the aurora with his Nikon camera, but this was his first expedition with his children. When a thin purple ribbon of light appeared and started glowing, Bourassa immediately snapped pictures until the light particles disappeared 20 minutes later. Having watched the northern lights for almost 30 years since he was a teenager, he knew this wasn’t an aurora. It was something else.

From 2015 to 2016, citizen scientists — people like Bourassa who are excited about a science field but don’t necessarily have a formal educational background — shared 30 reports of these mysterious lights in online forums and with a team of scientists that run a project called Aurorasaurus. The citizen science project, funded by NASA and the National Science Foundation, tracks the aurora borealis through user-submitted reports and tweets.

The Aurorasaurus team, led by Liz MacDonald, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, conferred to determine the identity of this mysterious phenomenon. MacDonald and her colleague Eric Donovan at the University of Calgary in Canada talked with the main contributors of these images, amateur photographers in a Facebook group called Alberta Aurora Chasers, which included Bourassa and lead administrator Chris Ratzlaff. Ratzlaff gave the phenomenon a fun, new name, Steve, and it stuck.

But people still didn’t know what it was.

Scientists’ understanding of Steve changed that night Bourassa snapped his pictures. Bourassa wasn’t the only one observing Steve. Ground-based cameras called all-sky cameras, run by the University of Calgary and University of California, Berkeley, took pictures of large areas of the sky and captured Steve and the auroral display far to the north. From space, ESA’s (the European Space Agency) Swarm satellite just happened to be passing over the exact area at the same time and documented Steve.

For the first time, scientists had ground and satellite views of Steve. Scientists have now learned, despite its ordinary name, that Steve may be an extraordinary puzzle piece in painting a better picture of how Earth’s magnetic fields function and interact with charged particles in space. The findings are published in a study released today in Science Advances.

“This is a light display that we can observe over thousands of kilometers from the ground,” said MacDonald. “It corresponds to something happening way out in space. Gathering more data points on STEVE will help us understand more about its behavior and its influence on space weather.”

The study highlights one key quality of Steve: Steve is not a normal aurora. Auroras occur globally in an oval shape, last hours and appear primarily in greens, blues and reds. Citizen science reports showed Steve is purple with a green picket fence structure that waves. It is a line with a beginning and end. People have observed Steve for 20 minutes to 1 hour before it disappears.

If anything, auroras and Steve are different flavors of an ice cream, said MacDonald. They are both created in generally the same way: Charged particles from the Sun interact with Earth’s magnetic field lines.

The uniqueness of Steve is in the details. While Steve goes through the same large-scale creation process as an aurora, it travels along different magnetic field lines than the aurora. All-sky cameras showed that Steve appears at much lower latitudes. That means the charged particles that create Steve connect to magnetic field lines that are closer to Earth’s equator, hence why Steve is often seen in southern Canada.

Perhaps the biggest surprise about Steve appeared in the satellite data. The data showed that Steve comprises a fast moving stream of extremely hot particles called a sub auroral ion drift, or SAID. Scientists have studied SAIDs since the 1970s but never knew there was an accompanying visual effect. The Swarm satellite recorded information on the charged particles’ speeds and temperatures, but does not have an imager aboard.

“People have studied a lot of SAIDs, but we never knew it had a visible light. Now our cameras are sensitive enough to pick it up and people’s eyes and intellect were critical in noticing its importance,” said Donovan, a co-author of the study. Donovan led the all-sky camera network and his Calgary colleagues lead the electric field instruments on the Swarm satellite.

Steve is an important discovery because of its location in the sub auroral zone, an area of lower latitude than where most auroras appear that is not well researched. For one, with this discovery, scientists now know there are unknown chemical processes taking place in the sub auroral zone that can lead to this light emission.

Second, Steve consistently appears in the presence of auroras, which usually occur at a higher latitude area called the auroral zone. That means there is something happening in near-Earth space that leads to both an aurora and Steve. Steve might be the only visual clue that exists to show a chemical or physical connection between the higher latitude auroral zone and lower latitude sub auroral zone, said MacDonald.

“Steve can help us understand how the chemical and physical processes in Earth’s upper atmosphere can sometimes have local noticeable effects in lower parts of Earth’s atmosphere,” said MacDonald. “This provides good insight on how Earth’s system works as a whole.”

The team can learn a lot about Steve with additional ground and satellite reports, but recording Steve from the ground and space simultaneously is a rare occurrence. Each Swarm satellite orbits Earth every 90 minutes and Steve only lasts up to an hour in a specific area. If the satellite misses Steve as it circles Earth, Steve will probably be gone by the time that same satellite crosses the spot again.

In the end, capturing Steve becomes a game of perseverance and probability.

“It is my hope that with our timely reporting of sightings, researchers can study the data so we can together unravel the mystery of Steve’s origin, creation, physics and sporadic nature,” said Bourassa. “This is exciting because the more I learn about it, the more questions I have.”

As for the name “Steve” given by the citizen scientists? The team is keeping it as an homage to its initial name and discoverers. But now it is STEVE, short for Strong Thermal Emission Velocity Enhancement.

Other collaborators on this work are: the University of Calgary, New Mexico Consortium, Boston University, Lancaster University, Athabasca University, Los Alamos National Laboratory and the Alberta Aurora Chasers Facebook group.

If you live in an area where you may see STEVE or an aurora, submit your pictures and reports to Aurorasaurus through aurorasaurus.org or the free iOS and Android mobile apps. To learn how to spot STEVE, click here.
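Before getting to the video, an aside from me: the 90-minute orbit and the up-to-one-hour event lifetime quoted above make it easy to see why simultaneous ground and satellite captures are rare. Here is a rough, illustrative Python simulation of that ‘game of perseverance and probability’; the 5 percent chance that any given orbit passes within viewing range of an event is my own invented figure, not something from the study,

import random

ORBIT_MINUTES = 90        # Swarm orbital period, from the news release
EVENT_MINUTES = 60        # STEVE lasts up to an hour (20 to 60 minutes reported)
PASS_PROBABILITY = 0.05   # assumed chance a given orbit crosses the event's location

def fraction_captured(n_events: int = 100_000, seed: int = 1) -> float:
    """Estimate the fraction of STEVE events caught by at least one satellite pass."""
    rng = random.Random(seed)
    captured = 0
    for _ in range(n_events):
        # Random timing of the first pass relative to the event's onset.
        t = rng.uniform(0, ORBIT_MINUTES)
        while t < EVENT_MINUTES:   # only passes that occur while the event is visible
            if rng.random() < PASS_PROBABILITY:
                captured += 1
                break
            t += ORBIT_MINUTES
    return captured / n_events

print(f"Events caught by the satellite: {fraction_captured():.1%}")

With those made-up numbers, only a few percent of events get a usable satellite pass, which squares nicely with the release’s point about perseverance and probability.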

There is a video with MacDonald describing the work and featuring more images,

Katherine Kornei’s March 14, 2018 article for sciencemag.org adds more detail about the work,

Citizen scientists first began posting about Steve on social media several years ago. Across New Zealand, Canada, the United States, and the United Kingdom, they reported an unusual sight in the night sky: a purplish line that arced across the heavens for about an hour at a time, visible at lower latitudes than classical aurorae, mostly in the spring and fall. … “It’s similar to a contrail but doesn’t disperse,” says Notanee Bourassa, an aurora photographer in Saskatchewan province in Canada [Regina as mentioned in the news release is the capital of the province of Saskatchewan].

Traditional aurorae are often green, because oxygen atoms present in Earth’s atmosphere emit that color light when they’re bombarded by charged particles trapped in Earth’s magnetic field. They also appear as a diffuse glow—rather than a distinct line—on the northern or southern horizon. Without a scientific theory to explain the new sight, a group of citizen scientists led by aurora enthusiast Chris Ratzlaff of Canada’s Alberta province [usually referred to as Canada’s province of Alberta or simply, the province of Alberta] playfully dubbed it Steve, after a line in the 2006 children’s movie Over the Hedge.

Aurorae have been studied for decades, but people may have missed Steve because their cameras weren’t sensitive enough, says Elizabeth MacDonald, a space physicist at NASA Goddard Space Flight Center in Greenbelt, Maryland, and leader of the new research. MacDonald and her team have used data from a European satellite called Swarm-A to study Steve in its native environment, about 200 kilometers up in the atmosphere. Swarm-A’s instruments revealed that the charged particles in Steve had a temperature of about 6000°C, “impressively hot” compared with the nearby atmosphere, MacDonald says. And those ions were flowing from east to west at nearly 6 kilometers per second, …
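Those last two figures invite a little perspective arithmetic of my own (the roughly 7.6 kilometres per second orbital speed for a satellite a few hundred kilometres up is a standard approximation, not a number from the article),

# Putting the quoted Swarm-A measurements in perspective.
ION_DRIFT_KM_S = 6.0    # east-to-west ion flow speed reported for Steve
LEO_SPEED_KM_S = 7.6    # approximate low-Earth-orbit speed (my figure, not the article's)
SPAN_KM = 1000.0        # Steve arcs over distances on this order

print(f"Ion drift is {ION_DRIFT_KM_S / LEO_SPEED_KM_S:.0%} of low-Earth-orbit speed")
minutes = SPAN_KM / ION_DRIFT_KM_S / 60
print(f"Time for the ions to cross {SPAN_KM:.0f} km: {minutes:.1f} minutes")

In other words, the charged particles inside Steve are streaming along at nearly the speed of the satellite observing them.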

Here’s a link to and a citation for the paper,

New science in plain sight: Citizen scientists lead to the discovery of optical structure in the upper atmosphere by Elizabeth A. MacDonald, Eric Donovan, Yukitoshi Nishimura, Nathan A. Case, D. Megan Gillies, Bea Gallardo-Lacourt, William E. Archer, Emma L. Spanswick, Notanee Bourassa, Martin Connors, Matthew Heavner, Brian Jackel, Burcu Kosar, David J. Knudsen, Chris Ratzlaff, and Ian Schofield. Science Advances 14 Mar 2018: Vol. 4, no. 3, eaaq0030 DOI: 10.1126/sciadv.aaq0030

This paper is open access. You’ll note that Notanee Bourassa is listed as an author. For more about Bourassa, there’s his Twitter feed (@DJHardwired) and his YouTube channel. BTW, his Twitter bio notes that he’s “Recently heartbroken,” as well as “Seasoned human male. Expert storm chaser, aurora photographer, drone flyer and on-air FM radio DJ.” Make of that what you will.

Body-on-a-chip (10 organs)

Also known as human-on-a-chip, the 10-organ body-on-a-chip was being discussed at the 9th World Congress on Alternatives to Animal Testing in the Life Sciences in 2014 in Prague, Czech Republic (see this July 1, 2015 posting for more). At the time, scientists were predicting they would achieve their goal of 10 organs-on-a-chip in 2017 (the best then was four organs). Only a few months past that deadline, scientists from the Massachusetts Institute of Technology (MIT) seem to have announced a ‘10 organ chip’ in a March 14, 2018 news item on ScienceDaily,

MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body.

Such a system could reveal, for example, whether a drug that is intended to treat one organ will have adverse effects on another.

A March 14, 2018 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

“Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and one of the senior authors of the study. “With our chip, you can distribute a drug and then look for the effects on other tissues, and measure the exposure and how it is metabolized.”

These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because they are designed to interact with the human immune system.

David Trumper, an MIT professor of mechanical engineering, and Murat Cirit, a research scientist in the Department of Biological Engineering, are also senior authors of the paper, which appears in the journal Scientific Reports. The paper’s lead authors are former MIT postdocs Collin Edington and Wen Li Kelly Chen.

Modeling organs

When developing a new drug, researchers identify drug targets based on what they know about the biology of the disease, and then create compounds that affect those targets. Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects, Griffith says. Furthermore, drugs that work in animals often fail in human trials.

“Animals do not represent people in all the facets that you need to develop drugs and understand disease,” Griffith says. “That is becoming more and more apparent as we look across all kinds of drugs.”

Complications can also arise due to variability among individual patients, including their genetic background, environmental influences, lifestyles, and other drugs they may be taking. “A lot of the time you don’t see problems with a drug, particularly something that might be widely prescribed, until it goes on the market,” Griffith says.

As part of a project spearheaded by the Defense Advanced Research Projects Agency (DARPA), Griffith and her colleagues decided to pursue a technology that they call a “physiome on a chip,” which they believe could offer a way to model potential drug effects more accurately and rapidly. To achieve this, the researchers needed new equipment — a platform that would allow tissues to grow and interact with each other — as well as engineered tissue that would accurately mimic the functions of human organs.

Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. Furthermore, most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require external pumps.

The MIT team decided to create an open system, which essentially removes the lid and makes it easier to manipulate the system and remove samples for analysis. Their system, adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations, also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger engineered tissues, for example tumors within an organ, to be evaluated.

Complex interactions

The researchers created several versions of their chip, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells. These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These so-called “primary cells” are more difficult to work with but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down. In a related publication, the researchers modeled how drugs can cause unexpected stress on the liver by making the gastrointestinal tract “leaky,” allowing bacteria to enter the bloodstream and produce inflammation in the liver.

Kevin Healy, a professor of bioengineering and materials science and engineering at the University of California at Berkeley, says that this kind of system holds great potential for accurate prediction of complex adverse drug reactions.

“While microphysiological systems (MPS) featuring single organs can be of great use for both pharmaceutical testing and basic organ-level studies, the huge potential of MPS technology is revealed by connecting multiple organ chips in an integrated system for in vitro pharmacology. This study beautifully illustrates that multi-MPS “physiome-on-a-chip” approaches, which combine the genetic background of human cells with physiologically relevant tissue-to-media volumes, allow accurate prediction of drug pharmacokinetics and drug absorption, distribution, metabolism, and excretion,” says Healy, who was not involved in the research.

Griffith believes that the most immediate applications for this technology involve modeling two to four organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease.

Other applications include modeling tumors that metastasize to other parts of the body, she says.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

Caption: A microfluidic platform that connects engineered tissues from up to 10 organs, allowing researchers to replicate human organ interactions for weeks at a time. Credit: Felice Frankel
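The ‘deliver a drug to the gut, then watch it move and get metabolized’ workflow described above is, at heart, compartment bookkeeping. Here is a deliberately tiny, illustrative Python sketch of a three-compartment model (gut, circulation, liver); it is my toy, not the MIT team’s model, and every rate constant in it is invented,

# Toy pharmacokinetic bookkeeping: an oral dose moves gut -> blood -> liver,
# where it is metabolized. All rate constants are invented for illustration.
K_ABSORB = 0.03   # per-minute transfer, gut to circulation (assumed)
K_UPTAKE = 0.02   # per-minute transfer, circulation to liver (assumed)
K_METAB = 0.01    # per-minute metabolism within the liver (assumed)

def simulate(minutes: int, dose: float = 100.0):
    """Euler-step the compartment amounts, one minute at a time."""
    gut, blood, liver, metabolized = dose, 0.0, 0.0, 0.0
    history = []
    for t in range(minutes):
        absorbed = K_ABSORB * gut
        taken_up = K_UPTAKE * blood
        cleared = K_METAB * liver
        gut -= absorbed
        blood += absorbed - taken_up
        liver += taken_up - cleared
        metabolized += cleared
        history.append((t + 1, gut, blood, liver, metabolized))
    return history

for t, g, b, l, m in simulate(480)[119::120]:   # report every two hours
    print(f"t={t:3d} min gut={g:5.1f} blood={b:5.1f} liver={l:5.1f} metabolized={m:5.1f}")

Running it shows the dose draining out of the ‘gut’, passing through the ‘blood’ and piling up as metabolized product over eight hours; the point of the chip is that researchers can measure that same distribution directly in living tissue instead of assuming rate constants.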

Here’s a link to and a citation for the paper,

Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies by Collin D. Edington, Wen Li Kelly Chen, Emily Geishecker, Timothy Kassis, Luis R. Soenksen, Brij M. Bhushan, Duncan Freake, Jared Kirschner, Christian Maass, Nikolaos Tsamandouras, Jorge Valdez, Christi D. Cook, Tom Parent, Stephen Snyder, Jiajie Yu, Emily Suter, Michael Shockley, Jason Velazquez, Jeremy J. Velazquez, Linda Stockdale, Julia P. Papps, Iris Lee, Nicholas Vann, Mario Gamboa, Matthew E. LaBarge, Zhe Zhong, Xin Wang, Laurie A. Boyer, Douglas A. Lauffenburger, Rebecca L. Carrier, Catherine Communal, Steven R. Tannenbaum, Cynthia L. Stokes, David J. Hughes, Gaurav Rohatgi, David L. Trumper, Murat Cirit, Linda G. Griffith. Scientific Reports, 2018; 8 (1) DOI: 10.1038/s41598-018-22749-0

This paper, which describes testing for four-, seven-, and ten-organs-on-a-chip, is open access. From the paper’s Discussion,

In summary, we have demonstrated a generalizable approach to linking MPSs [microphysiological systems] within a fluidic platform to create a physiome-on-a-chip approach capable of generating complex molecular distribution profiles for advanced drug discovery applications. This adaptable, reusable system has unique and complementary advantages to existing microfluidic and PDMS-based approaches, especially for applications involving high logD substances (drugs and hormones), those requiring precise and flexible control over inter-MPS flow partitioning and drug distribution, and those requiring long-term (weeks) culture with reliable fluidic and sampling operation. We anticipate this platform can be applied to a wide range of problems in disease modeling and pre-clinical drug development, especially for tractable lower-order (2–4) interactions.

Congratulations to the researchers!

‘Lilliputian’ skyscraper: white graphene for hydrogen storage

This story comes from Rice University (Texas, US). From a March 12, 2018 news item on Nanowerk,

Rice University engineers have zeroed in on the optimal architecture for storing hydrogen in “white graphene” nanomaterials — a design like a Lilliputian skyscraper with “floors” of boron nitride sitting one atop another and held precisely 5.2 angstroms apart by boron nitride pillars.

Caption: Thousands of hours of calculations on Rice University’s two fastest supercomputers found that the optimal architecture for packing hydrogen into “white graphene” involves making skyscraper-like frameworks of vertical columns and one-dimensional floors that are about 5.2 angstroms apart. In this illustration, hydrogen molecules (white) sit between sheet-like floors of graphene (gray) that are supported by boron-nitride pillars (pink and blue). Researchers found that identical structures made wholly of boron-nitride had unprecedented capacity for storing readily available hydrogen. Credit: Lei Tao/Rice University

A March 12, 2018 Rice University news release (also on EurekAlert), which originated the news item, goes into extensive detail about the work,

“The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed,”  [emphasis mine] said the study’s lead author, Rouzbeh Shahsavari, assistant professor of civil and environmental engineering at Rice.

Hydrogen is the lightest and most abundant element in the universe, and its energy-to-mass ratio — the amount of available energy per pound of raw material, for example — far exceeds that of fossil fuels. It’s also the cleanest way to generate electricity: The only byproduct is water. A 2017 report by market analysts at BCC Research found that global demand for hydrogen storage materials and technologies will likely reach $5.4 billion annually by 2021.

Hydrogen’s primary drawbacks relate to portability, storage and safety. While large volumes can be stored under high pressure in underground salt domes and specially designed tanks, small-scale portable tanks — the equivalent of an automobile gas tank — have so far eluded engineers.

Following months of calculations on two of Rice’s fastest supercomputers, Shahsavari and Rice graduate student Shuo Zhao found the optimal architecture for storing hydrogen in boron nitride. One form of the material, hexagonal boron nitride (hBN), consists of atom-thick sheets of boron and nitrogen and is sometimes called white graphene because the atoms are spaced exactly like carbon atoms in flat sheets of graphene.

Previous work in Shahsavari’s Multiscale Materials Lab found that hybrid materials of graphene and boron nitride could hold enough hydrogen to meet the Department of Energy’s storage targets for light-duty fuel cell vehicles.

“The choice of material is important,” he said. “Boron nitride has been shown to be better in terms of hydrogen absorption than pure graphene, carbon nanotubes or hybrids of graphene and boron nitride.

“But the spacing and arrangement of hBN sheets and pillars is also critical,” he said. “So we decided to perform an exhaustive search of all the possible geometries of hBN to see which worked best. We also expanded the calculations to include various temperatures, pressures and dopants, trace elements that can be added to the boron nitride to enhance its hydrogen storage capacity.”

Zhao and Shahsavari set up numerous “ab initio” tests, computer simulations that used first principles of physics. Shahsavari said the approach was computationally intense but worth the extra effort because it offered the most precision.

“We conducted nearly 4,000 ab initio calculations to try and find that sweet spot where the material and geometry go hand in hand and really work together to optimize hydrogen storage,” he said.

Unlike materials that store hydrogen through chemical bonding, Shahsavari said boron nitride is a sorbent that holds hydrogen through physical bonds, which are weaker than chemical bonds. That’s an advantage when it comes to getting hydrogen out of storage because sorbent materials tend to discharge more easily than their chemical cousins, Shahsavari said.

He said the choice of boron nitride sheets or tubes and the corresponding spacing between them in the superstructure were the key to maximizing capacity.

“Without pillars, the sheets sit naturally one atop the other about 3 angstroms apart, and very few hydrogen atoms can penetrate that space,” he said. “When the distance grew to 6 angstroms or more, the capacity also fell off. At 5.2 angstroms, there is a cooperative attraction from both the ceiling and floor, and the hydrogen tends to clump in the middle. Conversely, models made of purely BN tubes — not sheets — had less storage capacity.”

Shahsavari said models showed that the pure hBN tube-sheet structures could hold 8 weight percent of hydrogen. (Weight percent is a measure of concentration, similar to parts per million.) Physical experiments are needed to verify that capacity, but the DOE’s ultimate target is 7.5 weight percent, and Shahsavari’s models suggest even more hydrogen can be stored in his structure if trace amounts of lithium are added to the hBN.

Finally, Shahsavari said, irregularities in the flat, floor-like sheets of the structure could also prove useful for engineers.

“Wrinkles form naturally in the sheets of pillared boron nitride because of the nature of the junctions between the columns and floors,” he said. “In fact, this could also be advantageous because the wrinkles can provide toughness. If the material is placed under load or impact, that buckled shape can unbuckle easily without breaking. This could add to the material’s safety, which is a big concern in hydrogen storage devices.

“Furthermore, the high thermal conductivity and flexibility of BN may provide additional opportunities to control the adsorption and release kinetics on-demand,” Shahsavari said. “For example, it may be possible to control release kinetics by applying an external voltage, heat or an electric field.”

I may be wrong but this “The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed, …”  sounds like a supercapacitor. One other comment, this research appears to be ‘in silico’, i.e., all the testing has been done as computer simulations and the proposed materials themselves have yet to be tested.
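For anyone wondering what those weight percent figures amount to, here is the arithmetic (mine, not the paper’s): gravimetric capacity is simply the hydrogen’s share of the total mass of the loaded material, and hydrogen’s heating value of roughly 120 megajoules per kilogram is a standard approximate figure,

H2_ENERGY_MJ_PER_KG = 120.0   # approximate lower heating value of hydrogen

def hydrogen_grams_per_kg(weight_percent: float) -> float:
    """Grams of H2 contained in 1 kg of hydrogen-plus-sorbent at a given wt%."""
    return 1000.0 * weight_percent / 100.0

for wt in (7.5, 8.0):   # DOE ultimate target vs. the modeled hBN structure
    grams = hydrogen_grams_per_kg(wt)
    energy = grams / 1000.0 * H2_ENERGY_MJ_PER_KG
    print(f"{wt} wt% -> {grams:.0f} g of H2 per kg of material (~{energy:.1f} MJ/kg)")

So the modeled structure, at 8 weight percent, would carry about 80 grams of hydrogen, or roughly 9.6 megajoules of chemical energy, per kilogram of loaded material.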

Here’s a link to and a citation for the paper,

Merger of Energetic Affinity and Optimal Geometry Provides New Class of Boron Nitride Based Sorbents with Unprecedented Hydrogen Storage Capacity by Rouzbeh Shahsavari and Shuo Zhao. Small Vol. 14 Issue 10 DOI: 10.1002/smll.201702863 Version of Record online: 8 MAR 2018

© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Flat gallium (gallenene) and nanoelectronics

Another day, another 2D material. A March 9, 2018 news item on ScienceDaily announced the latest thin material from Rice University,

Scientists at Rice University and the Indian Institute of Science, Bangalore, have discovered a method to make atomically flat gallium that shows promise for nanoscale electronics.

The Rice lab of materials scientist Pulickel Ajayan and colleagues in India created two-dimensional gallenene, a thin film of conductive material that is to gallium what graphene is to carbon.

Extracted into a two-dimensional form, the novel material appears to have an affinity for binding with semiconductors like silicon and could make an efficient metal contact in two-dimensional electronic devices, the researchers said.

A March 9, 2018 Rice University news release (also on EurekAlert), which originated the news item, describes the process for creating gallenene,

Gallium is a metal with a low melting point; unlike graphene and many other 2-D structures, it cannot yet be grown with vapor phase deposition methods. Moreover, gallium also has a tendency to oxidize quickly. And while early samples of graphene were removed from graphite with adhesive tape, the bonds between gallium layers are too strong for such a simple approach.

So the Rice team led by co-authors Vidya Kochat, a former postdoctoral researcher at Rice, and Atanu Samanta, a student at the Indian Institute of Science, used heat instead of force.

Rather than a bottom-up approach, the researchers worked their way down from bulk gallium by heating it to 29.7 degrees Celsius (about 85 degrees Fahrenheit), just below the element’s melting point. That was enough to drip gallium onto a glass slide. As a drop cooled just a bit, the researchers pressed a flat piece of silicon dioxide on top to lift just a few flat layers of gallenene.

They successfully exfoliated gallenene onto other substrates, including gallium nitride, gallium arsenide, silicon and nickel. That allowed them to confirm that particular gallenene-substrate combinations have different electronic properties and to suggest that these properties can be tuned for applications.

“The current work utilizes the weak interfaces of solids and liquids to separate thin 2-D sheets of gallium,” said Chandra Sekhar Tiwary, principal investigator on the project he completed at Rice before becoming an assistant professor at the Indian Institute of Technology in Gandhinagar, India. “The same method can be explored for other metals and compounds with low melting points.”

Gallenene’s plasmonic and other properties are being investigated, according to Ajayan. “Near 2-D metals are difficult to extract, since these are mostly high-strength, nonlayered structures, so gallenene is an exception that could bridge the need for metals in the 2-D world,” he said.

Co-authors of the paper are graduate student Yuan Zhang and Associate Research Professor Robert Vajtai of Rice; Anthony Stender, a former Rice postdoctoral researcher and now an assistant professor at Ohio University; Sanjit Bhowmick, Praveena Manimunda and Syed Asif of Bruker Nano Surfaces, Minneapolis; and Rice alumnus Abhishek Singh of the Indian Institute of Science. Ajayan is chair of Rice’s Department of Materials Science and NanoEngineering, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of chemistry.

The Air Force Office of Scientific Research sponsored the research, with additional support from the Indo-US Science and Technology Forum, the government of India and a Rice Center for Quantum Materials/Smalley-Curl Postdoctoral Fellowship in Quantum Materials.

Here’s a link to and a citation for the paper,

Atomically thin gallium layers from solid-melt exfoliation by Vidya Kochat, Atanu Samanta, Yuan Zhang, Sanjit Bhowmick, Praveena Manimunda, Syed Asif S. Asif, Anthony S. Stender, Robert Vajtai, Abhishek K. Singh, Chandra S. Tiwary, and Pulickel M. Ajayan. Science Advances 09 Mar 2018: Vol. 4, no. 3, e1701373 DOI: 10.1126/sciadv.1701373

This paper appears to be open access.

Symbiosis (science education initiative) in British Columbia (Canada)

Is it STEM (science, technology, engineering, and mathematics) or is it STEAM (science, technology, engineering, arts, and mathematics)?

It’s STEAM, at least as far as Dr. Scott Sampson is concerned. In his July 6, 2018 Creative Mornings talk in Vancouver (British Columbia, Canada) he mentioned a major science education/outreach initiative taking place in the province of British Columbia (BC) but intended for all of Canada: Symbiosis. There was some momentary confusion as Sampson’s slide deck identified it as a STEM initiative. Sampson verbally added the ‘A’ for arts and henceforth described it as a STEAM initiative. (Part of the difficulty is that many institutions have used the term STEM and only recently come to the realization they might want to add ‘art’, leading to confusion in Canada and the US, if nowhere else, as old materials require updating. Actually, I vote for adding the humanities too so that we can have SHTEAM.)

You’ll notice, should you visit the Symbiosis website, that the STEM/STEAM confusion extends further than Sampson’s slide deck.

Sampson, “a dinosaur paleontologist, science communicator, and passionate advocate for reimagining cities as places where people and nature thrive,” serves (since 2016) as president and CEO of Science World British Columbia or, as it’s known on its website, Science World at TELUS World of Science. Unwieldy, eh?

The STEM/STEAM announcement

None of us in the Creative Mornings crowd had heard of Symbiosis, or of Scott Sampson for that matter (apparently, he’s a huge star among the preschool set due to his work on the PBS [US Public Broadcasting Service] children’s show ‘Dinosaur Train’). Regardless, it was good to hear of this effort, although my attempts to learn more about it have been somewhat frustrating.

First, here’s what I found: a May 25, 2017 Science World media release (PDF) about Symbiosis,

Science World Introduces Symbiosis
A First-of Its-Kind [sic] Learning Ecosystem for Canada

We live in a time of unprecedented change. High-tech innovations are rapidly transforming 21st century societies and the Canadian marketplace is increasingly dominated by novel, knowledge-based jobs requiring high levels of literacy in science, technology, engineering and math (STEM). Failing to prepare the next generation to be STEM literate threatens the health of our youth, the economy and the places we live. STEM literacy needs to be integrated into the broader context of what it means to be a 21st century citizen. Also important is inclusion of an extra letter, “A,” for art and design, resulting in STEAM. The idea behind Symbiosis is to make STEAM learning accessible across Canada.

Every major Canadian city hosts dozens to hundreds of organizations that engage children and youth in STEAM learning. Yet, for the most part, these organizations operate in isolation. The result is that a huge proportion of Canadian youth, particularly in First Nations and other underserved communities, are not receiving quality STEAM learning opportunities.

In order to address this pressing need, Science World British Columbia (scienceworld.ca) is spearheading the creation of Symbiosis, a deeply collaborative STEAM learning ecosystem. Driven by a diverse network of cross-sector partners, Symbiosis will become a vibrant model for scaling the kinds of learning and careers needed in a knowledge-based economy.

Today [May 25, 2017], Science World is proud to announce that Symbiosis has been selected by STEM Learning Ecosystems, a US-based organization, to formally join a growing movement. In just two years, the STEM Learning Ecosystems initiative has become a thriving network of hundreds of organizations and thousands of individuals, joined in regional partnerships with the objective of collaborating in new and creative ways to increase equity, quality, and STEM learning outcomes for all youth. Symbiosis will be the first member of this initiative outside the United States.

Symbiosis was selected to become part of the STEM Learning Ecosystem initiative because of a demonstrated [emphasis mine] commitment to cross-sector collaborations in schools and beyond the classroom. As STEM Ecosystems evolve, students will be able to connect what they’ve learned, in and out of school, with real-world, community-based opportunities.

I wonder how Symbiosis demonstrated their commitment. Their website doesn’t seem to have existed prior to 2018 and there’s no information there about any prior activities.

A very Canadian sigh

I checked the STEM Learning Ecosystems website for its Press Room and found a couple of illuminating press releases. Here’s how the addition of Symbiosis was described in the May 25, 2017 press release,

The 17 incoming ecosystem communities were selected because they demonstrate a commitment to cross-sector collaborations in schools and beyond the classroom—in afterschool and summer programs, at home, with local business and industry partners, and in science centers, libraries and other places both virtual and physical. As STEM Ecosystems evolve, students will be able to connect what is learned in and out of school with real-world opportunities.

“It makes complete sense to collaborate with like-minded regions and organizations,” said Matthew Felan of the Great Lakes Bay Regional Alliance STEM Initiative, one of the founding Ecosystems. “STEM Ecosystems provides technical assistance and infrastructure support so that we are able to tailor quality STEM learning opportunities to the specific needs of our region in Michigan while leveraging the experience of similar alliances across the nation.”

The following ecosystem communities were selected to become part of this [US] national STEM Learning Ecosystem:

  • Arizona: Flagstaff STEM Learning Ecosystem
  • California: Region 5 STEAM in Expanded Learning Ecosystem (San Benito, Santa Clara, Santa Cruz, Monterey Counties)
  • Louisiana: Baton Rouge STEM Learning Network
  • Massachusetts: Cape Cod Regional STEM Network
  • Michigan: Michigan STEM Partnership / Southeast Michigan STEM Alliance
  • Missouri: St. Louis Regional STEM Learning Ecosystem
  • New Jersey: Delran STEM Ecosystem Alliance (Burlington County)
  • New Jersey: Newark STEAM Coalition
  • New York: WNY STEM (Western New York State)
  • New York: North Country STEM Network (seven counties of Northern New York State)
  • Ohio: Upper Ohio Valley STEM Cooperative
  • Ohio: STEM Works East Central Ohio
  • Oklahoma: Mayes County STEM Alliance
  • Pennsylvania: Bucks, Chester, Delaware, Montgomery STEM Learning Ecosystem
  • Washington: The Washington STEM Network
  • Wisconsin: Greater Green Bay STEM Network
  • Canada: Symbiosis, British Columbia, Canada

Yes, somehow a Canadian initiative becomes another US regional community in their national ecosystem.

Then, they made everything better a year later in a May 29, 2018 press release,

New STEM Learning Ecosystems in the United States are:

  • California: East Bay STEM Network
  • Georgia: Atlanta STEAM Learning Ecosystem
  • Hawaii: Hawai’iloa ecosySTEM Cabinet
  • Illinois: South Suburban STEAM Network
  • Kentucky: Southeastern Kentucky STEM Ecosystem
  • Massachusetts: MetroWest STEM Education Network
  • New York: Greater Southern Tier STEM Learning Network
  • North Carolina: STEM SENC (Southeastern North Carolina)
  • North Dakota: North Dakota STEM Ecosystem
  • Texas: SA/Bexar STEM/STEAM Ecosystem

The growing global Community of Practice has added: [emphasis mine]

  • Kenya: Kenya National STEM Learning Ecosystem
  • México: Alianza Para Promover la Educación en STEM (APP STEM)

Are Americans still having fantasies about ‘manifest destiny’? For those unfamiliar with the ‘doctrine’,

In the 19th century, manifest destiny was a widely held belief in the United States that its settlers were destined to expand across North America.  …

They seem to have given up on Mexico but the dream of acquiring Canadian territory rears its head from time to time. Specifically, it happens when Quebec holds a referendum (the last one was in 1995) on whether or not it wishes to remain part of the Canadian confederation. After the last referendum, I’d hoped that was the end of ‘manifest destiny’ but it seems these 21st Century-oriented STEM Learning Ecosystems people have yet to give up a 19th century fantasy. (sigh)

What is Symbiosis?

For anyone interested in the definition of the word, from Wordnik,

symbiosis

Definitions

from The American Heritage® Dictionary of the English Language, 4th Edition

  • n. Biology A close, prolonged association between two or more different organisms of different species that may, but does not necessarily, benefit each member.
  • n. A relationship of mutual benefit or dependence.

from Wiktionary, Creative Commons Attribution/Share-Alike License

  • n. A relationship of mutual benefit.
  • n. A close, prolonged association between two or more organisms of different species, regardless of benefit to the members.
  • n. The state of people living together in community.

As for this BC-based organization, Symbiosis, which they hope will influence Canadian STEAM efforts and learning as a whole, I don’t have much. From the Symbiosis About Us webpage,

A learning ecosystem is an interconnected web of learning opportunities that encompasses formal education to community settings such as out-of-school care, summer programs, science centres and museums, and experiences at home.

​In May 2017, Symbiosis was selected by STEM Learning Ecosystems, a US-based organization, to formally join a growing movement. As the first member of this initiative outside the United States, Symbiosis has demonstrated a commitment to cross-sector collaborations in schools and beyond the classroom. As Symbiosis evolves, students will be able to connect what they’ve learned, in and out of school, with real-world, community-based opportunities.

We live in a time of unprecedented change. High-tech innovations are rapidly transforming 21st century societies and the Canadian marketplace is increasingly dominated by novel, knowledge-based jobs requiring high levels of literacy in science, technology, engineering and math (STEM). Failing to prepare the next generation to be STEM literate threatens the health of our youth, the economy, and the places we live. STEM literacy needs to be integrated into the broader context of what it means to be a 21st century citizen. Also important is inclusion of an extra letter, “A,” for art and design, resulting in STEAM.

In order to address this pressing need, Science World British Columbia is spearheading the creation of Symbiosis, a deeply collaborative STEAM learning ecosystem. Driven by a diverse network of cross-sector partners, Symbiosis will become a vibrant model for scaling the kinds of learning and careers needed in a knowledge-based economy.

Symbiosis:

  • Acknowledges the holistic connections among arts, science and nature
  • ​Is inclusive and equitable
  • Is learner-centered​
  • Fosters curiosity and life-long learning ​​
  • Is relevant—should reflect the community
  • Honours diverse perspectives, including Indigenous worldviews
  • Is partnerships, collaboration, and mentorship
  • ​Is a sustainable, thriving community, with resilience and flexibility
  • Is research-based, data-driven
  • Shares stories of success—stories of people/role models using STEAM and critical thinking to make a difference
  • Provides a variety of access points that are available to all learners

I was looking for more concrete information such as:

  • what is your budget?
  • which organizations are partners?
  • where do you get your funding?
  • what have you done so far?

I did get an answer to my last question by going to the Symbiosis news webpage where I found these,

We’re hiring!

 7/3/2018 [Their deadline is July 13, 2018]

STAN conference

3/20/2018

Symbiosis on CKPG

3/12/2018

Design Studio #2 in March

2/15/2018

BC Science Outreach Workshop

2/7/2018

Make of that what you will. Also, there is a 2018 copyright notice (at the bottom of the webpages) but no copyright owner is listed.

There is some Symbiosis information

A magazine known as BC Business (!) offers some details in a May 11, 2018 opinion piece (Note: Links have been removed),

… Increasingly, the Canadian marketplace is dominated by novel, knowledge-based jobs requiring high levels of literacy in STEM (science, technology, engineering and math). Here in B.C., the tech sector now employs over 100,000 people, about 5 percent of the province’s total workforce. As the knowledge economy grows, these numbers will rise dramatically.

Yet technology-driven businesses are already struggling to fill many roles that require literacy in STEM. …

Today, STEM education in North America and elsewhere is struggling. One study found that 60 percent of students who enter high school interested in STEM fields change their minds by graduation. Lacking mentoring, students, especially girls, tend to lose interest in STEM. [emphasis mine] Today, only 22 percent of Canadian STEM jobs are held by women. Failing to prepare the next generation to be STEM-literate threatens the prospects of our youth, our economy and the places we live.

More and more, education is no longer confined to classrooms. … To kickstart this future, a “STEM learning ecosystem” movement has emerged in the United States, grounded in deeply collaborative, cross-sector networks of learning opportunities.

Symbiosis will concentrate on a trio of impacts:

1) Dramatically increasing the number of qualified STEM mentors in B.C.—from teachers and scientists to technologists and entrepreneurs;

2) Connecting this diversity of mentors with children and youth through networked opportunities, from classroom visits and on-site shadowing to volunteering and internships; and

3) Creating a digital hub that interweaves communities, hosts a library of resources and extends learning through virtual offerings. [emphases mine]

Science World British Columbia is spearheading Symbiosis, and organizations from many sectors have expressed strong interest in collaborating—among them K-12 education, higher education, industry, government and non-profits. Several of these organizations are founding members of the BC Science Charter, which formed in 2013.

Symbiosis will launch in fall of 2018 with two pilot communities: East Vancouver and Prince George. …

As for why students tend to lose interest in STEM, there’s a rather interesting longitudinal study taking place in the UK which attempts to answer at least some of that question. I first wrote about the ASPIRES study in a January 31, 2012 posting: Science attitude kicks in by 10 years old. This was based on preliminary data and it seemed to be confirmed by an unrelated US study of high school students also mentioned in that posting (scroll down about 40% of the way).

In short, both studies suggested that children are quite open to science but when it comes time to think about careers, they tend to ‘aspire’ to what they see amongst family and friends. I don’t see that kind of thinking reflected in any of the information I’ve been able to find about Symbiosis, and it was not present in Sampson’s Creative Mornings talk.

However, I noted during Sampson’s talk that he mentioned his father, a professor of psychology at the University of British Columbia and how he had based his career expectations on his father’s career. (Sampson is from Vancouver originally.) Sampson, like his father, was at one point a professor of ‘science’ at a university.

Perhaps one day someone from Symbiosis will look into the ASPIRES studies or even read my blog 🙂

You can find the latest about what is now called the ASPIRES 2 study here. (I will try to post my own update to the ASPIRES projects in the near future).

Best hopes

I am happy to see Symbiosis arrive on the scene and I wish all the best for the initiative. I am less concerned than the BC Business folks about supplying employers with the kind of employees they want to hire and hopeful that Symbiosis will attract not just the students, educators, mentors, and scientists to whom they are appealing but will cast a wider net to include philosophers, car mechanics, hairdressers, poets, visual artists, farmers, chefs, and others in a ‘pursuit of wonder’.

Aside: I was introduced to the phrase ‘pursuit of wonder’ by a friend who sent me a link to José Teodoro’s May 29, 2018 interview with Canadian filmmaker Peter Mettler for the Brick. Mettler discusses his film about the Northern Lights and the technical challenges he met along the way.