
Interstellar buckyball mystery solved

Caption: An artist’s conception showing spherical carbon molecules known as buckyballs coming out from a planetary nebula — material shed by a dying star. Researchers at the University of Arizona have now created these molecules under laboratory conditions thought to mimic those in their ‘natural’ habitat in space. Credit: NASA/JPL-Caltech

A ‘buckyball’, for anyone who doesn’t know, is a molecule made up of carbon atoms. Said to resemble soccer balls or geodesic domes, these molecules are also known as C60 or Buckminsterfullerene, as Rachel Abraham notes in her November 13, 2019 University of Arizona news release (also on EurekAlert),

Scientists have long been puzzled by the existence of so-called “buckyballs” – complex carbon molecules with a soccer-ball-like structure – throughout interstellar space. Now, a team of researchers from the University of Arizona has proposed a mechanism for their formation in a study published in the Astrophysical Journal Letters.

Carbon 60, or C60 for short, whose official name is Buckminsterfullerene, comes in spherical molecules consisting of 60 carbon atoms organized in five-membered and six-membered rings. The name “buckyball” derives from their resemblance to the architectural work of Richard Buckminster Fuller [better known as Buckminster Fuller], who designed many dome structures that look similar to C60. Their formation was thought to only be possible in lab settings until their detection in space challenged this assumption.

For decades, people thought interstellar space was sprinkled with lightweight molecules only: mostly single atoms, two-atom molecules and the occasional nine or 10-atom molecules. This was until massive C60 and C70 molecules were detected a few years ago.

Researchers were also surprised to find that they were composed of pure carbon. In the lab, C60 is made by blasting together pure carbon sources, such as graphite. In space, C60 was detected in planetary nebulae, which are the debris of dying stars. This environment has about 10,000 hydrogen molecules for every carbon molecule.

“Any hydrogen should destroy fullerene synthesis,” said astrobiology and chemistry doctoral student Jacob Bernal, lead author of the paper. “If you have a box of balls, and for every 10,000 hydrogen balls you have one carbon, and you keep shaking them, how likely is it that you get 60 carbons to stick together? It’s very unlikely.”

Bernal and his co-authors began investigating the C60 mechanism after realizing that the transmission electron microscope, or TEM, housed at the Kuiper Materials Imaging and Characterization Facility at UArizona, was able to simulate the planetary nebula environment fairly well.

The TEM, which is funded by the National Science Foundation and NASA, has a serial number of “1” because it is the first of its kind in the world with its exact configuration. Its 200,000-volt electron beam can probe matter down to 78 picometers – scales too small for the human brain to comprehend – in order to see individual atoms. It operates under a vacuum with extremely low pressures. This pressure, or lack thereof, in the TEM is very close to the pressure in circumstellar environments.

“It’s not that we necessarily tailored the instrument to have these specific kinds of pressures,” said Tom Zega, associate professor in the UArizona Lunar and Planetary Lab and study co-author. “These instruments operate at those kinds of very low pressures not because we want them to be like stars, but because molecules of the atmosphere get in the way when you’re trying to do high-resolution imaging with electron microscopes.”

The team partnered with the U.S. Department of Energy’s Argonne National Lab, near Chicago, which has a TEM capable of studying radiation responses of materials. They placed silicon carbide, a common form of dust made in stars, in the low-pressure environment of the TEM, subjected it to temperatures up to 1,830 degrees Fahrenheit [about 1,000 degrees Celsius] and irradiated it with high-energy xenon ions.

Then, it was brought back to Tucson for researchers to utilize the higher resolution and better analytical capabilities of the UArizona TEM. They knew their hypothesis would be validated if they observed the silicon shedding and exposing pure carbon.

“Sure enough, the silicon came off, and you were left with layers of carbon in six-membered ring sets called graphite,” said co-author Lucy Ziurys, Regents Professor of astronomy, chemistry and biochemistry. “And then when the grains had an uneven surface, five-membered and six-membered rings formed and made spherical structures matching the diameter of C60. So, we think we’re seeing C60.”

This work suggests that C60 is derived from the silicon carbide dust made by dying stars, which is then hit by high temperatures, shockwaves and high-energy particles, leaching silicon from the surface and leaving carbon behind. These big molecules are dispersed because dying stars eject their material into the interstellar medium – the spaces in between stars – thus accounting for their presence outside of planetary nebulae. Buckyballs are very stable to radiation, allowing them to survive for billions of years if shielded from the harsh environment of space.

“The conditions in the universe where we would expect complex things to be destroyed are actually the conditions that create them,” Bernal said, adding that the implications of the findings are endless.

“If this mechanism is forming C60, it’s probably forming all kinds of carbon nanostructures,” Ziurys said. “And if you read the chemical literature, these are all thought to be synthetic materials only made in the lab, and yet, interstellar space seems to be making them naturally.”

If the findings are any sign, it appears that there is more the universe has to tell us about how chemistry truly works.
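Before the links, a quick aside on those five- and six-membered rings, because the mix isn’t arbitrary. Each of C60’s 60 carbon atoms bonds to exactly three neighbours, giving 60 × 3 ÷ 2 = 90 bonds (the edges of the cage). Euler’s polyhedron formula, V − E + F = 2, then gives 60 − 90 + F = 2, so the cage has 32 faces. If p of those faces are pentagons and h are hexagons, then p + h = 32 and 5p + 6h = 2 × 90 = 180, which solves to p = 12 and h = 20. In fact, any closed cage built solely from pentagons and hexagons must contain exactly 12 pentagons; C60 happens to be the smallest one in which no two pentagons share an edge, which is part of why it is so stable.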

I have two links and citations. The first is for the 2019 paper being described here and the second is for the original 1985 paper about C60.

Formation of Interstellar C60 from Silicon Carbide Circumstellar Grains by J. J. Bernal, P. Haenecour, J. Howe, T. J. Zega, S. Amari, and L. M. Ziurys. The Astrophysical Journal Letters, Volume 883, Number 2. Published 2019 October 1. © 2019 The American Astronomical Society. All rights reserved.

This paper is behind a paywall.

C60: Buckminsterfullerene by H. W. Kroto, J. R. Heath, S. C. O’Brien, R. F. Curl & R. E. Smalley. Nature, volume 318, pages 162–163 (1985). doi:10.1038/318162a0

This paper is open access.

A Café Scientifique Vancouver (Canada) May 28, 2019 talk ‘Getting to the heart of Mars with InSight’ and an update on Baba Brinkman (former Vancouverite) and his science raps

It’s been a while since I’ve received any notices about upcoming talks from the local Café Scientifique crowd but on May 22, 2019 there was this announcement in an email,

Dear Café Scientifiquers,

Our next café will happen on TUESDAY, MAY 28TH [2019] at 7:30PM in the back room at YAGGER’S DOWNTOWN (433 W Pender). Our speaker for the evening will be DR. CATHERINE JOHNSON from the Department of Earth, Ocean and Atmospheric Sciences at UBC [University of British Columbia].

GETTING TO THE HEART OF MARS WITH INSIGHT

Catherine Johnson is a professor of geophysics in the Dept of Earth, Ocean and Atmospheric Sciences at UBC Vancouver [campus], and a senior scientist at the Planetary Science Institute, Tucson.  She is a Co-Investigator on the InSight mission to Mars, the OSIRIS-REx mission to asteroid Bennu and was previously a Participating Scientist on the MESSENGER mission to Mercury.

We hope to see you there!

I did some digging and found two articles about Johnson, the InSight mission, and Mars. The first one is an October 21, 2012 article by James Keller on the Huffington Post Canada website,

As NASA’s Curiosity rover beams back photos of the rocky surface of Mars, another group of scientists, including one from British Columbia, is preparing the next mission to uncover what’s underneath.

Prof. Catherine Johnson, of the University of British Columbia, is among the scientists whose project, named Insight, was selected by NASA this week as part of the U.S. space agency’s Discovery program, which invites proposals from within the scientific community.

Insight will send a stationary robotic lander to Mars in 2016, drilling down several metres into the surface as it uses a combination of temperature readings and seismic measurements to help scientists on this planet learn more about the Martian core.

The second one is a May 6, 2018 article (I gather it took them longer to get to Mars than they anticipated in 2012) by Ivan Semeniuk for the Globe and Mail newspaper website,

Thanks to a thick bank of predawn fog, Catherine Johnson couldn’t see the rocket when it blasted off early Saturday morning at the Vandenberg Air Force Base in California – but she could hear the roar as NASA’s InSight mission set off on its 6½-month journey to Mars.

“It was really impressive,” said Dr. Johnson, a planetary scientist at the University of British Columbia and a member of the mission’s science team. Describing the mood at the launch as a mixture of relief and joy, Dr. Johnson added that “the spacecraft is finally en route to do what we have worked toward for many years.”

But while InSight’s mission is just getting under way, it also marks the last stage in a particularly fruitful period for the U.S. space agency’s Mars program. In the past two decades, multiple, complementary spacecraft tackled different aspects of Mars science.

Unlike the Curiosity rover, which landed on Mars nearly six years ago and is in the process of climbing a mountain in the middle of an ancient crater, InSight is designed to stay in one place after it touches down Nov. 26 [2018]. Its purpose is to open a new direction in Mars exploration – one that leads straight down as the spacecraft deploys a unique set of instruments to spy on the planet’s interior.

“What we will learn … will help us understand the earliest history of rocky planets, including Earth,” Dr. Johnson said.

It has been a prolonged voyage to the red planet. In 2015, technical problems forced program managers to postpone InSight’s launch for 2½ years. Now, scientists are hoping for smooth sailing to Mars and an uneventful landing a few hundred kilometres north of Curiosity, at a site that Dr. Johnson cheerfully describes as “boring.”

Does the timing of this talk mean you’ll be getting the latest news since InSight landed on Mars roughly six months ago? One can only hope. Finally, Johnson’s UBC bio webpage is here.

Baba Brinkman brings us up-to-date

Here’s most of a May 22, 2019 newsletter update (received via email) from former Vancouverite and current rapper, playwright, and science communicator, Baba Brinkman,

… Over the past five years I have been collaborating frequently with a company in California called SpectorDance, after the artistic director Fran Spector Atkins invited me to write and perform a rap soundtrack to one of her dance productions. Well, a few weeks ago we played our biggest venue yet with our latest collaborative show, Ocean Trilogy, which is all about the impact of human activities including climate change on marine ecosystems. The show was developed in collaboration with scientists at the Monterey Bay Aquarium Research Institute, and for the first time there’s now a full video of the production online. Have you ever seen scientifically-informed eco rap music combined in live performance with ballet and modern dance? Enjoy.

Speaking of “Science is Everywhere”, about a year ago I got to perform my song “Can’t Stop” about the neurobiology of free will for a sold-out crowd at the Brooklyn Academy of Music alongside physicist Brian Greene, comedian Chuck Nice, and Neil deGrasse Tyson. The song is half scripted and half freestyle (can you tell which part is which?) They just released the video.

Over the past few months I’ve been performing Rap Guide to Evolution, Consciousness, and Climate Chaos off-Broadway 2-3 times per week, which has been a roller coaster. Some nights I have 80 people and it’s rocking, other nights I step on stage and play to 15 people and it takes effort to keep it lively. But since this is New York, occasionally when there’s only 15 people one of them will turn out to be a former Obama Administration Energy Advisor or will publish a five star review, which keeps it exciting.

Tonight I fly to the UK where I’ll be performing all next week, including the premiere of my newest show Rap Guide to Culture, with upcoming shows in Brighton, followed by off-Broadway previews in June, followed by a full run at the Edinburgh Fringe in August (plus encores of my other shows), followed by… well I can’t really see any further than August at the moment, but the next few months promise to be action-packed.

What’s Rap Guide to Culture about? Cultural evolution and the psychology of norms of course. I recently attended a conference at the National Institute for Mathematical and Biological Synthesis in Knoxville, TN where I performed a sneak preview and did a “Rap Up” of the various conference talks, summarizing the scientific content at the end of the day, check out the video.

Okay, time to get back to packing and hit the road. More to come soon, and wish me luck continuing to dominate my lonely genre.

Brinkman has been featured here many times (just use his name as the term in the blog’s search engine). While he lives in New York City these days, he does retain a connection to Vancouver in that his mother Joyce Murray is the Member of Parliament for Vancouver Quadra and, currently, the president of the Treasury Board.

Oobleck (non-Newtonian goo) and bras from Reebok

I have taken a liberty with the title for this piece: strictly speaking, the non-Newtonian goo in the bra isn’t the stuff (oobleck) made of cornstarch and water from your childhood science experiments, but it has many of the same qualities. The material in the Reebok bra, the PureMove, is called Shear Thickening Fluid; it was developed at the University of Delaware in 2005 and subsequently employed by NASA (US National Aeronautics and Space Administration) in astronaut suits, as noted in an August 6, 2018 article by Elizabeth Segran for Fast Company, who explains how it came to be used for the latest sports bra,

While the activewear industry floods the market with hundreds of different sports bras every season, research shows that most female consumers are unsatisfied with their sports bra options, and 1 in 5 women avoid exercise altogether because they don’t have a sports bra that fits them properly.

Reebok wants to make that experience a thing of the past. Today, it launches a new bra, the PureMove, that adapts to your movements, tightening up when you’re moving fast and relaxing when you’re not. …

When I visited Reebok’s Boston headquarters, Witek [Danielle Witek, Reebok designer who spearheaded the R&D making the bra possible] handed me a jar of the fluid with a stick in it. When I moved the stick quickly, it seemed to turn into a solid, and when I moved it slowly, it had the texture of honey. Witek and the scientists have incorporated this fluid into a fabric that Reebok dubs “Motion Sense Technology.” The fluid is woven into the textile, so that on the surface, it looks and feels like the synthetic material you might find in any sports bra. But what you can’t see is that the fabric adapts to the body’s shape, the velocity of the breast tissue in motion, and the type and force of movement. It stretches less with high-impact movements and then stretches more during rest and lower intensity activities.

I tested an early version of the PureMove bra a few months ago, before it had even gone into production. I did a high-intensity workout that involved doing jumping jacks and sprints, followed by a cool-down session. The best thing about the bra was that I didn’t notice it at all. I didn’t feel stifled when I was just strolling around the gym, and I didn’t feel like I was unsupported when I was running around. Ultimately, the best bras are the ones that you don’t have to think about so you can focus on getting on with your life.

Since this technology is so new, Reebok had to do a lot of testing to make sure the bra would actually do what it advertised. The company set up a breast biomechanics testing center with the help of the University of Delaware, with 54 separate motion sensors tracking and measuring various parts of a tester’s chest area. This is a far more rigorous approach than most testing facilities in the industry that typically only use between two to four sensors. Over the course of a year, the facility gathered the data required for the scientists and Reebok product designers to develop the PureMove bra.

… If it’s well-received, the logical next step would be to incorporate the Motion Sense Technology into other products, like running tights or swimsuits, since transitioning between compression and looseness is something that we want in all of our sportswear. …

According to the Reebok PureMove bra webpage, it was available from August 16, 2018,

Credit: Reebok

It’s $60 (I imagine those are US dollars).

For anyone interested in the science of non-Newtonian goo, shear thickening fluid, and NASA, there’s a November 24, 2015 article by Lydia Chain for Popular Science (Note: Links have been removed),

There’s an experiment you may have done in high school: When you mix cornstarch with water—a concoction colloquially called oobleck—and give it a stir, it acts like a liquid. But scrape it quickly or hit it hard, and it stiffens up into a solid. If you set the right pace, you can even run on top of a pool of the stuff. This phenomenon is called shear force thickening, and scientists have been trying to understand how it happens for decades.

There are two main theories, and figuring out which is right could affect the way we make things like cement, body armor, concussion preventing helmets, and even spacesuits.

The prevailing theory is that it’s all about the fluid dynamics (the nature of how fluids move) of the liquid and the particles in a solution. As the particles are pushed closer and closer together, it becomes harder to squeeze the liquid out from between them. Eventually, it’s too hard to squeeze out any more fluid and the particles lock up into hydrodynamic clusters, still separated by a thin film of fluid. They then move together, thickening the mixture and forming a solid.

The other idea is that contact forces like friction keep the particles locked together. Under this theory, when force is applied, the particles actually touch. The shearing force and friction keep them pressed together, which makes the solution more solid.

“The debate has been raging, and we’ve been wracking our brains to think of a method to conclusively go one way or the other,” says Itai Cohen, a physicist at Cornell University. He and his team recently ran a new experiment that seems to point to friction as the driving cause of shear thickening.

Norman Wagner, a chemical engineer at the University of Delaware, says that research into frictional interactions like this is important, but notes that he isn’t completely convinced, as Cohen’s team didn’t measure friction directly (they inferred it was friction from their modeling; they didn’t measure the friction between the particles exactly). He also says that there’s a lot of data in the field already that strongly indicates hydrodynamic clusters as the cause for shear thickening.

Wagner and his team are working on a NASA funded project to improve space suits so that micrometeorites or other debris can’t puncture them. They have also bent their technology to make padding for helmets and shin guards that would do a better job protecting athletes from harmful impacts. They are even making puncture resistant gloves that would give healthcare workers the same dexterity as current ones but with extra protection against accidental needle sticks.

“It’s a very exciting area,” says Wagner. He’s very interested in designing materials that automatically protect someone, without robotics or power. …

I guess that in 2015 Wagner didn’t realize his work would also end up in a 2018 sports bra.
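For the more quantitatively minded, here’s a minimal Python sketch of the standard phenomenological description of this behaviour, the Ostwald–de Waele power-law model. It deliberately sidesteps the hydrodynamic-clustering versus friction debate (it describes what the fluid does, not why), and the K and n values below are illustrative assumptions, not measured properties of oobleck or of Reebok’s fluid.

```python
import numpy as np

# Ostwald-de Waele (power-law) model: apparent viscosity
#   eta = K * (shear rate)^(n - 1)
# n > 1 gives shear thickening (oobleck-like); n < 1 gives shear
# thinning (ketchup-like). K and n here are assumed for illustration.

K = 5.0   # consistency index (Pa*s^n), illustrative
n = 1.8   # flow behaviour index; > 1 means thickening

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])  # 1/s: slow stir -> hard hit
eta = K * shear_rates ** (n - 1.0)               # apparent viscosity, Pa*s

for rate, visc in zip(shear_rates, eta):
    print(f"shear rate {rate:7.1f} 1/s -> apparent viscosity {visc:8.2f} Pa*s")
# Viscosity climbs with shear rate: stir slowly and it flows like honey,
# hit it hard and it stiffens -- the behaviour described in the quote above.
```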

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.


About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
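Out of curiosity, here is a back-of-the-envelope Python sketch (mine, not the researchers’) of how much redirection those per-blink numbers could add up to over a minute of walking, using only the figures quoted above.

```python
# Back-of-the-envelope: how much unnoticed redirection can blinks buy?
# Assumptions taken from the numbers quoted in the release above:
# 10-20 blinks per minute, 2-5 degrees of rotation and 4-9 cm of
# translation hidden inside each blink.

blinks_per_minute = 15           # midpoint of the quoted 10-20 range
rotation_per_blink_deg = (2, 5)  # imperceptible rotation per blink
translation_per_blink_cm = (4, 9)

for label, (lo, hi) in [("rotation (deg/min)", rotation_per_blink_deg),
                        ("translation (cm/min)", translation_per_blink_cm)]:
    print(f"{label}: {lo * blinks_per_minute} to {hi * blinks_per_minute}")
# rotation (deg/min): 30 to 75 -- roughly a slow full turn every few minutes
# translation (cm/min): 60 to 135 -- all of it hidden inside blinks
```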

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
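For readers wondering what “blends between the thousands of light field images” might look like in practice, here is a toy Python sketch of the general idea: weight the captured views nearest in angle to the desired viewpoint. To be clear, this is a generic simplification of light field view interpolation, not Google’s actual pipeline (which also uses per-image depth maps and VP9-compressed data); all names and numbers are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for captured camera directions: unit vectors on a sphere,
# mimicking the thousands of images captured by the spherical rigs.
cams = rng.normal(size=(1000, 3))
cams /= np.linalg.norm(cams, axis=1, keepdims=True)

def blend_weights(view_dir, cams, k=4):
    """Pick the k captured views closest in angle and weight them."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos_sim = cams @ view_dir           # cosine of the angular distance
    nearest = np.argsort(-cos_sim)[:k]  # k most aligned captured views
    w = np.maximum(cos_sim[nearest], 0.0)
    return nearest, w / w.sum()         # normalized blending weights

idx, w = blend_weights(np.array([0.0, 0.0, 1.0]), cams)
print(idx, w)  # which captured images to blend, and how strongly
```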

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
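To give a flavour of what “figuring out the vibrations of each object” can mean computationally, here is a minimal Python sketch of classic modal synthesis, where each vibration mode rings as a damped sinusoid. The mode frequencies, dampings, and amplitudes are invented for illustration; the Stanford system derives them, and the radiated pressure waves, from geometry and motion, which this toy does not attempt.

```python
import numpy as np

sample_rate = 44100
t = np.arange(int(0.5 * sample_rate)) / sample_rate  # half a second of audio

# (frequency Hz, damping 1/s, amplitude) for a few hypothetical modes of
# a struck object; real systems compute these from the object's geometry.
modes = [(440.0, 6.0, 1.0), (1230.0, 12.0, 0.5), (2730.0, 25.0, 0.25)]

# Each mode is an exponentially decaying sinusoid; the sound is their sum.
sound = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
            for f, d, a in modes)
sound /= np.max(np.abs(sound))  # normalize to [-1, 1]

# 'sound' is now a waveform you could write out, e.g. with
# scipy.io.wavfile.write("ping.wav", sample_rate, sound.astype(np.float32))
```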

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globenewswire.com, also available on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Why don’t you CRISPR yourself?

It must have been quite the conference. Josiah Zayner plunged a needle into himself and claimed to have changed his DNA (deoxyribonucleic acid) while giving his talk. (*Segue: There is some Canadian content if you keep reading.*) From an Oct. 10, 2017 article by Adele Peters for Fast Company (Note: A link has been removed),

“What we’ve got here is some DNA, and this is a syringe,” Josiah Zayner tells a room full of synthetic biologists and other researchers. He fills the needle and plunges it into his skin. “This will modify my muscle genes and give me bigger muscles.”

Zayner, a biohacker–basically meaning he experiments with biology in a DIY lab rather than a traditional one–was giving a talk called “A Step-by-Step Guide to Genetically Modifying Yourself With CRISPR” at the SynBioBeta conference in San Francisco, where other presentations featured academics in suits and the young CEOs of typical biotech startups. Unlike the others, he started his workshop by handing out shots of scotch and a booklet explaining the basics of DIY [do-it-yourself] genome engineering.

If you want to genetically modify yourself, it turns out, it’s not necessarily complicated. As he offered samples in small baggies to the crowd, Zayner explained that it took him about five minutes to make the DNA that he brought to the presentation. The vial held Cas9, an enzyme that snips DNA at a particular location targeted by guide RNA, in the gene-editing system known as CRISPR. In this case, it was designed to knock out the myostatin gene, which produces a hormone that limits muscle growth and lets muscles atrophy. In a study in China, dogs with the edited gene had double the muscle mass of normal dogs. If anyone in the audience wanted to try it, they could take a vial home and inject it later. Even rubbing it on skin, Zayner said, would have some effect on cells, albeit limited.
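For anyone fuzzy on what “targeted by guide RNA” means, here is a toy Python illustration (emphatically not a lab protocol): Cas9 cuts where a roughly 20-nucleotide guide sequence matches the DNA immediately upstream of an “NGG” PAM site. Both sequences below are invented for the example and have nothing to do with the real myostatin gene.

```python
# Toy illustration of CRISPR targeting: find positions where a 20-nt
# guide sequence matches the DNA and is followed by an NGG PAM site.
# The sequences are made up; this is not the myostatin gene.

dna   = "TTACCCGGATTCAGCTGACCTGAAGAGGCCATAACGT"
guide = "CGGATTCAGCTGACCTGAAG"  # hypothetical 20-nt protospacer

def find_cut_sites(dna, guide):
    """Return indices where the guide matches and an NGG PAM follows."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        pam = dna[i + len(guide): i + len(guide) + 3]
        if dna[i:i + len(guide)] == guide and pam[1:] == "GG":
            # Cas9 cuts about 3 bases upstream of the PAM
            sites.append(i + len(guide) - 3)
    return sites

print(find_cut_sites(dna, guide))  # -> [22] for this made-up sequence
```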

Peters goes on to note that Zayner has a PhD in molecular biology and biophysics and worked for NASA (US National Aeronautics and Space Administration). Zayner’s Wikipedia entry fills in a few more details (Note: Links have been removed),

Zayner graduated from the University of Chicago with a Ph.D. in biophysics in 2013. He then spent two years as a researcher at NASA’s Ames Research Center,[2] where he worked on Martian colony habitat design. While at the agency, Zayner also analyzed speech patterns in online chat, Twitter, and books, and found that language on Twitter and online chat is closer to how people talk than to how they write.[3] Zayner found NASA’s scientific work less innovative than he expected, and upon leaving in January 2016, he launched a crowdfunding campaign to provide CRISPR kits to let the general public experiment with editing bacterial DNA. He also continued his grad school business, The ODIN, which sells kits to let the general public experiment at home. As of May 2016, The ODIN had four employees and operates out of Zayner’s garage.[2]

He refers to himself as a biohacker and believes in the importance of letting the general public participate in scientific experimentation, rather than leaving it segregated to labs.[2][4][1] Zayner found the biohacking community exclusive and hierarchical, particularly in the types of people who decide what is “safe”. He hopes that his projects can let even more people experiment in their homes. Other scientists responded that biohacking is inherently privileged, as it requires leisure time and money, and that deviance from the safety rules of concern would lead to even harsher regulations for all.[5] Zayner’s public CRISPR kit campaign coincided with wider scrutiny over genetic modification. Zayner maintained that these fears were based on misunderstandings of the product, as genetic experiments on yeast and bacteria cannot produce a viral epidemic.[6][7] In April 2015, Zayner ran a hoax on Craigslist to raise awareness about the future potential of forgery in forensics genetics testing.[8]

In February 2016, Zayner performed a full body microbiome transplant on himself, including a fecal transplant, to experiment with microbiome engineering and see if he could cure himself of gastrointestinal and other health issues. The microbiome from the donor’s feces successfully transplanted into Zayner’s gut, according to DNA sequencing done on samples.[2] This experiment was documented by filmmakers Kate McLean and Mario Furloni and turned into the short documentary film Gut Hack.[9]

In December 2016, Zayner created a fluorescent beer by engineering yeast to contain the green fluorescent protein from jellyfish. Zayner’s company, The ODIN, released kits to allow people to create their own engineered fluorescent yeast, and this was met with some controversy as the FDA declared the green fluorescent protein can be seen as a color additive.[10] Zayner views the kit as a way that individuals can use genetic engineering to create things in their everyday lives.[11]

I found the video for Zayner’s now completed crowdfunding campaign,

I also found The ODIN website (mentioned in the Wikipedia essay) where they claim to be selling various gene editing and gene engineering kits including the CRISPR editing kits mentioned in Peters’ article,

In 2016, he [Zayner] sold $200,000 worth of products, including a kit for yeast that can be used to brew glowing bioluminescent beer, a kit to discover antibiotics at home, and a full home lab that’s roughly the cost of a MacBook Pro. In 2017, he expects to double sales. Many kits are simple, and most buyers probably aren’t using the supplies to attempt to engineer themselves (many kits go to classrooms). But Zayner also hopes that as people using the kits gain genetic literacy, they experiment in wilder ways.

Zayner sells a full home biohacking lab that’s roughly the cost of a MacBook Pro. [Photo: The ODIN]

He questions whether traditional research methods, like randomized controlled trials, are the only way to make discoveries, pointing out that in newer personalized medicine (such as immunotherapy for cancer, which is personalized for each patient), a sample size of one person makes sense. At his workshop, he argued that people should have the choice to self-experiment if they want to; we also change our DNA when we drink alcohol or smoke cigarettes or breathe in dirty city air. Other society-sanctioned activities are more dangerous. “We sacrifice maybe a million people a year to the car gods,” he said. “If you ask someone, ‘Would you get rid of cars?’–no.” …

US researchers, both conventional scientists and DIY types such as Zayner, are not the only ones editing genes. The Chinese study mentioned in Peters’ article was written up in an Oct. 19, 2015 article by Antonio Regalado for the MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),

Scientists in China say they are the first to use gene editing to produce customized dogs. They created a beagle with double the amount of muscle mass by deleting a gene called myostatin.

The dogs have “more muscles and are expected to have stronger running ability, which is good for hunting, police (military) applications,” Liangxue Lai, a researcher with the Key Laboratory of Regenerative Biology at the Guangzhou Institutes of Biomedicine and Health, said in an e-mail.

Lai and 28 colleagues reported their results last week in the Journal of Molecular Cell Biology, saying they intend to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy. “The goal of the research is to explore an approach to the generation of new disease dog models for biomedical research,” says Lai. “Dogs are very close to humans in terms of metabolic, physiological, and anatomical characteristics.”

Lai said his group had no plans to breed the extra-muscular beagles as pets. Other teams, however, could move quickly to commercialize gene-altered dogs, potentially editing their DNA to change their size, enhance their intelligence, or correct genetic illnesses. A different Chinese institute, BGI, said in September it had begun selling miniature pigs, created via gene editing, for $1,600 each as novelty pets.

People have been influencing the genetics of dogs for millennia. By at least 36,000 years ago, early humans had already started to tame wolves and shape the companions we have today. Charles Darwin frequently cited dog breeding in The Origin of Species to demonstrate how evolution gradually occurs by a process of selection. With CRISPR, however, evolution is no longer gradual or subject to chance. It is immediate and under human control.

It is precisely that power that is stirring wide debate and concern over CRISPR. Yet at least some researchers think that gene-edited dogs could put a furry, friendly face on the technology. In an interview this month, George Church, a professor at Harvard University who leads a large effort to employ CRISPR editing, said he thinks it will be possible to augment dogs by using DNA edits to make them live longer or simply make them smarter.

Church said he also believed the alteration of dogs and other large animals could open a path to eventual gene editing of people. “Germline editing of pigs or dogs offers a line into it,” he said. “People might say, ‘Hey, it works.’ ”

In the meantime, Zayner’s ideas are certainly thought provoking. I’m not endorsing either his products or his ideas, but it should be noted that early science pioneers such as Humphry Davy and others experimented on themselves. For anyone unfamiliar with Davy (from the Humphry Davy Wikipedia entry; Note: Links have been removed),

Sir Humphry Davy, 1st Baronet PRS MRIA FGS (17 December 1778 – 29 May 1829) was a Cornish chemist and inventor,[1] who is best remembered today for isolating a series of substances for the first time: potassium and sodium in 1807 and calcium, strontium, barium, magnesium and boron the following year, as well as discovering the elemental nature of chlorine and iodine. He also studied the forces involved in these separations, inventing the new field of electrochemistry. Berzelius called Davy’s 1806 Bakerian Lecture On Some Chemical Agencies of Electricity[2] “one of the best memoirs which has ever enriched the theory of chemistry.”[3] He was a Baronet, President of the Royal Society (PRS), Member of the Royal Irish Academy (MRIA), and Fellow of the Geological Society (FGS). He also invented the Davy lamp and a very early form of incandescent light bulb.

Canadian content*

A Nov. 11, 2017 posting on the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks blog notes that self-experimentation has a long history and goes on to describe Zayner’s and others’ biohacking exploits before describing the legality of biohacking in Canada,

With biohackers entering into the space traditionally held by scientists and clinicians, it begs questions. Professor Timothy Caulfield, a Canada research chair in health, law and policy at the University of Alberta, says when he hears of somebody giving themselves biohacked gene therapy, he wonders: “Is this legal? Is this safe? And if it’s not safe, is there anything that we can do about regulating it? And to be honest with you that’s a tough question and I think it’s an open question.”

In Canada, Caulfield says, Health Canada focuses on products. “You have to have something that you are going to regulate or you have to have something that’s making health claims. So if there is a product that is saying I can cure X, Y, or Z, Health Canada can say, ‘Well let’s make sure the science really backs up that claim.’ The problem with these do-it-yourself approaches is there isn’t really a product. You know these people are experimenting on themselves with something that may or may not be designed for health purposes.”

According to Caulfield, if you could buy a gene therapy kit that was being marketed to you to biohack yourself, that would be different. “Health Canada could jump in. But right here that’s not the case,” he says.

There are places in the world that do regulate biohacking, says Caulfield. “Germany, for example, they have specific laws for it. And here in Canada we do have a regulatory framework that says that you cannot do gene therapy that will alter the germ line. In other words, you can’t do gene therapy or any kind of genetic editing that will create a change that you will pass on to your offspring. So that would be illegal, but that’s not what’s happening here. And I don’t think there’s a regulatory framework that adequately captures it.”

Infectious disease and policy experts aren’t that concerned yet about the possibility of a biohacker unleashing a genetically modified super germ into the population.

“I think in the future that could be a problem,” says Caulfield, “but this isn’t something that would be easy to do in your garage. I think it’s complicated science. But having said that, the science is moving quickly. We need to think about how we are going to control the potential harms.”

You can find out more about the ‘wild’ people (mostly men) of early science in Richard Holmes’ 2008 book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science.

Finally, should you be interested in connecting with synthetic biology enthusiasts, entrepreneurs, and others, SynBioBeta is more than a conference; it’s also an activity hub.

ETA January 25, 2018 (five minutes later): There are some CRISPR/CAS9 events taking place in Toronto, Canada on January 24 and 25, 2018. One is a workshop with Portuguese artist, Marta de Menezes, and the other is a panel discussion. See my January 10, 2018 posting for more details.

*’Segue: There is some Canadian content if you keep reading.’ and ‘Canadian content’ added January 25, 2018 six minutes after first publication.

ETA February 20, 2018: Sarah Zhang’s Feb. 20, 2018 article for The Atlantic revisits Josiah Zayner’s decision to inject himself with CRISPR,

When Josiah Zayner watched a biotech CEO drop his pants at a biohacking conference and inject himself with an untested herpes treatment, he realized things had gone off the rails.

Zayner is no stranger to stunts in biohacking—loosely defined as experiments, often on the self, that take place outside of traditional lab spaces. You might say he invented their latest incarnation: He’s sterilized his body to “transplant” his entire microbiome in front of a reporter. He’s squabbled with the FDA about selling a kit to make glow-in-the-dark beer. He’s extensively documented attempts to genetically engineer the color of his skin. And most notoriously, he injected his arm with DNA encoding for CRISPR that could theoretically enhance his muscles—in between taking swigs of Scotch at a live-streamed event during an October conference. (Experts say—and even Zayner himself in the live-stream conceded—it’s unlikely to work.)

So when Zayner saw Ascendance Biomedical’s CEO injecting himself on a live-stream earlier this month, you might say there was an uneasy flicker of recognition.

“Honestly, I kind of blame myself,” Zayner told me recently. He’s been in a soul-searching mood; he recently had a kid and the backlash to the CRISPR stunt in October [2017] had been getting to him. “There’s no doubt in my mind that somebody is going to end up hurt eventually,” he said.

Yup, it’s one of the reasons for rules; people take things too far. The trick is figuring out how to achieve a balance between risk-taking and recklessness.

Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins for gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strew them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis, according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases, which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
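
Since the figures in these excerpts are easy to check, here’s a minimal Python sketch that redoes the arithmetic. It models a neutron star as a uniform ball at roughly nuclear density; both the density value and the uniform-ball assumption are simplifications of mine, not anything from the articles quoted above.

```python
# Back-of-envelope check of the neutron star figures quoted above.
# Assumes a uniform-density ball at roughly nuclear density -- a crude
# model (real neutron stars have layered interiors).
import math

M_SUN = 1.989e30       # solar mass, kg
RHO_NUCLEAR = 2.3e17   # approximate nuclear matter density, kg/m^3

# M = (4/3) * pi * r^3 * rho  =>  r = (3M / (4 * pi * rho))^(1/3)
r = (3 * M_SUN / (4 * math.pi * RHO_NUCLEAR)) ** (1 / 3)
print(f"radius   ~ {r / 1e3:.1f} km")             # ~12.7 km
print(f"diameter ~ {2 * r / 1609.34:.1f} miles")  # ~16 miles: 'less than 20 miles wide'

# Density implied by the Warwick figure (a solar mass, 10 km across):
r_warwick = 5e3  # metres (half of '10 kilometres across')
rho_implied = M_SUN / ((4 / 3) * math.pi * r_warwick ** 3)
print(f"implied density ~ {rho_implied:.1e} kg/m^3")  # ~3.8e18 kg/m^3
```

Run as written, the numbers line up with both the ‘less than 20 miles wide’ claim in Spencer’s piece and the extreme density described in the Warwick release.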

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

Yarns that harvest and generate energy

The researchers involved in this work are confident enough about their prospects that they will be patenting their research into yarns. From an August 25, 2017 news item on Nanowerk,

An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.

In a study published in the Aug. 25 [2017] issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.

“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.

An August 25, 2017 University of Texas at Dallas news release, which originated the news item, expands on the theme,

Yarns Based on Nanotechnology

The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.

In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.

“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”

When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.

Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.

“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”

Lab Tests Show Potential Applications

In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.

To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.

“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”

The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.

“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”

Electricity from Ocean Waves

“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”

In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.

Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent, thereby generating measured electricity.

Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.

“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”

Researchers from the UT Dallas Erik Jonsson School of Engineering and Computer Science and Lintec of America’s Nano-Science & Technology Center also participated in the study.

The investigators have filed a patent on the technology.

In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.
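
Baughman’s closing claim about the Internet of Things invites a back-of-envelope check. In the Python sketch below, the energy cost per transmitted bit is an assumption of mine (low-power radios are often quoted in the ballpark of a microjoule per bit), not a figure from the paper, so treat this as a plausibility check rather than a reproduction of the authors’ calculation.

```python
# Plausibility check of the '31 milligrams ... 2-kilobyte packet over a
# 100-meter radius every 10 seconds' claim quoted above.
PACKET_BITS = 2 * 1024 * 8   # a 2-kilobyte packet
ENERGY_PER_BIT = 1e-6        # J/bit -- my assumption, not from the paper
PERIOD_S = 10.0              # seconds between transmissions
YARN_MASS_KG = 31e-6         # 31 milligrams of harvester yarn

packet_energy = PACKET_BITS * ENERGY_PER_BIT   # ~16.4 mJ per packet
avg_power = packet_energy / PERIOD_S           # ~1.6 mW average
specific_power = avg_power / YARN_MASS_KG      # ~53 W/kg required

print(f"packet energy           ~ {packet_energy * 1e3:.1f} mJ")
print(f"average power           ~ {avg_power * 1e3:.2f} mW")
print(f"required specific power ~ {specific_power:.0f} W/kg")
# ~53 W/kg average sits well below the 250 W/kg peak reported above,
# so the claim is at least self-consistent under these assumptions.
```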

Here’s a link to and a citation for the paper,

Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771

This paper is behind a paywall.

Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,

“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.

This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—with sheets of rubber with coated electrodes on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.

“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electric chemical cell to do this,” says Haines. “So we’re not changing double layer capacitance in normal parallel plate capacitors. But we’re actually changing the electric chemical capacitance on the surface of a super capacitor yarn.”

While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.

Dexter asks good questions and his post is very informative.
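
For anyone who wants to see the principle Haines describes in miniature, here’s a toy Python model of a capacitance-change harvester: hold the charge fixed, shrink the capacitance, and the stored energy E = Q²/2C rises, with the increase available for harvesting. All component values are illustrative choices of mine, not measured twistron numbers.

```python
# Toy model of the harvesting mechanism Haines describes: stretching
# the coiled yarn lowers its capacitance; at fixed charge the voltage
# and stored energy E = Q^2 / (2C) both rise.
def stored_energy(q, c):
    """Energy (J) on a capacitor holding charge q (C) at capacitance c (F)."""
    return q ** 2 / (2 * c)

C_RELAXED = 1.0e-3    # farads; supercapacitor-scale, illustrative only
V_BIAS = 0.5          # volts; the electrolyte 'self-charges' the yarn
Q = C_RELAXED * V_BIAS          # charge collected at rest, Q = C * V

C_STRETCHED = 0.7e-3  # stretching reduces capacitance (illustrative)

gain = stored_energy(Q, C_STRETCHED) - stored_energy(Q, C_RELAXED)
print(f"voltage: {Q / C_RELAXED:.2f} V -> {Q / C_STRETCHED:.2f} V when stretched")
print(f"harvestable energy per stretch ~ {gain * 1e3:.3f} mJ")
```

The same bookkeeping applies on every cycle, which is why rapid stretching (30 times a second in the tests described above) translates into usable power.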

Sounding out the TRAPPIST-1 planetary system

It’s been a while since a data sonification story has come this way. Like my first posting on the topic (Feb. 7, 2014), this is another astrophysics ‘piece of music’. From the University of Toronto (Canada) and Thought Café (a Canadian animation studio),

For those who’d like a little text, here’s more from a May 10, 2017 University of Toronto news release (also on EurekAlert) by Don Campbell,

When NASA announced its discovery of the TRAPPIST-1 system back in February [2017] it caused quite a stir, and with good reason. Three of its seven Earth-sized planets lay in the star’s habitable zone, meaning they may harbour suitable conditions for life.

But one of the major puzzles from the original research describing the system was that it seemed to be unstable.

“If you simulate the system, the planets start crashing into one another in less than a million years,” says Dan Tamayo, a postdoc at U of T Scarborough’s Centre for Planetary Science.

“This may seem like a long time, but it’s really just an astronomical blink of an eye. It would be very lucky for us to discover TRAPPIST-1 right before it fell apart, so there must be a reason why it remains stable.”

Tamayo and his colleagues seem to have found a reason why. In research published in the journal Astrophysical Journal Letters, they describe the planets in the TRAPPIST-1 system as being in something called a “resonant chain” that can strongly stabilize the system.

In resonant configurations, planets’ orbital periods form ratios of whole numbers. It’s a very technical principle, but a good example is how Neptune orbits the Sun three times in the amount of time it takes Pluto to orbit twice. This is a good thing for Pluto because otherwise it wouldn’t exist. Since the two planets’ orbits intersect, if things were random they would collide, but because of resonance, the locations of the planets relative to one another keeps repeating.

“There’s a rhythmic repeating pattern that ensures the system remains stable over a long period of time,” says Matt Russo, a post-doc at the Canadian Institute for Theoretical Astrophysics (CITA) who has been working on creative ways to visualize the system.

TRAPPIST-1 takes this principle to a whole other level with all seven planets being in a chain of resonances. To illustrate this remarkable configuration, Tamayo, Russo and colleague Andrew Santaguida created an animation in which the planets play a piano note every time they pass in front of their host star, and a drum beat every time a planet overtakes its nearest neighbour.

Because the planets’ periods are simple ratios of each other, their motion creates a steady repeating pattern that is similar to how we play music. Simple frequency ratios are also what makes two notes sound pleasing when played together.

Speeding up the planets’ orbital frequencies into the human hearing range produces an astrophysical symphony of sorts, but one that’s playing out more than 40 light years away.

“Most planetary systems are like bands of amateur musicians playing their parts at different speeds,” says Russo. “TRAPPIST-1 is different; it’s a super-group with all seven members synchronizing their parts in nearly perfect time.”

But even synchronized orbits don’t necessarily survive very long, notes Tamayo. For technical reasons, chaos theory also requires precise orbital alignments to ensure systems remain stable. This can explain why the simulations done in the original discovery paper quickly resulted in the planets colliding with one another.

“It’s not that the system is doomed, it’s that stable configurations are very exact,” he says. “We can’t measure all the orbital parameters well enough at the moment, so the simulated systems kept resulting in collisions because the setups weren’t precise.”

In order to overcome this Tamayo and his team looked at the system not as it is today, but how it may have originally formed. When the system was being born out of a disk of gas, the planets should have migrated relative to one another, allowing the system to naturally settle into a stable resonant configuration.

“This means that early on, each planet’s orbit was tuned to make it harmonious with its neighbours, in the same way that instruments are tuned by a band before it begins to play,” says Russo. “That’s why the animation produces such beautiful music.”

The team tested the simulations using the supercomputing cluster at the Canadian Institute for Theoretical Astrophysics (CITA) and found that the majority they generated remained stable for as long as they could possibly run it. This was about 100 times longer than it took for the simulations in the original research paper describing TRAPPIST-1 to go berserk.

“It seems somehow poetic that this special configuration that can generate such remarkable music can also be responsible for the system surviving to the present day,” says Tamayo.
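
For readers who want to play with the sonification idea, here’s a small Python sketch: it scales each planet’s true orbital frequency up into the audible band and prints the adjacent-period ratios that form the resonant chain. The periods are approximate published values, and the speed-up factor is an arbitrary choice of mine, not the one used in the U of T animation.

```python
# Map TRAPPIST-1 orbital frequencies into the audio band and show the
# near-integer period ratios behind the 'resonant chain'.
TRAPPIST_PERIODS_DAYS = {   # planets b through h, approximate values
    "b": 1.511, "c": 2.422, "d": 4.050,
    "e": 6.100, "f": 9.207, "g": 12.353, "h": 18.767,
}
SPEED_UP = 1e8   # arbitrary factor chosen so the pitches are audible
DAY_S = 86_400   # seconds per day

for name, period in TRAPPIST_PERIODS_DAYS.items():
    f_orbital = 1.0 / (period * DAY_S)   # true orbital frequency, Hz
    f_audio = f_orbital * SPEED_UP       # sped-up, audible pitch, Hz
    print(f"planet {name}: ~{f_audio:6.1f} Hz")

# Adjacent-period ratios stay close to simple fractions
# (~8:5, 5:3, 3:2, 3:2, 4:3, 3:2), the hallmark of a resonant chain.
names = list(TRAPPIST_PERIODS_DAYS)
for inner, outer in zip(names, names[1:]):
    ratio = TRAPPIST_PERIODS_DAYS[outer] / TRAPPIST_PERIODS_DAYS[inner]
    print(f"{outer}/{inner} period ratio ~ {ratio:.3f}")
```

Because those ratios sit near simple fractions, the scaled pitches land close to consonant musical intervals, which is exactly what the animation exploits.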

Here’s a link to and a citation for the paper,

Convergent Migration Renders TRAPPIST-1 Long-lived by Daniel Tamayo, Hanno Rein, Cristobal Petrovich, and Norman Murray. The Astrophysical Journal Letters, Volume 840, Number 2 https://doi.org/10.5281/zenodo.496153 Published 2017 May 10

© 2017. The American Astronomical Society. All rights reserved.

This paper is open access.