Scientists have long been puzzled by the existence of so-called “buckyballs” – complex carbon molecules with a soccer-ball-like structure – throughout interstellar space. Now, a team of researchers from the University of Arizona has proposed a mechanism for their formation in a study published in the Astrophysical Journal Letters.
Carbon 60, or C60 for short, officially named Buckminsterfullerene, consists of spherical molecules of 60 carbon atoms organized in five-membered and six-membered rings. The name “buckyball” derives from their resemblance to the architectural work of Richard Buckminster Fuller [better known as Buckminster Fuller], who designed many dome structures that look similar to C60. Their formation was thought to be possible only in lab settings until their detection in space challenged this assumption.
For decades, people thought interstellar space was sprinkled with lightweight molecules only: mostly single atoms, two-atom molecules and the occasional nine- or 10-atom molecule. That assumption held until massive C60 and C70 molecules were detected a few years ago.
Researchers were also surprised to find that they were composed of pure carbon. In the lab, C60 is made by blasting together pure carbon sources, such as graphite. In space, C60 was detected in planetary nebulae, which are the debris of dying stars. This environment has about 10,000 hydrogen molecules for every carbon molecule.
“Any hydrogen should destroy fullerene synthesis,” said astrobiology and chemistry doctoral student Jacob Bernal, lead author of the paper. “If you have a box of balls, and for every 10,000 hydrogen balls you have one carbon, and you keep shaking them, how likely is it that you get 60 carbons to stick together? It’s very unlikely.”
Bernal and his co-authors began investigating the C60 mechanism after realizing that the transmission electron microscope, or TEM, housed at the Kuiper Materials Imaging and Characterization Facility at UArizona, was able to simulate the planetary nebula environment fairly well.
The TEM, which is funded by the National Science Foundation and NASA, has a serial number of “1” because it is the first of its kind in the world with its exact configuration. Its 200,000-volt electron beam can probe matter down to 78 picometers – a scale fine enough to resolve individual atoms. It operates under a vacuum with extremely low pressures, and this pressure, or lack thereof, is very close to the pressure in circumstellar environments.
“It’s not that we necessarily tailored the instrument to have these specific kinds of pressures,” said Tom Zega, associate professor in the UArizona Lunar and Planetary Lab and study co-author. “These instruments operate at those kinds of very low pressures not because we want them to be like stars, but because molecules of the atmosphere get in the way when you’re trying to do high-resolution imaging with electron microscopes.”
The team partnered with the U.S. Department of Energy’s Argonne National Lab, near Chicago, which has a TEM capable of studying radiation responses of materials. They placed silicon carbide, a common form of dust made in stars, in the low-pressure environment of the TEM, subjected it to temperatures up to 1,830 degrees Fahrenheit and irradiated it with high-energy xenon ions.
The sample was then brought back to Tucson so researchers could utilize the higher resolution and better analytical capabilities of the UArizona TEM. They knew their hypothesis would be validated if they observed the silicon shedding and exposing pure carbon.
“Sure enough, the silicon came off, and you were left with layers of carbon in six-membered ring sets called graphite,” said co-author Lucy Ziurys, Regents Professor of astronomy, chemistry and biochemistry. “And then when the grains had an uneven surface, five-membered and six-membered rings formed and made spherical structures matching the diameter of C60. So, we think we’re seeing C60.”
This work suggests that C60 is derived from the silicon carbide dust made by dying stars, which is then hit by high temperatures, shockwaves and high-energy particles, leaching silicon from the surface and leaving carbon behind. These big molecules are dispersed because dying stars eject their material into the interstellar medium – the spaces in between stars – thus accounting for their presence outside of planetary nebulae. Buckyballs are very stable to radiation, allowing them to survive for billions of years if shielded from the harsh environment of space.
“The conditions in the universe where we would expect complex things to be destroyed are actually the conditions that create them,” Bernal said, adding that the implications of the findings are endless.
“If this mechanism is forming C60, it’s probably forming all kinds of carbon nanostructures,” Ziurys said. “And if you read the chemical literature, these are all thought to be synthetic materials only made in the lab, and yet, interstellar space seems to be making them naturally.”
If the findings are any sign, it appears that there is more the universe has to tell us about how chemistry truly works.
I have two links and citations. The first is for the 2019 paper being described here and the second is for the original 1985 paper about C60.
I don’t know what’s happened but either there are way more science type events or I’ve changed some pattern of internet use and am stumbling across more of them. I vote for the former.
In any event, this is the third ’roundup’ of science type things and/or events that I’ve published this October 2019. *ETA October 23, 2019: The events are in one or other of the science museums in Ottawa, the internships (part-time) are in Washington, DC, and Sci_Tunes is aimed at teachers in the UK although I imagine anyone is free to enjoy them.*
All three of the museums under the Ingenium umbrella (formerly the Canada Science and Technology Museums Corporation) have events, and Ingenium itself is announcing a science type thing (a video game).
AI (artificial intelligence) and climate change at the Canada Museum of Science and Technology
From an October 16, 2019 Ingenium announcement (received via email),
Canada Science and Technology Museum
Oct. 24, 2019 (6:30 p.m. – 8:30 p.m.)
Fee: $10 for non-members, $7 for members and students. Registration required.
Language: English presentation with simultaneous translation into French, and a bilingual Q & A.
Climate Change and Artificial Intelligence – two topics essential to the future of our society, each with their own inherent challenges. What if they could work together for the greater good?
Join invited speakers from Watergeeks.io and BluWave AI for a discussion that will explore the potential to use AI to reduce greenhouse gas emissions, build climate resilience, and help Canada lead in the clean tech economy. Don’t miss this essential evening, the first in the thematic series “Living in the Machine Age.”
For anyone who may be confused about the museum name (as I was for so very long): The corporation is the governing entity for three museums, Canada Science and Technology, Canada Agriculture and Food Museum, and Canada Aviation and Space Museum. Changing the corporate name from Canada Science and Technology Museums Corporation to Ingenium was welcome news (to me, if no one else).
Sky High Magic at the Canada Aviation and Space Museum
From an October 16, 2019 Ingenium announcement (received via email),
Canada Aviation and Space Museum
Jan. 5, 2020, Feb. 17, 2020, and March 8, 2020
Fee: $8 per ticket, $6 for members (with the discount code)
Mark your calendar…the Sky High Magic Series is coming back to the Canada Aviation and Space Museum! With shows running through March 2020, this year’s line up features talented, high-energy magicians who will dazzle you with amazing illusions — mixed with a whirlwind of comedy.
StarBlox Inc. is a mashup of a puzzler and a brawler — in space! Ingenium’s experts worked on the science in the game to immerse players in a realistic world. For example, when playing on the Jovian moon Io, you’ll need to dodge waves of lava. In real life, these can measure over 50 km high!
The game includes 72 unlockable photobook entries about the planets, moons, and asteroids in the game, with images from NASA. Check out the StarBlox Inc. trailer.
I’ve included a copy of the trailer here,
It seems more like an entrepreneur’s starter kit than a game. The overarching theme seems to be that the business of transportation and delivery is a zero sum game. Philosophically, they seem to be espousing capitalism as a form of the ‘strongest survive’ tenet.
OTTAWA, ON, September 30, 2019 – Nintendo Switch players can now join the team at StarBlox Incorporated – where sorting cargo is a contact sport!
A unique mashup of a puzzler and a brawler, StarBlox Inc. was developed for Nintendo Switch by Ingenium – Canada’s Museums of Science and Innovation – in partnership with Seed Interactive. Crafted for scientific accuracy by Ingenium’s expert science advisors and curatorial staff, the game features stunning planetary backdrops which have been meticulously designed to ensure that players are fully immersed in a realistic world.
As players deliver cargo to the far corners of the solar system, each of the planets, moons and asteroids presents new challenges – from black holes to gravity to waves of lava. This interactive game tests players’ quickness and ability to work efficiently to beat the competition. But watch out – the shipping world is fierce! An opponent can sabotage work by stealing blocks, delivering punches or even throwing someone in the incinerator!
Game features include: two local competitive multiplayer modes for up to four people; a single-player “Career mode”; and seventy-two unlockable photobook entries about the planets, moons and asteroids in the game, with images provided by NASA.
StarBlox Inc. is now available for pre-purchase in North America, and will launch in the Nintendo eShop on Nintendo Switch later this fall.
“As science communicators, Ingenium is proud to create digital experiences that reach beyond the four walls of our Museums. This latest foray into the world of gaming is just one of the many ways in which we are leveraging our world class collection and team of experts to engage people regardless of where they are – nationally and internationally.” – Christina Tessier, President and CEO, Ingenium – Canada’s Museums of Science and Innovation
“Seed Interactive creates entertainment with a purpose. As digital innovators we utilize games and interactive technologies to create exciting and accessible education, health and wellness and entertainment products.” – Aaron McLean, Founder and COO, SEED Interactive Inc.
The game was released October 18, 2019.
Hercules and the last straw at the Canada Agriculture and Food Museum
From an October 21, 2019 Ingenium announcement (received via email),
Hercules and The Last Straw
Friday, November 8, 2019
5:30 p.m. to 8 p.m.
Canada Agriculture and Food Museum
We are pleased to invite you to join us at the Canada Agriculture and Food Museum at 5:30 p.m. on Friday, November 8, 2019 for a special evening of art and inspiration.
Ingenium is thrilled to partner with celebrated artist Elaine Goble as she shares her artistic perspectives on the fascinating connection between the STEAM subjects of Science, Technology, Engineering, Arts, and Mathematics, and personal wellness. The Canada Agriculture and Food Museum is one of three museums of Ingenium – Canada’s Museums of Science and Innovation.
For the first time, three of Ms. Goble’s large-sized animal portraits will be on view simultaneously. The vernissage and presentation will be held in the museum’s Learning Centre, where guests will be welcome to view the artworks and meet the artist before the presentation. Ms. Goble’s pieces will be complemented by several other agriculture-related artworks from Ingenium’s national science and technology collection.
Light refreshments and a cash bar will be offered.
In honour of Ms. Goble’s commitment to using art as a catalyst for curiosity and expression, a $20 donation to the museum’s art programming is requested. A tax receipt will be issued to all ticket holders and donors. If you cannot attend but would like to make a donation, please visit the Ingenium Foundation’s website .
Please RSVP using this link before November 3. As space for this event is limited, please reserve early to ensure you don’t miss out on this evening devoted to art, ingenuity, and the human spirit. A reminder with more information, including detailed driving and parking directions, will be emailed to all registrants several days before the event.
Here’s the image they’re using to accompany the publicity for the event,
Presumably, that is either Hercules or a stand-in for him.
Perimeter Institute and ‘Homes away from home’ with Elizabeth Tasker
I tried but these Perimeter Institute (PI) events are very popular and they are already at the wait list stage mere hours after making tickets available. However, there are other ways to attend as you’ll see.
Here’s more from an October 18, 2019 announcement from PI (received via email),
Homes away from home
WEDNESDAY, NOVEMBER 6 at 7 PM ET
Elizabeth Tasker, Japan Aerospace Exploration Agency
Since the discovery of the first exoplanets in the early 1990s, we have detected more than 4,000 worlds beyond our solar system. Many of these are similar in size to our Earth, leading to an obvious question: could any be habitable?
On November 6, astrophysicist and author Elizabeth Tasker will take audiences for a speculative stroll through a few of the alien worlds we’ve discovered in the galaxy, and ponder whether someone else may already call them home.
Here’s a bit more detail from the event’s ticket page,
PI Public Lecture Series:
Title: Homes away from home – the hunt for habitable planets
Since the discovery of the first exoplanets in the early 1990s, we have detected more than 4,000 worlds beyond our solar system. Many of these are similar in size to our Earth, leading to an obvious question: could any be habitable?
For now, we typically only know the size and orbit of these planets, but nothing about their surface conditions. Although we cannot know for sure if these worlds could support life, we can use models to speculate on what we might find there.
In her Nov. 6 talk at Perimeter Institute, astrophysicist and author Elizabeth Tasker will take audiences for a speculative stroll through a few of the alien worlds we’ve discovered in the galaxy, and ponder whether someone else may already call them home.
Elizabeth Tasker is an astrophysicist at the Japan Aerospace Exploration Agency (JAXA). Her research explores the formation of stars and planets using computer simulations. She is particularly interested in how diverse planets might be and what different conditions might exist beyond our Solar System. Elizabeth is also a keen science communicator and writer for the NASA NExSS “Many Worlds” online column. Her popular science book, The Planet Factory, was published in paperback in Canada last April.
Wilson Center Spring 2020 science and technology internships
From an October 21, 2019 Wilson Center announcement (received via email),
The Science and Technology Innovation Program (STIP) is currently welcoming applications for the spring semester of 2020. Our internships are designed to provide current students and recent graduates with practical experience in an environment that successfully combines scholarship with public policy. We recommend exploring our website to determine if your research interests align with current STIP programming around emerging technologies, i.e.:
5G * Artificial Intelligence * Big Data * Citizen Science * Cybersecurity * Disinformation * Marine Debris/Ocean Plastics * One Health * Open Science * Public Communication of Science * Serious Games
We offer two types of internships: graduate-level research and undergraduate-level research internships. All internships must be served in Washington, D.C. and cannot be served remotely. Internships are unpaid unless otherwise stated.
Tools like Foldscope, a $1 microscope, and Arduino, a microcontroller platform for creating customized scientific instrumentation, show how low-cost hardware (including open, proprietary, and mixed solutions) can accelerate research while making it more transparent and participatory.
These tools have the potential to change how, and by whom, science is done, within professional spaces and broader communities. But more work is needed to understand the capacity and future potential for low-cost hardware to accelerate and broaden participation in scientific research. We are seeking a research intern with an interest in exploring democratized scientific research and technological development through the lens of low cost hardware.
Our world is drowning in plastic pollution. Humans produce about 300 million tons of plastic waste every year, equivalent to the weight of the entire human population in 2018. Nowhere is this crisis more visible than in our oceans, which by 2050 could contain more plastic than fish. Further complicating this issue are state actors, such as the United States, the EU and China, who have vastly different approaches to mitigating the problem. The global public needs to understand the impact of plastic pollution and how to end its leakage into the ocean.
We are seeking a research intern with an interest in exploring the ocean plastics issue in a shared role between the China Environment Forum and Science and Technology Innovation Program’s Serious Games Initiative.
The deadline for Spring 2020 internships is November 15, 2019.
Cosmic Shambles and Sci-Tunes
Cosmic Shambles (officially, The Cosmic Shambles Network) is a science blog network that rose from the ashes of the Guardian science blog network. These days they have podcasts, videos, blogs, and more. This latest project is described in an October 21, 2019 posting on the Cosmic Shambles blog,
In association with The Stephen Hawking Foundation and science troubadour Jonny Berliner, The Cosmic Shambles Network is proud to present Sci-Tunes.
Coming soon, a series of educational music videos on GCSE [General Certificate of Secondary Education examinations in the UK] Physics, written and performed by Jonny Berliner, funded by The Stephen Hawking Foundation, and produced by The Cosmic Shambles Network. The full videos will be released in November and accompanied by free resource packs for both teachers and students. …
I have taken a liberty in the title for this piece: strictly speaking, the non-Newtonian goo in the bra isn’t the stuff (oobleck) made of cornstarch and water from your childhood science experiments, but it has many of the same qualities. The material in the Reebok bra, PureMove, is called Shear Thickening Fluid. It was developed at the University of Delaware in 2005 and subsequently employed by NASA (US National Aeronautics and Space Administration) in the suits used by astronauts, as noted in an August 6, 2018 article by Elizabeth Segran for Fast Company, who explains how it came to be used for the latest sports bra,
While the activewear industry floods the market with hundreds of different sports bras every season, research shows that most female consumers are unsatisfied with their sports bra options, and 1 in 5 women avoid exercise altogether because they don’t have a sports bra that fits them properly.
Reebok wants to make that experience a thing of the past. Today, it launches a new bra, the PureMove, that adapts to your movements, tightening up when you’re moving fast and relaxing when you’re not. …
When I visited Reebok’s Boston headquarters, Witek [Danielle Witek, Reebok designer who spearheaded the R&D making the bra possible] handed me a jar of the fluid with a stick in it. When I moved the stick quickly, it seemed to turn into a solid, and when I moved it slowly, it had the texture of honey. Witek and the scientists have incorporated this fluid into a fabric that Reebok dubs “Motion Sense Technology.” The fluid is woven into the textile, so that on the surface, it looks and feels like the synthetic material you might find in any sports bra. But what you can’t see is that the fabric adapts to the body’s shape, the velocity of the breast tissue in motion, and the type and force of movement. It stretches less with high-impact movements and then stretches more during rest and lower intensity activities.
I tested an early version of the PureMove bra a few months ago, before it had even gone into production. I did a high-intensity workout that involved doing jumping jacks and sprints, followed by a cool-down session. The best thing about the bra was that I didn’t notice it at all. I didn’t feel stifled when I was just strolling around the gym, and I didn’t feel like I was unsupported when I was running around. Ultimately, the best bras are the ones that you don’t have to think about so you can focus on getting on with your life.
Since this technology is so new, Reebok had to do a lot of testing to make sure the bra would actually do what it advertised. The company set up a breast biomechanics testing center with the help of the University of Delaware, with 54 separate motion sensors tracking and measuring various parts of a tester’s chest area. This is a far more rigorous approach than most testing facilities in the industry that typically only use between two to four sensors. Over the course of a year, the facility gathered the data required for the scientists and Reebok product designers to develop the PureMove bra.
… If it’s well-received, the logical next step would be to incorporate the Motion Sense Technology into other products, like running tights or swimsuits, since transitioning between compression and looseness is something that we want in all of our sportswear. ..
For anyone interested in the science of non-Newtonian goo, shear thickening fluid, and NASA, there’s a November 24, 2015 article by Lydia Chain for Popular Science (Note: Links have been removed),
There’s an experiment you may have done in high school: When you mix cornstarch with water—a concoction colloquially called oobleck—and give it a stir, it acts like a liquid. But scrape it quickly or hit it hard, and it stiffens up into a solid. If you set the right pace, you can even run on top of a pool of the stuff. This phenomenon is called shear force thickening, and scientists have been trying to understand how it happens for decades.
There are two main theories, and figuring out which is right could affect the way we make things like cement, body armor, concussion preventing helmets, and even spacesuits.
The prevailing theory is that it’s all about the fluid dynamics (the nature of how fluids move) of the liquid and the particles in a solution. As the particles are pushed closer and closer together, it becomes harder to squeeze the liquid out from between them. Eventually, it’s too hard to squeeze out any more fluid and the particles lock up into hydrodynamic clusters, still separated by a thin film of fluid. They then move together, thickening the mixture and forming a solid.
The other idea is that contact forces like friction keep the particles locked together. Under this theory, when force is applied, the particles actually touch. The shearing force and friction keep them pressed together, which makes the solution more solid.
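Whichever theory wins out, the macroscopic behavior both are trying to explain can be pictured with the simple power-law (Ostwald–de Waele) viscosity model, in which apparent viscosity grows with shear rate whenever the exponent n exceeds 1. The sketch below uses arbitrary demonstration constants (my own illustration, not measured values for oobleck or for Reebok's fabric) just to show the qualitative behavior:

```python
# Power-law (Ostwald-de Waele) model of a non-Newtonian fluid.
# Apparent viscosity: eta = K * shear_rate ** (n - 1)
#   n < 1 -> shear thinning, n = 1 -> Newtonian, n > 1 -> shear thickening.
# K and n here are arbitrary demonstration values, not measured data.

def apparent_viscosity(shear_rate, K=1.0, n=1.8):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

# A shear-thickening fluid (n > 1) resists fast stirring far more than slow stirring:
for rate in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {rate:6.1f} 1/s -> viscosity {apparent_viscosity(rate):8.2f} Pa*s")
```

With n > 1 the resistance climbs steeply as the shear rate rises, which matches the stick-in-the-jar description above: move it slowly and it feels like honey, move it fast and it behaves almost like a solid.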
“The debate has been raging, and we’ve been wracking our brains to think of a method to conclusively go one way or the other,” says Itai Cohen, a physicist at Cornell University. He and his team recently ran a new experiment that seems to point to friction as the driving cause of shear thickening.
Norman Wagner, a chemical engineer at the University of Delaware, says that research into frictional interactions like this is important, but notes that he isn’t completely convinced, as Cohen’s team didn’t measure friction directly (they inferred friction from their modeling but did not measure the friction between the particles). He also says that there’s a lot of data in the field already that strongly indicates hydrodynamic clusters as the cause of shear thickening.
Wagner and his team are working on a NASA funded project to improve space suits so that micrometeorites or other debris can’t puncture them. They have also bent their technology to make padding for helmets and shin guards that would do a better job protecting athletes from harmful impacts. They are even making puncture resistant gloves that would give healthcare workers the same dexterity as current ones but with extra protection against accidental needle sticks.
“It’s a very exciting area,” says Wagner. He’s very interested in designing materials that automatically protect someone, without robotics or power. …
I guess that in 2015 Wagner didn’t realize his work would also end up in a 2018 sports bra.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
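The figures quoted above imply a sizeable redirection budget. As back-of-the-envelope arithmetic (my own illustration, not the paper's actual redirection model): multiplying 10 to 20 blinks per minute by 2 to 5 unnoticed degrees per blink yields tens of degrees of cumulative, imperceptible rotation every minute.

```python
# Back-of-the-envelope estimate of how much unnoticed camera rotation
# blink-triggered redirected walking could accumulate, using the figures
# quoted above (10-20 blinks/minute, 2-5 imperceptible degrees per blink).
# Illustrative arithmetic only, not the paper's redirection algorithm.

def redirection_per_minute(blinks_per_min, degrees_per_blink):
    """Total camera rotation (degrees) injectable per minute via blinks."""
    return blinks_per_min * degrees_per_blink

low = redirection_per_minute(10, 2)    # conservative case
high = redirection_per_minute(20, 5)   # optimistic case
print(f"Cumulative unnoticed rotation: {low} to {high} degrees per minute")
```

Even the conservative case of 20 degrees per minute suggests why blink windows are attractive: continuous RDW techniques must spread much smaller rotations over the whole walk.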
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first-ever virtual reality short film at SIGGRAPH 2018, and the hope is that viewers will walk away feeling as connected to the characters as they are to the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers on the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home and, later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018 will be held 12-16 August in Vancouver, British Columbia.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking is getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and to draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, in the Immersive Pavilion, a new space for this year’s conference devoted exclusively to virtual, augmented, and mixed reality.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
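The release doesn't spell out Google's blending algorithm, but the core idea — combining many captured views, with nearby views weighted more heavily — can be sketched in a few lines. This is a toy illustration under simplifying assumptions (angle-based weights only, no depth maps or compression), not Google's renderer:

```python
# Toy light-field blending: weight each captured view by how closely its
# viewing direction matches the ray being rendered. Illustrative only.
import math

def blend_views(ray_dir, views):
    """views: list of (view_dir, color) pairs; directions are unit 3-vectors.
    Returns an RGB color blended with inverse-angle weights."""
    weights = []
    for view_dir, _ in views:
        dot = sum(a * b for a, b in zip(ray_dir, view_dir))
        angle = math.acos(max(-1.0, min(1.0, dot)))
        weights.append(1.0 / (angle + 1e-6))  # closer views dominate
    total = sum(weights)
    return tuple(
        sum(w * color[i] for w, (_, color) in zip(weights, views)) / total
        for i in range(3)
    )

# Two candidate views: one aligned with the ray (red), one 90 degrees off (blue).
c = blend_views((0, 0, 1), [((0, 0, 1), (1.0, 0.0, 0.0)),
                            ((1, 0, 0), (0.0, 0.0, 1.0))])
# The aligned view dominates, so the result is almost pure red.
```

A real-time renderer does this per ray for thousands of views, which is why compact compressed storage and fast lookup of the images matter so much.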
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
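The physics underneath — pressure waves obeying the acoustic wave equation — can be illustrated with a minimal finite-difference time-stepping scheme in one dimension. This is a toy stand-in for the Stanford system's full 3-D acoustic wave simulation, not the authors' method:

```python
# Minimal 1-D acoustic wave equation solver: p_tt = c^2 * p_xx,
# stepped with explicit finite differences. A toy stand-in for the
# full 3-D wave simulation described in the paper.
import math

def simulate_pressure(n=200, steps=100, c=1.0, dx=1.0, dt=0.5):
    r2 = (c * dt / dx) ** 2  # stability requires c*dt/dx <= 1
    # Initial condition: a Gaussian pressure pulse at rest in the middle.
    curr = [math.exp(-((i - n // 2) ** 2) / 20.0) for i in range(n)]
    prev = list(curr)  # zero initial velocity
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(1, n - 1):
            nxt[i] = (2 * curr[i] - prev[i]
                      + r2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr

p = simulate_pressure()
# The initial pulse splits into two pressure waves traveling outward,
# leaving the center nearly quiet — sound radiating away from a source.
```

The real system solves this in 3-D around deforming geometry and couples it to the vibration modes of each object, which is what makes "just press render" sound synthesis expensive but automatic.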
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
Notanee Bourassa knew that what he was seeing in the night sky was not normal. Bourassa, an IT technician in Regina, Canada, trekked outside of his home on July 25, 2016, around midnight with his two younger children to show them a beautiful moving light display in the sky — an aurora borealis. He often sky gazes until the early hours of the morning to photograph the aurora with his Nikon camera, but this was his first expedition with his children. When a thin purple ribbon of light appeared and started glowing, Bourassa immediately snapped pictures until the light particles disappeared 20 minutes later. Having watched the northern lights for almost 30 years since he was a teenager, he knew this wasn’t an aurora. It was something else.
From 2015 to 2016, citizen scientists — people like Bourassa who are excited about a science field but don’t necessarily have a formal educational background — shared 30 reports of these mysterious lights in online forums and with a team of scientists that run a project called Aurorasaurus. The citizen science project, funded by NASA and the National Science Foundation, tracks the aurora borealis through user-submitted reports and tweets.
The Aurorasaurus team, led by Liz MacDonald, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, conferred to determine the identity of this mysterious phenomenon. MacDonald and her colleague Eric Donovan at the University of Calgary in Canada talked with the main contributors of these images, amateur photographers in a Facebook group called Alberta Aurora Chasers, which included Bourassa and lead administrator Chris Ratzlaff. Ratzlaff gave the phenomenon a fun, new name, Steve, and it stuck.
But people still didn’t know what it was.
Scientists’ understanding of Steve changed that night Bourassa snapped his pictures. Bourassa wasn’t the only one observing Steve. Ground-based cameras called all-sky cameras, run by the University of Calgary and University of California, Berkeley, took pictures of large areas of the sky and captured Steve and the auroral display far to the north. From space, ESA’s (the European Space Agency) Swarm satellite just happened to be passing over the exact area at the same time and documented Steve.
For the first time, scientists had ground and satellite views of Steve. Scientists have now learned, despite its ordinary name, that Steve may be an extraordinary puzzle piece in painting a better picture of how Earth’s magnetic fields function and interact with charged particles in space. The findings are published in a study released today in Science Advances.
“This is a light display that we can observe over thousands of kilometers from the ground,” said MacDonald. “It corresponds to something happening way out in space. Gathering more data points on STEVE will help us understand more about its behavior and its influence on space weather.”
The study highlights one key quality of Steve: Steve is not a normal aurora. Auroras occur globally in an oval shape, last hours and appear primarily in greens, blues and reds. Citizen science reports showed Steve is purple with a green picket fence structure that waves. It is a line with a beginning and end. People have observed Steve for 20 minutes to 1 hour before it disappears.
If anything, auroras and Steve are different flavors of ice cream, said MacDonald. They are both created in generally the same way: Charged particles from the Sun interact with Earth’s magnetic field lines.
The uniqueness of Steve is in the details. While Steve goes through the same large-scale creation process as an aurora, it travels along different magnetic field lines than the aurora. All-sky cameras showed that Steve appears at much lower latitudes. That means the charged particles that create Steve connect to magnetic field lines that are closer to Earth’s equator, which is why Steve is often seen in southern Canada.
Perhaps the biggest surprise about Steve appeared in the satellite data. The data showed that Steve comprises a fast-moving stream of extremely hot particles called a sub auroral ion drift, or SAID. Scientists have studied SAIDs since the 1970s but never knew there was an accompanying visual effect. The Swarm satellite recorded information on the charged particles’ speeds and temperatures, but does not have an imager aboard.
“People have studied a lot of SAIDs, but we never knew it had a visible light. Now our cameras are sensitive enough to pick it up and people’s eyes and intellect were critical in noticing its importance,” said Donovan, a co-author of the study. Donovan led the all-sky camera network and his Calgary colleagues lead the electric field instruments on the Swarm satellite.
Steve is an important discovery because of its location in the sub auroral zone, an area at lower latitudes than where most auroras appear that has not been well researched. For one, with this discovery, scientists now know there are unknown chemical processes taking place in the sub auroral zone that can lead to this light emission.
Second, Steve consistently appears in the presence of auroras, which usually occur at a higher latitude area called the auroral zone. That means there is something happening in near-Earth space that leads to both an aurora and Steve. Steve might be the only visual clue that exists to show a chemical or physical connection between the higher latitude auroral zone and lower latitude sub auroral zone, said MacDonald.
“Steve can help us understand how the chemical and physical processes in Earth’s upper atmosphere can sometimes have local noticeable effects in lower parts of Earth’s atmosphere,” said MacDonald. “This provides good insight on how Earth’s system works as a whole.”
The team can learn a lot about Steve with additional ground and satellite reports, but recording Steve from the ground and space simultaneously is a rare occurrence. Each Swarm satellite orbits Earth every 90 minutes and Steve only lasts up to an hour in a specific area. If the satellite misses Steve as it circles Earth, Steve will probably be gone by the time that same satellite crosses the spot again.
In the end, capturing Steve becomes a game of perseverance and probability.
“It is my hope that with our timely reporting of sightings, researchers can study the data so we can together unravel the mystery of Steve’s origin, creation, physics and sporadic nature,” said Bourassa. “This is exciting because the more I learn about it, the more questions I have.”
As for the name “Steve” given by the citizen scientists? The team is keeping it as an homage to its initial name and discoverers. But now it is STEVE, short for Strong Thermal Emission Velocity Enhancement.
Other collaborators on this work are: the University of Calgary, New Mexico Consortium, Boston University, Lancaster University, Athabasca University, Los Alamos National Laboratory and the Alberta Aurora Chasers Facebook group.
If you live in an area where you may see STEVE or an aurora, submit your pictures and reports to Aurorasaurus through aurorasaurus.org or the free iOS and Android mobile apps. To learn how to spot STEVE, click here.
There is a video with MacDonald describing the work and featuring more images,
Citizen scientists first began posting about Steve on social media several years ago. Across New Zealand, Canada, the United States, and the United Kingdom, they reported an unusual sight in the night sky: a purplish line that arced across the heavens for about an hour at a time, visible at lower latitudes than classical aurorae, mostly in the spring and fall. … “It’s similar to a contrail but doesn’t disperse,” says Notanee Bourassa, an aurora photographer in Saskatchewan province in Canada [Regina as mentioned in the news release is the capital of the province of Saskatchewan].
Traditional aurorae are often green, because oxygen atoms present in Earth’s atmosphere emit that color light when they’re bombarded by charged particles trapped in Earth’s magnetic field. They also appear as a diffuse glow—rather than a distinct line—on the northern or southern horizon. Without a scientific theory to explain the new sight, a group of citizen scientists led by aurora enthusiast Chris Ratzlaff of Canada’s Alberta province [usually referred to as Canada’s province of Alberta or simply, the province of Alberta] playfully dubbed it Steve, after a line in the 2006 children’s movie Over the Hedge.
Aurorae have been studied for decades, but people may have missed Steve because their cameras weren’t sensitive enough, says Elizabeth MacDonald, a space physicist at NASA Goddard Space Flight Center in Greenbelt, Maryland, and leader of the new research. MacDonald and her team have used data from a European satellite called Swarm-A to study Steve in its native environment, about 200 kilometers up in the atmosphere. Swarm-A’s instruments revealed that the charged particles in Steve had a temperature of about 6000°C, “impressively hot” compared with the nearby atmosphere, MacDonald says. And those ions were flowing from east to west at nearly 6 kilometers per second, …
This paper is open access. You’ll note that Notanee Bourassa is listed as an author. For more about Bourassa, there’s his Twitter feed (@DJHardwired) and his YouTube Channel. BTW, his Twitter bio notes that he’s “Recently heartbroken,” as well as, “Seasoned human male. Expert storm chaser, aurora photographer, drone flyer and on-air FM radio DJ.” Make of that what you will.
I have news about two April 2018 events in the US.
It’s been a while since I’ve featured a Woodrow Wilson International Center for Scholars event. I’d forgotten about this one but, since it was postponed due to weather issues, I’ve gotten another chance (from a March 28, 2018 Woodrow Wilson Center announcement received via email),
For over thirty years, women have remained noticeably underrepresented in science, technology, engineering and mathematics (STEM) fields. Women make up more than half of college-educated workers but only 25% of college-educated STEM workers – in some fields, such as computer science, women earn only 18.1% of bachelor’s degrees. Missing half of the talent pool impacts our potential competitiveness and innovation in a technology-driven economy. But the real problems may begin once women enter a STEM career.
Once in a STEM career, women continue to face obstacles that prevent them from advancing in their career at the same rate as their male colleagues. From hiring practices to workplace culture, multiple factors create barriers that prevent women from achieving fulfilling and successful careers. The capacity of women in STEM to excel in their chosen careers impacts the pipeline for emerging women leaders in these fields, and if these barriers persist, the number of women in the pipeline will not be able to grow.
In order to open up pathways to leadership for more women in STEM, we must ask the question: What are those barriers? And more importantly, what can we do about them?
In honor of Women’s History Month, please join the Wilson Center’s Science and Technology Innovation Program, Women in Public Service Project and Serious Game Initiative for a conversation with women leaders in STEM on the barriers and opportunities for women in STEM, and the actions that can be taken to achieve true gender parity in these fields.
Elizabeth Newbury is the Director and Program Associate for the Serious Games Initiative for the Wilson Center, leading Wilson’s use of games in engaging the public around policy research. She has a PhD and a master’s degree from the Department of Communication at Cornell University, where her research interests revolved around understanding multiple dimensions of gaming audiences and the surrounding culture of those who play video games. Her dissertation was a multi-method, cross-disciplinary interrogation of the public consumption of games and the use of gaming in day-to-day practices, specifically in the context of esports. She has presented her research before both academic audiences and public audiences, ranging from the International Communication Association and the Association of Internet Researchers to the Serious Play Conference.
As lead of the Serious Games Initiative, she leverages games as a tool for the public communication of science and policy research. Current projects include the Fiscal Ship, a game about the federal budget developed and maintained in collaboration with the Hutchins Center on Fiscal and Monetary Policy at the Brookings Institution. Collaborating across the Wilson Center, her current works in progress include games pertaining to topics ranging from cybersecurity to the history of nuclear proliferation to polar initiatives. Under her leadership, SGI is pursuing how public policy and science can come together in an interactive platform to increase public dialogue and engagement around timely and critical issues of today.
Onto the second event,
New York City
Every once in a while I get an unexpected email and this one was a delightful surprise as it combines an art installation, intellectual property law, and a legal performance piece (from a March 30, 2018 galeplstonpc.com announcement),
I Speak for the Trees:
A Mock Trial
Wednesday, April 25, 2018 | 6:00 PM
Location: Jacob Burns Moot Court Room, Cardozo School of Law
2018 A Blade of Grass Fellow Aviva Rahmani is creating Blued Trees Symphony, an ecological artwork made with the intention of using copyright law (VARA) to defend land in New York, Virginia, and West Virginia that is subject to eminent domain because of proposed natural gas pipelines.
The Cardozo School of Law Environmental Law Society; Art Law Society; and Intellectual Property Student Association welcome us to the Jacob Burns Moot Court Room for a mock trial that will explore whether the status of the artwork under VARA trumps eminent domain takings by corporations. Experienced VARA litigator Gale Elston (Cardozo alumna) will represent the artist.
This program is free and open to the public, but space is limited! Please RSVP to firstname.lastname@example.org. If you’re unable to join us in person, stay tuned to our Facebook page for a live stream of the event!
Ecological artist Aviva Rahmani is the inaugural ABOG Fellow for Contemplative Practice, in partnership with the Hemera Foundation. This targeted fellowship supports artists who work at the intersection of social practice and contemplative practice. Rahmani’s The Blued Trees and The Blued Trees Symphony projects have been installed and copyrighted in the path of natural gas pipelines across miles of North America. The work has gained international attention and support, including fellowships from the New York Foundation for the Arts and A Blade of Grass. Rahmani holds a PhD from Plymouth University, UK, in environmental sciences, technology and studio art and has produced over twenty one-hour raw Gulf to Gulf sessions on climate change, viewed from eighty-five countries. “Trigger Points/Tipping Points,” a precursor to Gulf to Gulf, premiered at the 2007 Venice Biennale.
Gale P. Elston is an art attorney who has represented artists, art institutes, and non-profits for over twenty-five years as an advocate for artists’ rights. Three of her cases are featured in the Art Law Handbook, including a VARA based case establishing new law for the rights of artists. She has litigated many VARA cases in the Federal Southern District Court of New York. Her cases have obtained monetary awards for artists whose work has been damaged, modified or harmed. She has served on the board of numerous art related non-profits, including WhiteBox, (Re)Create Artist In Residency Program, and Faith Ringgold’s Any Child Can Fly Foundation, and as a Trustee for the Marin Headlands Artist in Residency Program. She has served to promote numerous artists’ rights pro bono, and represented notable artists including Carolee Schneeman, Phillip Pavia, Faith Ringgold, Ida Applebroog, and Hans Van de Bovenkamp, among others.
We’re grateful that this program is made possible in part by the New York State Council on the Arts with the support of Governor Andrew M. Cuomo and the New York State Legislature; the support of the American Chai Trust; and, in part, by public funds from the New York City Department of Cultural Affairs in partnership with the City Council.
I’m particularly interested in this approach to pipeline protests as my home province (British Columbia, Canada) is currently in a fight with two other provinces (Alberta and Saskatchewan), as well as our federal government, where the usual tactics (protests, jail time, interprovincial trade wars [see: March 29, 2018 Financial Post article by Geoffrey Morgan], etc.) are being used. Maybe it’s time to apply a little more imagination to the protests in British Columbia.
*’property’ added to title of blog posting on April 5, 2018 3:30 pm PDT.
A March 22, 2018 EuroScience Open Forum (ESOF) 2018 announcement (received via email) trumpets some of the latest news for this event being held July 9 to July 14, 2018 in Toulouse, France. (Located in the south in the region known as the Occitanie, it’s the fourth largest city in France. Toulouse is situated on the River Garonne. See more in its Wikipedia entry.) Here’s the latest from the announcement,
ESOF 2018 Plenary Sessions
Top speakers and hot topics confirmed for the Plenary Sessions at ESOF 2018
Lorna Hughes, Professor at the University of Glasgow, Chair of the Europeana Research Advisory Board, will give a plenary keynote on “Digital humanities”. John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University, famous for his PLoS Medicine paper “Why Most Published Research Findings Are False”, will talk about “Reproducibility”. A third plenary will involve María Teresa Ruiz, a Chilean astronomer and recipient of the 2017 L’Oréal-UNESCO For Women in Science award: she will talk about exoplanets.
ESOF under the spotlights
French President’s high patronage: ESOF is at the top of the institutional agendas in 2018.
“Sharing science”. But also putting science at the highest level, making it a real political and societal issue in a changing world. ESOF 2018 has officially received the “High Patronage” of the President of the French Republic, Emmanuel Macron. ESOF 2018 has also been listed by the French Minister for Europe and Foreign Affairs among the 27 priority events for France.
A constellation of satellites around the ESOF planet!
Second focus on Satellite events:
– 4th GEO Blue Planet Symposium organised 4-6 July by Mercator Ocean.
– ECSJ 2018, 5th European Conference of Science Journalists, co-organised by the French Association of Science Journalists in the News Press (AJSPI) and the Union of European Science Journalists’ Associations (EUSJA) on 8 July.
– Esprit de Découvertes (Discovery spirit) organised by the Académie des Sciences, Inscriptions et Belles Lettres de Toulouse on 8 July.
More Satellite events to come! Don’t forget to stay long enough to participate in these focused Satellite Events and … to discover the city.
A unique feature of ESOF is the Science meets Poetry day, which is held at every Forum and brings poets and scientists together.
Indeed, there is today a real artistic movement of poets connected with ESOF. Famous participants from earlier meetings include contributors such as the late Seamus Heaney, Roald Hoffmann [sic], Jean-Pierre Luminet and Prince Henrik of Denmark, but many young and aspiring poets are also involved.
The meeting is in two parts:
lectures on subjects involving science with poetry
a poster session for contributed poems
There are competitions associated with the event and every Science meets Poetry day gives rise to the publication of Proceedings in book form.
In Toulouse, the event will be staged by EuroScience in collaboration with the Académie des Jeux Floraux of Toulouse, the Société des Poètes Français and the European Academy of Sciences Arts and Letters, under patronage of UNESCO. The full programme will be announced later, but includes such themes as: a celebration of the number 7 in honour of the seven Troubadours of Toulouse, who held the first Jeux Floraux in the year 1323; Space Travel and the first poets and scientists who wrote about it (including Cyrano de Bergerac and Johannes Kepler); from Metrodorus and Diophantes of Alexandria to Fermat’s Last Theorem; the Poetry of Ecology; Lafayette’s ship the Hermione seen from America; and many other thought-provoking subjects.
The meeting will be held in the Hôtel d’Assézat, one of the finest old buildings of the ancient city of Toulouse.
Exceptionally, it will be open to registered participants from ESOF and also to some members of the public within the limits of available space.
Tentative Programme for the Science meets Poetry day on the 12th of July 2018
(some Speakers are still to be confirmed)
09:00 – 09:30 A welcome for the poets: The legendary Troubadours of Toulouse and the poetry of the number 7 (Philippe Dazet-Brun, Académie des Jeux Floraux)
09:30 – 10:00 The science and the poetry of violets from Toulouse (Marie-Thérèse Esquerré-Tugayé, Laboratoire de Recherche en Sciences Végétales, Université Toulouse III-CNRS)
10:00 – 10:30 The true Cyrano de Bergerac, Gascon poet, and his celebrated travels to the Moon (Jean-Charles Dorge, Société des Poètes Français)
10:30 – 11:00 Coffee break (with poems as posters)
11:00 – 11:30 Kepler the author and the imaginary travels of the famous astronomer to the Moon (Uli Rothfuss, die Kogge International Society of German-language authors)
11:30 – 12:00 Sputnik and Space in Russian Literature (Alla-Valeria Mikhalevitch, Laboratory of the Russian Academy of Sciences, Saint Petersburg)
12:00 – 12:30 Poems for the planet Mars (James Philip Kotsybar, the ‘Bard of Mars’, California and NASA, USA)
12:30 – 14:00 Lunch and meetings of the juries of the poetry competitions
14:00 – 14:30 The voyage of the Hermione and “Lafayette, here we come!” seen by an American poet (Nick Norwood, University of Columbus Ohio)
14:30 – 15:00 Alexandria, Toulouse and Oxford: the poem rendered by Eutrope and Fermat’s Last Theorem (Chaunes [Jean-Patrick Connerade], European Academy of Sciences, Arts and Letters, UNESCO)
15:00 – 15:30 How biology is celebrated in contemporary poetry (Assumpcio Forcada, biologist and poet from Barcelona)
15:30 – 16:00 A book of poems around ecology: a central subject in modern poetry (Sam Illingworth, Manchester Metropolitan University)
16:00 – 16:30 Coffee break (with poems as posters)
16:30 – 17:00 Toulouse and Europe: poetry at the crossroads of European languages (Stefka Hrusanova, Bulgarian Academy and Linguaggi-Di-Versi)
17:00 – 17:30 Round table: seven poets from Toulouse give their views on the theme: Languages, invisible frontiers within both science and poetry
17:30 – 18:00 The winners of the poetry competitions are announced
18:00 – 18:15 Chaunes: Closing remarks
I’m fascinated as in all the years I’ve covered the European City of Science events I’ve never before tripped across a ‘Science meets Poetry’ meeting. Sadly, there’s no contact information for those organizers. However, you can sign up for a newsletter and there are contacts for the larger event, European City of Science or as they are calling it in Toulouse, the Science in the City Festival,
Camille Rossignol (Toulouse Métropole)
+33 (0)5 36 25 27 83
François Lafont (ESOF 2018 / So Toulouse)
+33 (0)5 61 14 58 47
Travel grants for media types
One last note and this is for journalists. It’s still possible to apply for a travel grant, which helps ease but not remove the pain of travel expenses. From the ESOF 2018 Media Travel Grants webpage,
ESOF 2018 – ECSJ 2018 Travel Grants
The 5th European Conference of Science Journalists (ECSJ2018) is offering 50 travel + accommodation grants of up to 400€ to international journalists interested in attending ECSJ and ESOF.
We are looking for active professional journalists who cover science or science policy regularly (not necessarily exclusively), with an interest in reflecting on their professional practices and ethics. Applicants can be freelancers or staff, and can work for print, web, or broadcast media.
Springer Nature is a leading research, educational and professional publisher, providing quality content to its communities through a range of innovative platforms, products and services and is home of trusted brands including Nature Research.
Nature Research has supported ESOF since its very first meeting in 2004 and is funding the Nature Travel Grant Scheme for journalists to attend ESOF2018 with the aim of increasing the impact of ESOF. The Nature Travel Grant Scheme offers a lump sum of £400 for journalists based in Europe and £800 for journalists based outside of Europe, to help cover the costs of travel and accommodation to attend ESOF2018.
It must have been quite the conference. Josiah Zayner plunged a needle into himself and claimed to have changed his DNA (deoxyribonucleic acid) while giving his talk. (*Segue: There is some Canadian content if you keep reading.*) From an Oct. 10, 2017 article by Adele Peters for Fast Company (Note: A link has been removed),
“What we’ve got here is some DNA, and this is a syringe,” Josiah Zayner tells a room full of synthetic biologists and other researchers. He fills the needle and plunges it into his skin. “This will modify my muscle genes and give me bigger muscles.”
Zayner, a biohacker–basically meaning he experiments with biology in a DIY lab rather than a traditional one–was giving a talk called “A Step-by-Step Guide to Genetically Modifying Yourself With CRISPR” at the SynBioBeta conference in San Francisco, where other presentations featured academics in suits and the young CEOs of typical biotech startups. Unlike the others, he started his workshop by handing out shots of scotch and a booklet explaining the basics of DIY [do-it-yourself] genome engineering.
If you want to genetically modify yourself, it turns out, it’s not necessarily complicated. As he offered samples in small baggies to the crowd, Zayner explained that it took him about five minutes to make the DNA that he brought to the presentation. The vial held Cas9, an enzyme that snips DNA at a particular location targeted by guide RNA, in the gene-editing system known as CRISPR. In this case, it was designed to knock out the myostatin gene, which produces a hormone that limits muscle growth and lets muscles atrophy. In a study in China, dogs with the edited gene had double the muscle mass of normal dogs. If anyone in the audience wanted to try it, they could take a vial home and inject it later. Even rubbing it on skin, Zayner said, would have some effect on cells, albeit limited.
Peters goes on to note that Zayner has a PhD in molecular biology and biophysics and worked for NASA (US National Aeronautics and Space Administration). Zayner’s Wikipedia entry fills in a few more details (Note: Links have been removed),
Zayner graduated from the University of Chicago with a Ph.D. in biophysics in 2013. He then spent two years as a researcher at NASA’s Ames Research Center, where he worked on Martian colony habitat design. While at the agency, Zayner also analyzed speech patterns in online chat, Twitter, and books, and found that language on Twitter and online chat is closer to how people talk than to how they write. Zayner found NASA’s scientific work less innovative than he expected, and upon leaving in January 2016, he launched a crowdfunding campaign to provide CRISPR kits to let the general public experiment with editing bacterial DNA. He also continued his grad school business, The ODIN, which sells kits to let the general public experiment at home. As of May 2016, The ODIN had four employees and operates out of Zayner’s garage.
He refers to himself as a biohacker and believes in the importance of letting the general public participate in scientific experimentation, rather than leaving it confined to labs. Zayner found the biohacking community exclusive and hierarchical, particularly in the types of people who decide what is “safe”. He hopes that his projects can let even more people experiment in their homes. Other scientists responded that biohacking is inherently privileged, as it requires leisure time and money, and that straying from established safety rules would lead to even harsher regulations for all. Zayner’s public CRISPR kit campaign coincided with wider scrutiny over genetic modification. Zayner maintained that these fears were based on misunderstandings of the product, as genetic experiments on yeast and bacteria cannot produce a viral epidemic. In April 2015, Zayner ran a hoax on Craigslist to raise awareness about the future potential of forgery in forensic genetics testing.
In February 2016, Zayner performed a full body microbiome transplant on himself, including a fecal transplant, to experiment with microbiome engineering and see if he could cure himself of gastrointestinal and other health issues. The microbiome from the donors’ feces successfully transplanted into Zayner’s gut, according to DNA sequencing done on samples. This experiment was documented by filmmakers Kate McLean and Mario Furloni and turned into the short documentary film Gut Hack.
In December 2016, Zayner created a fluorescent beer by engineering yeast to contain the green fluorescent protein from jellyfish. Zayner’s company, The ODIN, released kits to allow people to create their own engineered fluorescent yeast, which was met with some controversy, as the FDA declared the green fluorescent protein can be seen as a color additive. Zayner views the kit as a way that individuals can use genetic engineering to create things in their everyday lives.
I found the video for Zayner’s now completed crowdfunding campaign,
I also found The ODIN website (mentioned in the Wikipedia entry) where they claim to be selling various gene editing and gene engineering kits including the CRISPR editing kits mentioned in Peters’ article,
In 2016, he [Zayner] sold $200,000 worth of products, including a kit for yeast that can be used to brew glowing bioluminescent beer, a kit to discover antibiotics at home, and a full home lab that’s roughly the cost of a MacBook Pro. In 2017, he expects to double sales. Many kits are simple, and most buyers probably aren’t using the supplies to attempt to engineer themselves (many kits go to classrooms). But Zayner also hopes that as people using the kits gain genetic literacy, they experiment in wilder ways.
He questions whether traditional research methods, like randomized controlled trials, are the only way to make discoveries, pointing out that in newer personalized medicine (such as immunotherapy for cancer, which is personalized for each patient), a sample size of one person makes sense. At his workshop, he argued that people should have the choice to self-experiment if they want to; we also change our DNA when we drink alcohol or smoke cigarettes or breathe in dirty city air. Other society-sanctioned activities are more dangerous. “We sacrifice maybe a million people a year to the car gods,” he said. “If you ask someone, ‘Would you get rid of cars?’–no.” …
US researchers, both conventional and DIY types such as Zayner, are not the only ones who are editing genes. The Chinese study mentioned in Peters’ article was written up in an Oct. 19, 2015 article by Antonio Regalado for the MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),
Scientists in China say they are the first to use gene editing to produce customized dogs. They created a beagle with double the amount of muscle mass by deleting a gene called myostatin.
The dogs have “more muscles and are expected to have stronger running ability, which is good for hunting, police (military) applications,” Liangxue Lai, a researcher with the Key Laboratory of Regenerative Biology at the Guangzhou Institutes of Biomedicine and Health, said in an e-mail.
Lai and 28 colleagues reported their results last week in the Journal of Molecular Cell Biology, saying they intend to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy. “The goal of the research is to explore an approach to the generation of new disease dog models for biomedical research,” says Lai. “Dogs are very close to humans in terms of metabolic, physiological, and anatomical characteristics.”
Lai said his group had no plans to breed the extra-muscular beagles as pets. Other teams, however, could move quickly to commercialize gene-altered dogs, potentially editing their DNA to change their size, enhance their intelligence, or correct genetic illnesses. A different Chinese institute, BGI, said in September it had begun selling miniature pigs, created via gene editing, for $1,600 each as novelty pets.
People have been influencing the genetics of dogs for millennia. By at least 36,000 years ago, early humans had already started to tame wolves and shape the companions we have today. Charles Darwin frequently cited dog breeding in The Origin of Species to demonstrate how evolution gradually occurs by a process of selection. With CRISPR, however, evolution is no longer gradual or subject to chance. It is immediate and under human control.
It is precisely that power that is stirring wide debate and concern over CRISPR. Yet at least some researchers think that gene-edited dogs could put a furry, friendly face on the technology. In an interview this month, George Church, a professor at Harvard University who leads a large effort to employ CRISPR editing, said he thinks it will be possible to augment dogs by using DNA edits to make them live longer or simply make them smarter.
Church said he also believed the alteration of dogs and other large animals could open a path to eventual gene editing of people. “Germline editing of pigs or dogs offers a line into it,” he said. “People might say, ‘Hey, it works.’ ”
In the meantime, Zayner’s ideas are certainly thought-provoking. I’m not endorsing either his products or his ideas but it should be noted that early science pioneers such as Humphry Davy and others experimented on themselves. For anyone unfamiliar with Davy (from the Humphry Davy Wikipedia entry; Note: Links have been removed),
Sir Humphry Davy, 1st Baronet PRS MRIA FGS (17 December 1778 – 29 May 1829) was a Cornish chemist and inventor, who is best remembered today for isolating a series of substances for the first time: potassium and sodium in 1807 and calcium, strontium, barium, magnesium and boron the following year, as well as discovering the elemental nature of chlorine and iodine. He also studied the forces involved in these separations, inventing the new field of electrochemistry. Berzelius called Davy’s 1806 Bakerian Lecture On Some Chemical Agencies of Electricity “one of the best memoirs which has ever enriched the theory of chemistry.” He was a Baronet, President of the Royal Society (PRS), Member of the Royal Irish Academy (MRIA), and Fellow of the Geological Society (FGS). He also invented the Davy lamp and a very early form of incandescent light bulb.
A Nov. 11, 2017 posting on the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks blog notes that self-experimentation has a long history and goes on to describe Zayner’s and others’ biohacking exploits before describing the legality of biohacking in Canada,
With biohackers entering into the space traditionally held by scientists and clinicians, it raises questions. Professor Timothy Caulfield, a Canada Research Chair in health, law and policy at the University of Alberta, says when he hears of somebody giving themselves biohacked gene therapy, he wonders: “Is this legal? Is this safe? And if it’s not safe, is there anything that we can do about regulating it? And to be honest with you that’s a tough question and I think it’s an open question.”
In Canada, Caulfield says, Health Canada focuses on products. “You have to have something that you are going to regulate or you have to have something that’s making health claims. So if there is a product that is saying I can cure X, Y, or Z, Health Canada can say, ‘Well let’s make sure the science really backs up that claim.’ The problem with these do-it-yourself approaches is there isn’t really a product. You know these people are experimenting on themselves with something that may or may not be designed for health purposes.”
According to Caulfield, if you could buy a gene therapy kit that was being marketed to you to biohack yourself, that would be different. “Health Canada could jump in. But right here that’s not the case,” he says.
There are places in the world that do regulate biohacking, says Caulfield. “Germany, for example, they have specific laws for it. And here in Canada we do have a regulatory framework that says that you cannot do gene therapy that will alter the germ line. In other words, you can’t do gene therapy or any kind of genetic editing that will create a change that you will pass on to your offspring. So that would be illegal, but that’s not what’s happening here. And I don’t think there’s a regulatory framework that adequately captures it.”
Infectious disease and policy experts aren’t that concerned yet about the possibility of a biohacker unleashing a genetically modified super germ into the population.
“I think in the future that could be a problem,” says Caulfield, “but this isn’t something that would be easy to do in your garage. I think it’s complicated science. But having said that, the science is moving quickly. We need to think about how we are going to control the potential harms.”
You can find out more about the ‘wild’ people (mostly men) of early science in Richard Holmes’ 2008 book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science.
Finally, should you be interested in connecting with synthetic biology enthusiasts, entrepreneurs, and others, SynBioBeta is more than a conference; it’s also an activity hub.
ETA January 25, 2018 (five minutes later): There are some CRISPR/Cas9 events taking place in Toronto, Canada on January 24 and 25, 2018. One is a workshop with Portuguese artist, Marta de Menezes, and the other is a panel discussion. See my January 10, 2018 posting for more details.
*’Segue: There is some Canadian content if you keep reading.’ and ‘Canadian content’ added January 25, 2018 six minutes after first publication.
ETA February 20, 2018: Sarah Zhang’s Feb. 20, 2018 article for The Atlantic revisits Josiah Zayner’s decision to inject himself with CRISPR,
When Josiah Zayner watched a biotech CEO drop his pants at a biohacking conference and inject himself with an untested herpes treatment, he realized things had gone off the rails.
Zayner is no stranger to stunts in biohacking—loosely defined as experiments, often on the self, that take place outside of traditional lab spaces. You might say he invented their latest incarnation: He’s sterilized his body to “transplant” his entire microbiome in front of a reporter. He’s squabbled with the FDA about selling a kit to make glow-in-the-dark beer. He’s extensively documented attempts to genetically engineer the color of his skin. And most notoriously, he injected his arm with DNA encoding for CRISPR that could theoretically enhance his muscles—in between taking swigs of Scotch at a live-streamed event during an October conference. (Experts say—and even Zayner himself in the live-stream conceded—it’s unlikely to work.)
So when Zayner saw Ascendance Biomedical’s CEO injecting himself on a live-stream earlier this month, you might say there was an uneasy flicker of recognition.
“Honestly, I kind of blame myself,” Zayner told me recently. He’s been in a soul-searching mood; he recently had a kid and the backlash to the CRISPR stunt in October had been getting to him. “There’s no doubt in my mind that somebody is going to end up hurt eventually,” he said.
Yup, it’s one of the reasons for rules; people take things too far. The trick is figuring out how to achieve balance between risk taking and recklessness.
The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins for gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.
An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation
The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.
In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.
Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strewed them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.
Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”
Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.
Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.
Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),
Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.
Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.
But the merger of two neutron stars is more than fireworks. It’s a factory.
Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.
I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,
NSF / LIGO / Sonoma State University / A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/
Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.
The collision produced as much gold as the mass of the Earth. [emphasis mine]
This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.
The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.
They drew towards each other over millions of years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.
Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.
The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year, with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.
This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.
Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.
Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.
Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy; it is like being able to see and hear for the first time.”
Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.
He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery, are the cinders, forged in the billion degree remnants of a merging neutron star.”
Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”
Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.
Congratulations to all of the researchers involved in this work!
Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,
The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,
I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.
Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.
Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,
What is a neutron star?
Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?
Yeah, I remember that from watching Bill Nye as a kid.
Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.
I guess now that you mention it, yeah, it is weird.
Well, it’s because there’s another force deep in the atom that’s preventing them from merging.
It’s really really strong.
The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.
Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?
Pretty much, yeah.
So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together?
Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.
So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force.
So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.
Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?
Exactly. Pretty weird, no?
Spencer does a good job of gradually taking you through increasingly complex explanations.
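The arithmetic behind Spencer’s “sun squeezed into less than 20 miles” claim is easy to check yourself. Here’s a rough sketch in Python; the solar mass and nuclear-density figures are standard textbook values, not taken from Spencer’s piece, and the 16 km radius is just the passage’s “less than 20 miles wide” turned into round metric numbers:

```python
import math

# Rough check: the Sun's mass packed into a ball ~20 miles (~32 km) across.
M_SUN_KG = 1.989e30   # mass of the Sun (standard value)
RADIUS_M = 16_000     # ~10 mile radius, in metres (round number)

volume = (4 / 3) * math.pi * RADIUS_M**3   # sphere volume, m^3
density = M_SUN_KG / volume                # mean density, kg/m^3

# Density of an atomic nucleus for comparison (~2.3e17 kg/m^3),
# since the passage calls a neutron star "one giant atomic nucleus"
NUCLEAR_DENSITY = 2.3e17

print(f"mean density ≈ {density:.1e} kg/m^3")
print(f"fraction of nuclear density ≈ {density / NUCLEAR_DENSITY:.2f}")
```

The mean density comes out around 10^17 kg/m^3, within a factor of about two of the density of an atomic nucleus, which is exactly why the “giant atomic nucleus” analogy works.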
For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,
All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.
“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.
Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.
Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracks to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.
Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.
The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …
Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.
The researchers involved in this work are confident enough about its prospects that they are patenting their research into high-tech yarns. From an August 25, 2017 news item on Nanowerk,
An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.
In a study published in the Aug. 25 issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.
“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.
The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.
In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.
“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”
When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.
Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.
“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”
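The mechanism Haines describes — fixed charge, shrinking capacitance, rising voltage — can be sketched in a few lines. To be clear, the charge and capacitance figures below are made-up round numbers chosen only to show the direction of the effect; the sole number taken from the article is the 250 W/kg peak power figure:

```python
# Illustrative sketch of the twistron principle: at fixed charge Q,
# voltage is V = Q / C and stored energy is E = Q^2 / (2C), so a
# drop in capacitance raises both. All values below are assumed.
Q = 1e-3               # stored charge, coulombs (assumed)
C_RELAXED = 1e-2       # capacitance before stretching, farads (assumed)
C_STRETCHED = 0.7e-2   # stretching shrinks the yarn, lowering capacitance

v_relaxed = Q / C_RELAXED
v_stretched = Q / C_STRETCHED          # higher voltage when stretched

e_relaxed = Q**2 / (2 * C_RELAXED)
e_stretched = Q**2 / (2 * C_STRETCHED)
harvest_per_cycle = e_stretched - e_relaxed   # energy gained per stretch

# Scaling the reported 250 W/kg peak figure to a 1 mg yarn:
peak_power_w = 250 * 1e-6   # watts

print(v_relaxed, v_stretched, harvest_per_cycle, peak_power_w)
```

So even a milligram-scale yarn, like the fly-weight one in the LED demonstration below, would peak at a fraction of a milliwatt — small, but enough for an LED pulse or a sensor reading.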
Lab Tests Show Potential Applications
In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.
To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.
“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”
The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.
“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”
Electricity from Ocean Waves
“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”
In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.
Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent, thereby generating measurable electricity.
Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.
“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”
The investigators have filed a patent on the technology.
In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.
Here’s a link to and a citation for the paper,
Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771
This paper is behind a paywall.
Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,
“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.
This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—using sheets of rubber with electrodes coated on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.
“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electrochemical cell to do this,” says Haines. “So we’re not changing double-layer capacitance in normal parallel plate capacitors. But we’re actually changing the electrochemical capacitance on the surface of a supercapacitor yarn.”
While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.
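The rubber-sheet harvester Dexter mentions is easy to sketch with the parallel-plate formula C = ε0·εr·A/d. The dimensions and permittivity below are assumed round numbers, not taken from his post; the point is only that stretching enlarges the area and thins the sheet, multiplying the capacitance:

```python
# Sketch of a stretched-rubber (dielectric elastomer) harvester.
# All dimensions and the permittivity are illustrative assumptions.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # relative permittivity of rubber (assumed)

def plate_capacitance(area_m2, thickness_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * EPS_R * area_m2 / thickness_m

# Relaxed sheet: 10 cm x 10 cm, 1 mm thick
c_relaxed = plate_capacitance(0.01, 1e-3)

# Stretched to twice the area; rubber is nearly incompressible,
# so the sheet thins to half its thickness -> capacitance quadruples
c_stretched = plate_capacitance(0.02, 0.5e-3)

# Charge while stretched, then relax at fixed charge:
# V = Q / C rises as C falls, which is the harvested energy.
Q = 1e-6
v_stretched = Q / c_stretched
v_relaxed = Q / c_relaxed

print(c_stretched / c_relaxed, v_relaxed / v_stretched)
```

That four-fold voltage swing is why such parallel-plate harvesters need high priming voltages, whereas the twistron’s electrolyte charges the yarn by itself, as Na Li explained above.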
Dexter asks good questions and his post is very informative.