
SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
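
The underlying idea is simple enough to sketch in a few lines of code. Here is a minimal illustration (my own, not the researchers’ actual implementation; the blink signal and function names are hypothetical) of applying a clamped, one-shot rotation and translation to the virtual camera only while the eye tracker reports a blink, using the thresholds quoted in the release,

```python
import math

# Thresholds reported in the news release: rotations of 2-5 degrees and
# translations of 4-9 cm can go unnoticed during a single blink.
MAX_ROTATION_DEG = 5.0
MAX_TRANSLATION_M = 0.09

def redirect_on_blink(camera_yaw_deg, camera_pos, target_yaw_deg, target_pos, blinking):
    """Apply a one-shot redirection while the user's eyes are closed.

    camera_yaw_deg / camera_pos: current virtual camera state (yaw in degrees,
    position as an (x, z) tuple in metres).
    target_yaw_deg / target_pos: state the redirected-walking controller wants.
    blinking: True while the eye tracker reports closed eyes (hypothetical signal).
    """
    if not blinking:
        return camera_yaw_deg, camera_pos  # no change while the user can see

    # Rotate toward the target, clamped to the unnoticeable range.
    yaw_error = target_yaw_deg - camera_yaw_deg
    yaw_step = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, yaw_error))

    # Translate toward the target, clamped to the unnoticeable range.
    dx = target_pos[0] - camera_pos[0]
    dz = target_pos[1] - camera_pos[1]
    dist = math.hypot(dx, dz)
    scale = 1.0 if dist <= MAX_TRANSLATION_M else MAX_TRANSLATION_M / dist
    new_pos = (camera_pos[0] + dx * scale, camera_pos[1] + dz * scale)

    return camera_yaw_deg + yaw_step, new_pos
```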

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.


About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018 Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
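
To give a rough sense of what “blending between thousands of light field images” might look like, here is a toy sketch (my own illustration, not Google’s pipeline; it ignores the depth-map reprojection step the team describes) that picks the captured views best aligned with an eye ray and blends them by angular closeness,

```python
import numpy as np

def render_light_field(ray_dir, cam_dirs, cam_images, k=4):
    """Rough sketch of light field rendering by blending nearby views.

    ray_dir: unit vector for the eye ray being rendered.
    cam_dirs: (N, 3) array of unit vectors, one per captured image on the sphere.
    cam_images: list of N images, each an (H, W, 3) uint8 array.
    Returns a weighted blend of the k best-aligned captured views.
    """
    # Angular closeness between the eye ray and each capture direction.
    similarity = cam_dirs @ ray_dir                  # cosine of the angle
    nearest = np.argsort(-similarity)[:k]            # k best-aligned cameras

    # Weight each view by its closeness and normalize the weights.
    weights = np.clip(similarity[nearest], 0.0, None)
    weights /= weights.sum() + 1e-9

    blended = sum(w * cam_images[i].astype(np.float64)
                  for w, i in zip(weights, nearest))
    return blended.astype(np.uint8)
```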

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
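
For the curious, the kind of equation Wang mentions can be demonstrated in miniature. The sketch below (my own toy example, not the Stanford system) steps a one-dimensional acoustic wave equation forward in time with finite differences and records the pressure at a virtual ‘microphone’; the real system solves a far more sophisticated three-dimensional version coupled to vibrating geometry,

```python
import numpy as np

# Minimal 1D finite-difference simulation of the acoustic wave equation,
# p_tt = c^2 * p_xx: the kind of PDE the Stanford system solves (in far more
# sophisticated 3D form) to propagate pressure waves from vibrating surfaces.
c = 343.0          # speed of sound in air, m/s
dx = 0.01          # grid spacing, m
dt = dx / (2 * c)  # time step chosen to satisfy the CFL stability condition
n = 1000           # number of grid points

p_prev = np.zeros(n)   # pressure at time t - dt
p_curr = np.zeros(n)   # pressure at time t
p_curr[n // 2] = 1.0   # an initial pressure impulse (a "bang") mid-domain

samples = []
for step in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = p_curr[2:] - 2 * p_curr[1:-1] + p_curr[:-2]  # discrete p_xx
    p_next = 2 * p_curr - p_prev + (c * dt / dx) ** 2 * lap  # leapfrog update
    p_prev, p_curr = p_curr, p_next
    samples.append(p_curr[100])  # "microphone" sampling pressure at one point

# `samples` is now a crude pressure signal that could be resampled to audio.
```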

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com; also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

My name is Steve and I’m a sub auroral ion drift

Photo: The Aurora Named STEVE Courtesy: NASA Goddard

That stunning image is one of a series, many of which were taken by amateur photographers as noted in a March 14, 2018 US National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center news release (also on EurekAlert) by Kasha Patel about how STEVE was discovered,

Notanee Bourassa knew that what he was seeing in the night sky was not normal. Bourassa, an IT technician in Regina, Canada, trekked outside of his home on July 25, 2016, around midnight with his two younger children to show them a beautiful moving light display in the sky — an aurora borealis. He often sky gazes until the early hours of the morning to photograph the aurora with his Nikon camera, but this was his first expedition with his children. When a thin purple ribbon of light appeared and started glowing, Bourassa immediately snapped pictures until the light particles disappeared 20 minutes later. Having watched the northern lights for almost 30 years since he was a teenager, he knew this wasn’t an aurora. It was something else.

From 2015 to 2016, citizen scientists — people like Bourassa who are excited about a science field but don’t necessarily have a formal educational background — shared 30 reports of these mysterious lights in online forums and with a team of scientists that run a project called Aurorasaurus. The citizen science project, funded by NASA and the National Science Foundation, tracks the aurora borealis through user-submitted reports and tweets.

The Aurorasaurus team, led by Liz MacDonald, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, conferred to determine the identity of this mysterious phenomenon. MacDonald and her colleague Eric Donovan at the University of Calgary in Canada talked with the main contributors of these images, amateur photographers in a Facebook group called Alberta Aurora Chasers, which included Bourassa and lead administrator Chris Ratzlaff. Ratzlaff gave the phenomenon a fun, new name, Steve, and it stuck.

But people still didn’t know what it was.

Scientists’ understanding of Steve changed that night Bourassa snapped his pictures. Bourassa wasn’t the only one observing Steve. Ground-based cameras called all-sky cameras, run by the University of Calgary and University of California, Berkeley, took pictures of large areas of the sky and captured Steve and the auroral display far to the north. From space, ESA’s (the European Space Agency) Swarm satellite just happened to be passing over the exact area at the same time and documented Steve.

For the first time, scientists had ground and satellite views of Steve. Scientists have now learned, despite its ordinary name, that Steve may be an extraordinary puzzle piece in painting a better picture of how Earth’s magnetic fields function and interact with charged particles in space. The findings are published in a study released today in Science Advances.

“This is a light display that we can observe over thousands of kilometers from the ground,” said MacDonald. “It corresponds to something happening way out in space. Gathering more data points on STEVE will help us understand more about its behavior and its influence on space weather.”

The study highlights one key quality of Steve: Steve is not a normal aurora. Auroras occur globally in an oval shape, last hours and appear primarily in greens, blues and reds. Citizen science reports showed Steve is purple with a green picket fence structure that waves. It is a line with a beginning and end. People have observed Steve for 20 minutes to 1 hour before it disappears.

If anything, auroras and Steve are different flavors of an ice cream, said MacDonald. They are both created in generally the same way: Charged particles from the Sun interact with Earth’s magnetic field lines.

The uniqueness of Steve is in the details. While Steve goes through the same large-scale creation process as an aurora, it travels along different magnetic field lines than the aurora. All-sky cameras showed that Steve appears at much lower latitudes. That means the charged particles that create Steve connect to magnetic field lines that are closer to Earth’s equator, hence why Steve is often seen in southern Canada.

Perhaps the biggest surprise about Steve appeared in the satellite data. The data showed that Steve comprises a fast moving stream of extremely hot particles called a sub auroral ion drift, or SAID. Scientists have studied SAIDs since the 1970s but never knew there was an accompanying visual effect. The Swarm satellite recorded information on the charged particles’ speeds and temperatures, but does not have an imager aboard.

“People have studied a lot of SAIDs, but we never knew it had a visible light. Now our cameras are sensitive enough to pick it up and people’s eyes and intellect were critical in noticing its importance,” said Donovan, a co-author of the study. Donovan led the all-sky camera network and his Calgary colleagues lead the electric field instruments on the Swarm satellite.

Steve is an important discovery because of its location in the sub auroral zone, an area of lower latitude than where most auroras appear that is not well researched. For one, with this discovery, scientists now know there are unknown chemical processes taking place in the sub auroral zone that can lead to this light emission.

Second, Steve consistently appears in the presence of auroras, which usually occur at a higher latitude area called the auroral zone. That means there is something happening in near-Earth space that leads to both an aurora and Steve. Steve might be the only visual clue that exists to show a chemical or physical connection between the higher latitude auroral zone and lower latitude sub auroral zone, said MacDonald.

“Steve can help us understand how the chemical and physical processes in Earth’s upper atmosphere can sometimes have local noticeable effects in lower parts of Earth’s atmosphere,” said MacDonald. “This provides good insight on how Earth’s system works as a whole.”

The team can learn a lot about Steve with additional ground and satellite reports, but recording Steve from the ground and space simultaneously is a rare occurrence. Each Swarm satellite orbits Earth every 90 minutes and Steve only lasts up to an hour in a specific area. If the satellite misses Steve as it circles Earth, Steve will probably be gone by the time that same satellite crosses the spot again.

In the end, capturing Steve becomes a game of perseverance and probability.

“It is my hope that with our timely reporting of sightings, researchers can study the data so we can together unravel the mystery of Steve’s origin, creation, physics and sporadic nature,” said Bourassa. “This is exciting because the more I learn about it, the more questions I have.”

As for the name “Steve” given by the citizen scientists? The team is keeping it as an homage to its initial name and discoverers. But now it is STEVE, short for Strong Thermal Emission Velocity Enhancement.

Other collaborators on this work are: the University of Calgary, New Mexico Consortium, Boston University, Lancaster University, Athabasca University, Los Alamos National Laboratory and the Alberta Aurora Chasers Facebook group.

If you live in an area where you may see STEVE or an aurora, submit your pictures and reports to Aurorasaurus through aurorasaurus.org or the free iOS and Android mobile apps. To learn how to spot STEVE, click here.

There is a video with MacDonald describing the work and featuring more images,

Katherine Kornei’s March 14, 2018 article for sciencemag.org adds more detail about the work,

Citizen scientists first began posting about Steve on social media several years ago. Across New Zealand, Canada, the United States, and the United Kingdom, they reported an unusual sight in the night sky: a purplish line that arced across the heavens for about an hour at a time, visible at lower latitudes than classical aurorae, mostly in the spring and fall. … “It’s similar to a contrail but doesn’t disperse,” says Notanee Bourassa, an aurora photographer in Saskatchewan province in Canada [Regina as mentioned in the news release is the capital of the province of Saskatchewan].

Traditional aurorae are often green, because oxygen atoms present in Earth’s atmosphere emit that color light when they’re bombarded by charged particles trapped in Earth’s magnetic field. They also appear as a diffuse glow—rather than a distinct line—on the northern or southern horizon. Without a scientific theory to explain the new sight, a group of citizen scientists led by aurora enthusiast Chris Ratzlaff of Canada’s Alberta province [usually referred to as Canada’s province of Alberta or simply, the province of Alberta] playfully dubbed it Steve, after a line in the 2006 children’s movie Over the Hedge.

Aurorae have been studied for decades, but people may have missed Steve because their cameras weren’t sensitive enough, says Elizabeth MacDonald, a space physicist at NASA Goddard Space Flight Center in Greenbelt, Maryland, and leader of the new research. MacDonald and her team have used data from a European satellite called Swarm-A to study Steve in its native environment, about 200 kilometers up in the atmosphere. Swarm-A’s instruments revealed that the charged particles in Steve had a temperature of about 6000°C, “impressively hot” compared with the nearby atmosphere, MacDonald says. And those ions were flowing from east to west at nearly 6 kilometers per second, …

Here’s a link to and a citation for the paper,

New science in plain sight: Citizen scientists lead to the discovery of optical structure in the upper atmosphere by Elizabeth A. MacDonald, Eric Donovan, Yukitoshi Nishimura, Nathan A. Case, D. Megan Gillies, Bea Gallardo-Lacourt, William E. Archer, Emma L. Spanswick, Notanee Bourassa, Martin Connors, Matthew Heavner, Brian Jackel, Burcu Kosar, David J. Knudsen, Chris Ratzlaff, and Ian Schofield. Science Advances 14 Mar 2018: Vol. 4, no. 3, eaaq0030. DOI: 10.1126/sciadv.aaq0030

This paper is open access. You’ll note that Notanee Bourassa is listed as an author. For more about Bourassa, there’s his Twitter feed (@DJHardwired) and his YouTube Channel. BTW, his Twitter bio notes that he’s “Recently heartbroken,” as well as, “Seasoned human male. Expert storm chaser, aurora photographer, drone flyer and on-air FM radio DJ.” Make of that what you will.

EuroScience Open Forum in Toulouse, France from July 9 to July 14, 2018

A March 22, 2018 EuroScience Open Forum (ESOF) 2018 announcement (received via email) trumpets some of the latest news for this event being held July 9 to July 14, 2018 in Toulouse, France. (Located in the south, in the region known as Occitanie, it’s the fourth-largest city in France. Toulouse is situated on the River Garonne. See more in its Wikipedia entry.) Here’s the latest from the announcement,

ESOF 2018 Plenary Sessions

Top speakers and hot topics confirmed for the Plenary Sessions at ESOF 2018

Lorna Hughes, Professor at the University of Glasgow and Chair of the Europeana Research Advisory Board, will give a plenary keynote on “Digital humanities”. John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University, famous for his PLoS Medicine paper “Why Most Published Research Findings Are False”, will talk about “Reproducibility”. A third plenary will involve María Teresa Ruiz, a Chilean astronomer and winner of the 2017 L’Oréal-UNESCO For Women in Science Award: she will talk about exoplanets.

 

ESOF under the spotlights

French President’s high patronage: ESOF is at the top of the institutional agendas in 2018.

“Sharing science” also means putting science at the highest level and making it a real political and societal issue in a changing world. ESOF 2018 has officially received the “High Patronage” of the President of the French Republic, Emmanuel Macron. ESOF 2018 has also been listed by the French Minister for Europe and Foreign Affairs among the 27 priority events for France.

A constellation of satellites around the ESOF planet!

A second focus, on Satellite events:
– 4th GEO Blue Planet Symposium, organised 4-6 July by Mercator Ocean.
– ECSJ 2018, 5th European Conference of Science Journalists, co-organised by the French Association of Science Journalists in the News Press (AJSPI) and the Union of European Science Journalists’ Associations (EUSJA) on 8 July.
– Esprit de Découvertes (Discovery Spirit), organised by the Académie des Sciences, Inscriptions et Belles Lettres de Toulouse on 8 July.

More Satellite events to come! Don’t forget to stay long enough in order to participate in these focused Satellite Events and … to discover the city.

The programme for ESOF 2018 can be found here.

Science meets poetry

As has become usual, there is a European City of Science event being held in Toulouse in concert (more or less) with and in celebration of the ESOF event. The City of Science event is being held from July 7 – July 16, 2018.

Organizers have not announced much in the way of programming for the City of Science other than a ‘Science meets Poetry’ meeting,

A unique feature of ESOF is the Science meets Poetry day, which is held at every Forum and brings poets and scientists together.

Indeed, there is today a real artistic movement of poets connected with ESOF. Famous participants from earlier meetings include contributors such as the late Seamus Heaney, Roald Hoffmann [sic], Jean-Pierre Luminet and Prince Henrik of Denmark, but many young and aspiring poets are also involved.

The meeting is in two parts:

  • lectures on subjects involving science with poetry
  • a poster session for contributed poems

There are competitions associated with the event and every Science meets Poetry day gives rise to the publication of Proceedings in book form.

In Toulouse, the event will be staged by EuroScience in collaboration with the Académie des Jeux Floraux of Toulouse, the Société des Poètes Français and the European Academy of Sciences Arts and Letters, under patronage of UNESCO. The full programme will be announced later, but includes such themes as a celebration of the number 7 in honour of the seven Troubadours of Toulouse, who held the first Jeux Floraux in the year 1323, Space Travel and the first poets and scientists who wrote about it (including Cyrano de Bergerac and Johannes Kepler), from Metrodorus and Diophantes of Alexandria to Fermat’s Last Theorem, the Poetry of Ecology, Lafayette’s ship the Hermione seen from America and many other thought-provoking subjects.

The meeting will be held in the Hôtel d’Assézat, one of the finest old buildings of the ancient city of Toulouse.

Exceptionally, it will be open to registered participants from ESOF and also to some members of the public within the limits of available space.

Tentative Programme for the Science meets Poetry day on the 12th of July 2018

(some Speakers are still to be confirmed)

  • 09:00 – 09:30 A welcome for the poets: the legendary Troubadours of Toulouse and the poetry of the number 7 (Philippe Dazet-Brun, Académie des Jeux Floraux)
  • 09:30 – 10:00 The science and the poetry of violets from Toulouse (Marie-Thérèse Esquerré-Tugayé, Laboratoire de Recherche en Sciences Végétales, Université Toulouse III-CNRS)
  • 10:00 – 10:30 The true Cyrano de Bergerac, Gascon poet, and his celebrated travels to the Moon (Jean-Charles Dorge, Société des Poètes Français)
  • 10:30 – 11:00 Coffee break (with poems as posters)
  • 11:00 – 11:30 Kepler the author and the imaginary travels of the famous astronomer to the Moon (Uli Rothfuss, die Kogge International Society of German-language authors)
  • 11:30 – 12:00 Sputnik and space in Russian literature (Alla-Valeria Mikhalevitch, Laboratory of the Russian Academy of Sciences, Saint Petersburg)
  • 12:00 – 12:30 Poems for the planet Mars (James Philip Kotsybar, the ‘Bard of Mars’, California and NASA, USA)
  • 12:30 – 14:00 Lunch and meetings of the juries of the poetry competitions
  • 14:00 – 14:30 The voyage of the Hermione and “Lafayette, here we come!” seen by an American poet (Nick Norwood, University of Columbus Ohio)
  • 14:30 – 15:00 Alexandria, Toulouse and Oxford: the poem rendered by Eutrope and Fermat’s Last Theorem (Chaunes [Jean-Patrick Connerade], European Academy of Sciences, Arts and Letters, UNESCO)
  • 15:00 – 15:30 How biology is celebrated in contemporary poetry (Assumpcio Forcada, biologist and poet from Barcelona)
  • 15:30 – 16:00 A book of poems around ecology: a central subject in modern poetry (Sam Illingworth, Manchester Metropolitan University)
  • 16:00 – 16:30 Coffee break (with poems as posters)
  • 16:30 – 17:00 Toulouse and Europe: poetry at the crossroads of European languages (Stefka Hrusanova, Bulgarian Academy and Linguaggi-Di-Versi)
  • 17:00 – 17:30 Round table: seven poets from Toulouse give their views on the theme “Languages, invisible frontiers within both science and poetry”
  • 17:30 – 18:00 The winners of the poetry competitions are announced
  • 18:00 – 18:15 Chaunes. Closing remarks

I’m fascinated; in all the years I’ve covered the European City of Science events, I’ve never before tripped across a ‘Science meets Poetry’ meeting. Sadly, there’s no contact information for those organizers. However, you can sign up for a newsletter and there are contacts for the larger event, the European City of Science or, as they are calling it in Toulouse, the Science in the City Festival,

Contact

Camille Rossignol (Toulouse Métropole)

camille.rossignol@toulouse-metropole.fr

+33 (0)5 36 25 27 83

François Lafont (ESOF 2018 / So Toulouse)

francois.lafont@toulouse2018.esof.eu

+33 (0)5 61 14 58 47

Travel grants for media types

One last note and this is for journalists. It’s still possible to apply for a travel grant, which helps ease but not remove the pain of travel expenses. From the ESOF 2018 Media Travel Grants webpage,

ESOF 2018 – ECSJ 2018 Travel Grants

The 5th European Conference of Science Journalists (ECSJ2018) is offering 50 travel + accommodation grants of up to 400€ to international journalists interested in attending ECSJ and ESOF.

We are looking for active professional journalists who cover science or science policy regularly (not necessarily exclusively), with an interest in reflecting on their professional practices and ethics. Applicants can be freelancers or staff, and can work for print, web, or broadcast media.

More information

ESOF 2018 Nature Travel Grants

Springer Nature is a leading research, educational and professional publisher, providing quality content to its communities through a range of innovative platforms, products and services and is home of trusted brands including Nature Research.

Nature Research has supported ESOF since its very first meeting in 2004 and is funding the Nature Travel Grant Scheme for journalists to attend ESOF2018 with the aim of increasing the impact of ESOF. The Nature Travel Grant Scheme offers a lump sum of £400 for journalists based in Europe and £800 for journalists based outside of Europe, to help cover the costs of travel and accommodation to attend ESOF2018.

More information

Good luck!

(My previous posting about ESOF 2018 was published Sept. 4, 2017 [scroll down about 50% of the way], should you be curious.)

Why don’t you CRISPR yourself?

It must have been quite the conference. Josiah Zayner plunged a needle into himself and claimed to have changed his DNA (deoxyribonucleic acid) while giving his talk. (*Segue: There is some Canadian content if you keep reading.*) From an Oct. 10, 2017 article by Adele Peters for Fast Company (Note: A link has been removed),

“What we’ve got here is some DNA, and this is a syringe,” Josiah Zayner tells a room full of synthetic biologists and other researchers. He fills the needle and plunges it into his skin. “This will modify my muscle genes and give me bigger muscles.”

Zayner, a biohacker–basically meaning he experiments with biology in a DIY lab rather than a traditional one–was giving a talk called “A Step-by-Step Guide to Genetically Modifying Yourself With CRISPR” at the SynBioBeta conference in San Francisco, where other presentations featured academics in suits and the young CEOs of typical biotech startups. Unlike the others, he started his workshop by handing out shots of scotch and a booklet explaining the basics of DIY [do-it-yourself] genome engineering.

If you want to genetically modify yourself, it turns out, it’s not necessarily complicated. As he offered samples in small baggies to the crowd, Zayner explained that it took him about five minutes to make the DNA that he brought to the presentation. The vial held Cas9, an enzyme that snips DNA at a particular location targeted by guide RNA, in the gene-editing system known as CRISPR. In this case, it was designed to knock out the myostatin gene, which produces a hormone that limits muscle growth and lets muscles atrophy. In a study in China, dogs with the edited gene had double the muscle mass of normal dogs. If anyone in the audience wanted to try it, they could take a vial home and inject it later. Even rubbing it on skin, Zayner said, would have some effect on cells, albeit limited.

Peters goes on to note that Zayner has a PhD in molecular biology and biophysics and worked for NASA (US National Aeronautics and Space Administration). Zayner’s Wikipedia entry fills in a few more details (Note: Links have been removed),

Zayner graduated from the University of Chicago with a Ph.D. in biophysics in 2013. He then spent two years as a researcher at NASA’s Ames Research Center,[2] where he worked on Martian colony habitat design. While at the agency, Zayner also analyzed speech patterns in online chat, Twitter, and books, and found that language on Twitter and online chat is closer to how people talk than to how they write.[3] Zayner found NASA’s scientific work less innovative than he expected, and upon leaving in January 2016, he launched a crowdfunding campaign to provide CRISPR kits to let the general public experiment with editing bacterial DNA. He also continued his grad school business, The ODIN, which sells kits to let the general public experiment at home. As of May 2016, The ODIN had four employees and operates out of Zayner’s garage.[2]

He refers to himself as a biohacker and believes in the importance of letting the general public participate in scientific experimentation, rather than leaving it segregated to labs.[2][4][1] Zayner found the biohacking community exclusive and hierarchical, particularly in the types of people who decide what is “safe”. He hopes that his projects can let even more people experiment in their homes. Other scientists responded that biohacking is inherently privileged, as it requires leisure time and money, and that deviance from the safety rules of concern would lead to even harsher regulations for all.[5] Zayner’s public CRISPR kit campaign coincided with wider scrutiny over genetic modification. Zayner maintained that these fears were based on misunderstandings of the product, as genetic experiments on yeast and bacteria cannot produce a viral epidemic.[6][7] In April 2015, Zayner ran a hoax on Craigslist to raise awareness about the future potential of forgery in forensics genetics testing.[8]

In February 2016, Zayner performed a full body microbiome transplant on himself, including a fecal transplant, to experiment with microbiome engineering and see if he could cure himself from gastrointestinal and other health issues. The microbiome from the donor’s feces successfully transplanted in Zayner’s gut according to DNA sequencing done on samples.[2] This experiment was documented by filmmakers Kate McLean and Mario Furloni and turned into the short documentary film Gut Hack.[9]

In December 2016, Zayner created a fluorescent beer by engineering yeast to contain the green fluorescent protein from jellyfish. Zayner’s company, The ODIN, released kits to allow people to create their own engineered fluorescent yeast and this was met with some controversy as the FDA declared the green fluorescent protein can be seen as a color additive.[10] Zayner views the kit as a way that individuals can use genetic engineering to create things in their everyday lives.[11]

I found the video for Zayner’s now completed crowdfunding campaign,

I also found The ODIN website (mentioned in the Wikipedia essay) where they claim to be selling various gene editing and gene engineering kits including the CRISPR editing kits mentioned in Peters’ article,

In 2016, he [Zayner] sold $200,000 worth of products, including a kit for yeast that can be used to brew glowing bioluminescent beer, a kit to discover antibiotics at home, and a full home lab that’s roughly the cost of a MacBook Pro. In 2017, he expects to double sales. Many kits are simple, and most buyers probably aren’t using the supplies to attempt to engineer themselves (many kits go to classrooms). But Zayner also hopes that as people using the kits gain genetic literacy, they experiment in wilder ways.

Zayner sells a full home biohacking lab that’s roughly the cost of a MacBook Pro. [Photo: The ODIN]

He questions whether traditional research methods, like randomized controlled trials, are the only way to make discoveries, pointing out that in newer personalized medicine (such as immunotherapy for cancer, which is personalized for each patient), a sample size of one person makes sense. At his workshop, he argued that people should have the choice to self-experiment if they want to; we also change our DNA when we drink alcohol or smoke cigarettes or breathe in dirty city air. Other society-sanctioned activities are more dangerous. “We sacrifice maybe a million people a year to the car gods,” he said. “If you ask someone, ‘Would you get rid of cars?’–no.” …

US researchers both conventional and DIY types such as Zayner are not the only ones who are editing genes. The Chinese study mentioned in Peters’ article was written up in an Oct. 19, 2015 article by Antonio Regalado for the MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),

Scientists in China say they are the first to use gene editing to produce customized dogs. They created a beagle with double the amount of muscle mass by deleting a gene called myostatin.

The dogs have “more muscles and are expected to have stronger running ability, which is good for hunting, police (military) applications,” Liangxue Lai, a researcher with the Key Laboratory of Regenerative Biology at the Guangzhou Institutes of Biomedicine and Health, said in an e-mail.

Lai and 28 colleagues reported their results last week in the Journal of Molecular Cell Biology, saying they intend to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy. “The goal of the research is to explore an approach to the generation of new disease dog models for biomedical research,” says Lai. “Dogs are very close to humans in terms of metabolic, physiological, and anatomical characteristics.”

Lai said his group had no plans to breed the extra-muscular beagles as pets. Other teams, however, could move quickly to commercialize gene-altered dogs, potentially editing their DNA to change their size, enhance their intelligence, or correct genetic illnesses. A different Chinese institute, BGI, said in September it had begun selling miniature pigs, created via gene editing, for $1,600 each as novelty pets.

People have been influencing the genetics of dogs for millennia. By at least 36,000 years ago, early humans had already started to tame wolves and shape the companions we have today. Charles Darwin frequently cited dog breeding in The Origin of Species to demonstrate how evolution gradually occurs by a process of selection. With CRISPR, however, evolution is no longer gradual or subject to chance. It is immediate and under human control.

It is precisely that power that is stirring wide debate and concern over CRISPR. Yet at least some researchers think that gene-edited dogs could put a furry, friendly face on the technology. In an interview this month, George Church, a professor at Harvard University who leads a large effort to employ CRISPR editing, said he thinks it will be possible to augment dogs by using DNA edits to make them live longer or simply make them smarter.

Church said he also believed the alteration of dogs and other large animals could open a path to eventual gene editing of people. “Germline editing of pigs or dogs offers a line into it,” he said. “People might say, ‘Hey, it works.’ ”

In the meantime, Zayner’s ideas are certainly thought-provoking. I’m not endorsing either his products or his ideas but it should be noted that early science pioneers such as Humphry Davy and others experimented on themselves. For anyone unfamiliar with Davy (from the Humphry Davy Wikipedia entry; Note: Links have been removed),

Sir Humphry Davy, 1st Baronet PRS MRIA FGS (17 December 1778 – 29 May 1829) was a Cornish chemist and inventor,[1] who is best remembered today for isolating a series of substances for the first time: potassium and sodium in 1807 and calcium, strontium, barium, magnesium and boron the following year, as well as discovering the elemental nature of chlorine and iodine. He also studied the forces involved in these separations, inventing the new field of electrochemistry. Berzelius called Davy’s 1806 Bakerian Lecture On Some Chemical Agencies of Electricity[2] “one of the best memoirs which has ever enriched the theory of chemistry.”[3] He was a Baronet, President of the Royal Society (PRS), Member of the Royal Irish Academy (MRIA), and Fellow of the Geological Society (FGS). He also invented the Davy lamp and a very early form of incandescent light bulb.

Canadian content*

A Nov. 11, 2017 posting on the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks blog notes that self-experimentation has a long history and goes on to describe Zayner’s and others’ biohacking exploits before addressing the legality of biohacking in Canada,

With biohackers entering into the space traditionally held by scientists and clinicians, it begs questions. Professor Timothy Caulfield, a Canada research chair in health, law and policy at the University of Alberta, says when he hears of somebody giving themselves biohacked gene therapy, he wonders: “Is this legal? Is this safe? And if it’s not safe, is there anything that we can do about regulating it? And to be honest with you that’s a tough question and I think it’s an open question.”

In Canada, Caulfield says, Health Canada focuses on products. “You have to have something that you are going to regulate or you have to have something that’s making health claims. So if there is a product that is saying I can cure X, Y, or Z, Health Canada can say, ‘Well let’s make sure the science really backs up that claim.’ The problem with these do-it-yourself approaches is there isn’t really a product. You know these people are experimenting on themselves with something that may or may not be designed for health purposes.”

According to Caulfield, if you could buy a gene therapy kit that was being marketed to you to biohack yourself, that would be different. “Health Canada could jump in. But right here that’s not the case,” he says.

There are places in the world that do regulate biohacking, says Caulfield. “Germany, for example, they have specific laws for it. And here in Canada we do have a regulatory framework that says that you cannot do gene therapy that will alter the germ line. In other words, you can’t do gene therapy or any kind of genetic editing that will create a change that you will pass on to your offspring. So that would be illegal, but that’s not what’s happening here. And I don’t think there’s a regulatory framework that adequately captures it.”

Infectious disease and policy experts aren’t that concerned yet about the possibility of a biohacker unleashing a genetically modified super germ into the population.

“I think in the future that could be a problem,” says Caulfield, “but this isn’t something that would be easy to do in your garage. I think it’s complicated science. But having said that, the science is moving quickly. We need to think about how we are going to control the potential harms.”

You can find out more about the ‘wild’ people (mostly men) of early science in Richard Holmes’ 2008 book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science.

Finally, should you be interested in connecting with synthetic biology enthusiasts, entrepreneurs, and others, SynBioBeta is more than a conference; it’s also an activity hub.

ETA January 25, 2018 (five minutes later): There are some CRISPR/Cas9 events taking place in Toronto, Canada on January 24 and 25, 2018. One is a workshop with Portuguese artist Marta de Menezes, and the other is a panel discussion. See my January 10, 2018 posting for more details.

*’Segue: There is some Canadian content if you keep reading.’ and ‘Canadian content’ added January 25, 2018 six minutes after first publication.

ETA February 20, 2018: Sarah Zhang’s Feb. 20, 2018 article for The Atlantic revisits Josiah Zayner’s decision to inject himself with CRISPR,

When Josiah Zayner watched a biotech CEO drop his pants at a biohacking conference and inject himself with an untested herpes treatment, he realized things had gone off the rails.

Zayner is no stranger to stunts in biohacking—loosely defined as experiments, often on the self, that take place outside of traditional lab spaces. You might say he invented their latest incarnation: He’s sterilized his body to “transplant” his entire microbiome in front of a reporter. He’s squabbled with the FDA about selling a kit to make glow-in-the-dark beer. He’s extensively documented attempts to genetically engineer the color of his skin. And most notoriously, he injected his arm with DNA encoding for CRISPR that could theoretically enhance his muscles—in between taking swigs of Scotch at a live-streamed event during an October conference. (Experts say—and even Zayner himself in the live-stream conceded—it’s unlikely to work.)

So when Zayner saw Ascendance Biomedical’s CEO injecting himself on a live-stream earlier this month, you might say there was an uneasy flicker of recognition.

“Honestly, I kind of blame myself,” Zayner told me recently. He’s been in a soul-searching mood; he recently had a kid and the backlash to the CRISPR stunt in October [2017] had been getting to him. “There’s no doubt in my mind that somebody is going to end up hurt eventually,” he said.

Yup, it’s one of the reasons we have rules; people take things too far. The trick is figuring out how to achieve a balance between risk-taking and recklessness.

Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins of gold and other precious metals found in the stars is so fascinating that I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strewed them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery, are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
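Since the numbers quoted above are easy to test, here’s a quick back-of-envelope check. This is my own sketch (in Python), not anything from Spencer’s piece; it simply assumes one solar mass packed into the 10-kilometre radius quoted in the Warwick release,

# Back-of-envelope check on the neutron star figures quoted above.
# Assumption (mine): one solar mass in a sphere of 10 km radius.
import math

M_SUN = 1.989e30       # kg, mass of the Sun
RADIUS_M = 10_000      # m, ~10 km neutron star radius

volume = (4 / 3) * math.pi * RADIUS_M ** 3   # sphere volume, m^3
density = M_SUN / volume                     # average density, kg/m^3
diameter_miles = (2 * RADIUS_M) / 1609.34    # metres to miles

print(f"average density: {density:.2e} kg/m^3")   # ~4.7e17 kg/m^3
print(f"diameter: {diameter_miles:.1f} miles")    # ~12.4 miles

The density comes out around 5 x 10^17 kg/m^3, roughly the density of an atomic nucleus, which is exactly Spencer’s point about a neutron star being “one giant atomic nucleus,” and the diameter lands comfortably under his “less than 20 miles wide.”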

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

Yarns that harvest and generate energy

The researchers involved in this work are confident enough about their prospects that they are patenting their research into yarns. From an August 25, 2017 news item on Nanowerk,

An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.

In a study published in the Aug. 25 [2017] issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.

“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.

An August 25, 2017 University of Texas at Dallas news release, which originated the news item, expands on the theme,

Yarns Based on Nanotechnology

The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.

In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.

“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”

When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.

Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.

“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”

Lab Tests Show Potential Applications

In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.

To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.

“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”

The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.

“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”

Electricity from Ocean Waves

“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”

In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.

Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent, thereby generating measured electricity.

Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.

“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”

Researchers from the UT Dallas Erik Jonsson School of Engineering and Computer Science and Lintec of America’s Nano-Science & Technology Center also participated in the study.

The investigators have filed a patent on the technology.

In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.

Here’s a link to and a citation for the paper,

Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771

This paper is behind a paywall.

Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,

“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.

This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—with sheets of rubber with coated electrodes on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.

“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electrochemical cell to do this,” says Haines. “So we’re not changing double layer capacitance in normal parallel plate capacitors. But we’re actually changing the electrochemical capacitance on the surface of a supercapacitor yarn.”

While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.

Dexter asks good questions and his post is very informative.
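To make the capacitance argument concrete, here’s a toy calculation of my own; it illustrates the general principle, not the UT Dallas team’s model, and the charge and capacitance values are invented for the example. At a fixed charge Q, a capacitor stores energy E = Q^2/(2C), so anything that lowers the capacitance C raises both the voltage V = Q/C and the stored energy, and that difference is what a harvester can extract,

# Toy model of capacitance-based energy harvesting (illustrative values only).
def stored_energy(q, c):
    """Energy in joules of a capacitor holding charge q (coulombs) at capacitance c (farads)."""
    return q ** 2 / (2 * c)

Q = 1e-3             # coulombs, charge supplied by the electrolyte (assumed)
C_RELAXED = 1e-2     # farads, yarn capacitance when relaxed (assumed)
C_STRETCHED = 8e-3   # farads, lower capacitance when stretched (assumed)

gain = stored_energy(Q, C_STRETCHED) - stored_energy(Q, C_RELAXED)
print(f"voltage: {Q / C_RELAXED:.3f} V -> {Q / C_STRETCHED:.3f} V")
print(f"energy gained per stretch: {gain:.2e} J")

Repeat that gain 30 times a second, as in the stretching experiments described above, and the per-cycle energy difference becomes continuous electrical power.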

Sounding out the TRAPPIST-1 planetary system

It’s been a while since a data sonification story has come this way. Like my first posting on the topic (Feb. 7, 2014), this is another astrophysics ‘piece of music’. From the University of Toronto (Canada) and Thought Café (a Canadian animation studio),

For those who’d like a little text, here’s more from a May 10, 2017 University of Toronto news release (also on EurekAlert) by Don Campbell,

When NASA announced its discovery of the TRAPPIST-1 system back in February [2017] it caused quite a stir, and with good reason. Three of its seven Earth-sized planets lay in the star’s habitable zone, meaning they may harbour suitable conditions for life.

But one of the major puzzles from the original research describing the system was that it seemed to be unstable.

“If you simulate the system, the planets start crashing into one another in less than a million years,” says Dan Tamayo, a postdoc at U of T Scarborough’s Centre for Planetary Science.

“This may seem like a long time, but it’s really just an astronomical blink of an eye. It would be very lucky for us to discover TRAPPIST-1 right before it fell apart, so there must be a reason why it remains stable.”

Tamayo and his colleagues seem to have found a reason why. In research published in the journal Astrophysical Journal Letters, they describe the planets in the TRAPPIST-1 system as being in something called a “resonant chain” that can strongly stabilize the system.

In resonant configurations, planets’ orbital periods form ratios of whole numbers. It’s a very technical principle, but a good example is how Neptune orbits the Sun three times in the amount of time it takes Pluto to orbit twice. This is a good thing for Pluto because otherwise it wouldn’t exist. Since the two planets’ orbits intersect, if things were random they would collide, but because of resonance, the locations of the planets relative to one another keeps repeating.

“There’s a rhythmic repeating pattern that ensures the system remains stable over a long period of time,” says Matt Russo, a post-doc at the Canadian Institute for Theoretical Astrophysics (CITA) who has been working on creative ways to visualize the system.

TRAPPIST-1 takes this principle to a whole other level with all seven planets being in a chain of resonances. To illustrate this remarkable configuration, Tamayo, Russo and colleague Andrew Santaguida created an animation in which the planets play a piano note every time they pass in front of their host star, and a drum beat every time a planet overtakes its nearest neighbour.

Because the planets’ periods are simple ratios of each other, their motion creates a steady repeating pattern that is similar to how we play music. Simple frequency ratios are also what makes two notes sound pleasing when played together.

Speeding up the planets’ orbital frequencies into the human hearing range produces an astrophysical symphony of sorts, but one that’s playing out more than 40 light years away.

“Most planetary systems are like bands of amateur musicians playing their parts at different speeds,” says Russo. “TRAPPIST-1 is different; it’s a super-group with all seven members synchronizing their parts in nearly perfect time.”

But even synchronized orbits don’t necessarily survive very long, notes Tamayo. For technical reasons, chaos theory also requires precise orbital alignments to ensure systems remain stable. This can explain why the simulations done in the original discovery paper quickly resulted in the planets colliding with one another.

“It’s not that the system is doomed, it’s that stable configurations are very exact,” he says. “We can’t measure all the orbital parameters well enough at the moment, so the simulated systems kept resulting in collisions because the setups weren’t precise.”

In order to overcome this, Tamayo and his team looked at the system not as it is today, but how it may have originally formed. When the system was being born out of a disk of gas, the planets should have migrated relative to one another, allowing the system to naturally settle into a stable resonant configuration.

“This means that early on, each planet’s orbit was tuned to make it harmonious with its neighbours, in the same way that instruments are tuned by a band before it begins to play,” says Russo. “That’s why the animation produces such beautiful music.”

The team tested the simulations using the supercomputing cluster at the Canadian Institute for Theoretical Astrophysics (CITA) and found that the majority of the simulations they generated remained stable for as long as they could possibly run them. This was about 100 times longer than it took for the simulations in the original research paper describing TRAPPIST-1 to go berserk.

“It seems somehow poetic that this special configuration that can generate such remarkable music can also be responsible for the system surviving to the present day,” says Tamayo.

Here’s a link to and a citation for the paper,

Convergent Migration Renders TRAPPIST-1 Long-lived by Daniel Tamayo, Hanno Rein, Cristobal Petrovich, and Norman Murray. The Astrophysical Journal Letters, Volume 840, Number 2 https://doi.org/10.5281/zenodo.496153 Published 2017 May 10

© 2017. The American Astronomical Society. All rights reserved.

This paper is open access.
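For anyone curious how orbital periods become musical notes, here’s a rough sketch of the sonification idea. It’s my own illustration, and the U of T team’s actual mapping may differ; the orbital periods are approximate published values, and the speed-up factor is simply one convenient choice. Each planet’s orbital frequency (the reciprocal of its period) is multiplied by a single common factor until it lands in the human hearing range, and because every planet is scaled identically, the ratios between the resulting pitches are exactly the ratios of the resonant chain,

# Sketch of the TRAPPIST-1 sonification idea (illustrative, not the team's code).
# Approximate orbital periods in Earth days.
PERIODS_DAYS = {"b": 1.51, "c": 2.42, "d": 4.05, "e": 6.10,
                "f": 9.21, "g": 12.35, "h": 18.77}

SECONDS_PER_DAY = 86_400
SPEEDUP = 2 ** 27     # one convenient scale factor (my choice), ~1.3e8

for planet, period_days in PERIODS_DAYS.items():
    orbital_hz = 1 / (period_days * SECONDS_PER_DAY)  # true frequency, far below hearing
    audio_hz = orbital_hz * SPEEDUP                   # shifted into the audible range
    print(f"TRAPPIST-1{planet}: {audio_hz:6.1f} Hz")

Using a power of two as the speed-up factor shifts every orbit by whole octaves, so the musical intervals between the planets are preserved; with the factor chosen here, TRAPPIST-1b sounds near 1,029 Hz and TRAPPIST-1h near 83 Hz.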

The Canadian science scene and the 2017 Canadian federal budget

There’s not much happening in the 2017-18 budget in terms of new spending according to Paul Wells’ March 22, 2017 article for TheStar.com,

This is the 22nd or 23rd federal budget I’ve covered. And I’ve never seen the like of the one Bill Morneau introduced on Wednesday [March 22, 2017].

Not even in the last days of the Harper Conservatives did a budget provide for so little new spending — $1.3 billion in the current budget year, total, in all fields of government. That’s a little less than half of one per cent of all federal program spending for this year.

But times are tight. The future is a place where we can dream. So the dollars flow more freely in later years. In 2021-22, the budget’s fifth planning year, new spending peaks at $8.2 billion. Which will be about 2.4 per cent of all program spending.

He’s not alone in this 2017 federal budget analysis; CBC (Canadian Broadcasting Corporation) pundits, Chantal Hébert, Andrew Coyne, and Jennifer Ditchburn said much the same during their ‘At Issue’ segment of the March 22, 2017 broadcast of The National (news).

Before I focus on the science and technology budget, here are some general highlights from the CBC’s March 22, 2017 article on the 2017-18 budget announcement (Note: Links have been removed),

Here are highlights from the 2017 federal budget:

  • Deficit: $28.5 billion, up from $25.4 billion projected in the fall.
  • Trend: Deficits gradually decline over next five years — but still at $18.8 billion in 2021-22.
  • Housing: $11.2 billion over 11 years, already budgeted, will go to a national housing strategy.
  • Child care: $7 billion over 10 years, already budgeted, for new spaces, starting 2018-19.
  • Indigenous: $3.4 billion in new money over five years for infrastructure, health and education.
  • Defence: $8.4 billion in capital spending for equipment pushed forward to 2035.
  • Care givers: New care-giving benefit up to 15 weeks, starting next year.
  • Skills: New agency to research and measure skills development, starting 2018-19.
  • Innovation: $950 million over five years to support business-led “superclusters.”
  • Startups: $400 million over three years for a new venture capital catalyst initiative.
  • AI: $125 million to launch a pan-Canadian Artificial Intelligence Strategy.
  • Coding kids: $50 million over two years for initiatives to teach children to code.
  • Families: Option to extend parental leave up to 18 months.
  • Uber tax: GST to be collected on ride-sharing services.
  • Sin taxes: One cent more on a bottle of wine, five cents on a 24-case of beer.
  • Bye-bye: No more Canada Savings Bonds.
  • Transit credit killed: 15 per cent non-refundable public transit tax credit phased out this year.

You can find the entire 2017-18 budget here.

Science and the 2017-18 budget

For anyone interested in the science news, you’ll find most of that in the 2017 budget’s Chapter 1 — Skills, Innovation and Middle Class jobs. As well, Wayne Kondro has written up a précis in his March 22, 2017 article for Science (magazine),

Finance officials, who speak on condition of anonymity during the budget lock-up, indicated the budgets of the granting councils, the main source of operational grants for university researchers, will be “static” until the government can assess recommendations that emerge from an expert panel formed in 2015 and headed by former University of Toronto President David Naylor to review basic science in Canada [highlighted in my June 15, 2016 posting; $2M has been allocated for the advisor and associated secretariat]. Until then, the officials said, funding for the Natural Sciences and Engineering Research Council of Canada (NSERC) will remain at roughly $848 million, whereas that for the Canadian Institutes of Health Research (CIHR) will remain at $773 million, and for the Social Sciences and Humanities Research Council [SSHRC] at $547 million.

NSERC, though, will receive $8.1 million over 5 years to administer a PromoScience Program that introduces youth, particularly underrepresented groups like Aboriginal people and women, to science, technology, engineering, and mathematics through measures like “space camps and conservation projects.” CIHR, meanwhile, could receive modest amounts from separate plans to identify climate change health risks and to reduce drug and substance abuse, the officials added.

… Canada’s Innovation and Skills Plan, would funnel $600 million over 5 years allocated in 2016, and $112.5 million slated for public transit and green infrastructure, to create Silicon Valley–like “super clusters,” which the budget defined as “dense areas of business activity that contain large and small companies, post-secondary institutions and specialized talent and infrastructure.” …

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

… Among more specific measures are vows to: Use $87.7 million in previous allocations to the Canada Research Chairs program to create 25 “Canada 150 Research Chairs” honoring the nation’s 150th year of existence, provide $1.5 million per year to support the operations of the office of the as-yet-unappointed national science adviser [see my Dec. 7, 2016 post for information about the job posting, which is now closed]; provide $165.7 million [emphasis mine] over 5 years for the nonprofit organization Mitacs to create roughly 6300 more co-op positions for university students and grads, and provide $60.7 million over five years for new Canadian Space Agency projects, particularly for Canadian participation in the National Aeronautics and Space Administration’s next Mars Orbiter Mission.

Kondro was either reading an earlier version of the budget or made an error regarding Mitacs (from the budget in the “A New, Ambitious Approach to Work-Integrated Learning” subsection),

Mitacs has set an ambitious goal of providing 10,000 work-integrated learning placements for Canadian post-secondary students and graduates each year—up from the current level of around 3,750 placements. Budget 2017 proposes to provide $221 million [emphasis mine] over five years, starting in 2017–18, to achieve this goal and provide relevant work experience to Canadian students.

As well, the budget item for the Pan-Canadian Artificial Intelligence Strategy is $125M.

Moving on from Kondro’s précis, the budget (in the “Positioning National Research Council Canada Within the Innovation and Skills Plan” subsection) announces support for these specific areas of science,

Stem Cell Research

The Stem Cell Network, established in 2001, is a national not-for-profit organization that helps translate stem cell research into clinical applications, commercial products and public policy. Its research holds great promise, offering the potential for new therapies and medical treatments for respiratory and heart diseases, cancer, diabetes, spinal cord injury, multiple sclerosis, Crohn’s disease, auto-immune disorders and Parkinson’s disease. To support this important work, Budget 2017 proposes to provide the Stem Cell Network with renewed funding of $6 million in 2018–19.

Space Exploration

Canada has a long and proud history as a space-faring nation. As our international partners prepare to chart new missions, Budget 2017 proposes investments that will underscore Canada’s commitment to innovation and leadership in space. Budget 2017 proposes to provide $80.9 million on a cash basis over five years, starting in 2017–18, for new projects through the Canadian Space Agency that will demonstrate and utilize Canadian innovations in space, including in the field of quantum technology as well as for Mars surface observation. The latter project will enable Canada to join the National Aeronautics and Space Administration’s (NASA’s) next Mars Orbiter Mission.

Quantum Information

The development of new quantum technologies has the potential to transform markets, create new industries and produce leading-edge jobs. The Institute for Quantum Computing is a world-leading Canadian research facility that furthers our understanding of these innovative technologies. Budget 2017 proposes to provide the Institute with renewed funding of $10 million over two years, starting in 2017–18.

Social Innovation

Through community-college partnerships, the Community and College Social Innovation Fund fosters positive social outcomes, such as the integration of vulnerable populations into Canadian communities. Following the success of this pilot program, Budget 2017 proposes to invest $10 million over two years, starting in 2017–18, to continue this work.

International Research Collaborations

The Canadian Institute for Advanced Research (CIFAR) connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics that touch all humanity. Past collaborations facilitated by CIFAR are credited with fostering Canada’s leadership in artificial intelligence and deep learning. Budget 2017 proposes to provide renewed and enhanced funding of $35 million over five years, starting in 2017–18.

Earlier this week, I highlighted Canada’s strength in the field of regenerative medicine, specifically stem cells, in a March 21, 2017 posting. The $6M in the current budget doesn’t look like increased funding but rather a one-year extension. I’m sure they’re happy to receive it but I imagine it’s a little hard to plan major research projects when you’re not sure how long your funding will last.

As for Canadian leadership in artificial intelligence, that was news to me. Here’s more from the budget,

Canada a Pioneer in Deep Learning in Machines and Brains

CIFAR’s Learning in Machines & Brains program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” a computer technique inspired by the human brain and neural networks, which is now routinely used by the likes of Google and Facebook. The program brings together computer scientists, biologists, neuroscientists, psychologists and others, and the result is rich collaborations that have propelled artificial intelligence research forward. The program is co-directed by one of Canada’s foremost experts in artificial intelligence, the Université de Montréal’s Yoshua Bengio, and for his many contributions to the program, the University of Toronto’s Geoffrey Hinton, another Canadian leader in this field, was awarded the title of Distinguished Fellow by CIFAR in 2014.

Meanwhile, from chapter 1 of the budget in the subsection titled “Preparing for the Digital Economy,” there is this provision for children,

Providing educational opportunities for digital skills development to Canadian girls and boys—from kindergarten to grade 12—will give them the head start they need to find and keep good, well-paying, in-demand jobs. To help provide coding and digital skills education to more young Canadians, the Government intends to launch a competitive process through which digital skills training organizations can apply for funding. Budget 2017 proposes to provide $50 million over two years, starting in 2017–18, to support these teaching initiatives.

I wonder if BC Premier Christy Clark is heaving a sigh of relief. At the 2016 #BCTECH Summit, she announced that students in BC would learn to code at school and in newly enhanced coding camp programmes (see my Jan. 19, 2016 posting). Interestingly, there was no mention of additional funding to support her initiative. I guess this money from the federal government comes at a good time as we will have a provincial election later this spring where she can announce the initiative again and, this time, mention there’s money for it.

Attracting brains from afar

Ivan Semeniuk in his March 23, 2017 article (for the Globe and Mail) reads between the lines to analyze the budget’s possible impact on Canadian science,

But a between-the-lines reading of the budget document suggests the government also has another audience in mind: uneasy scientists from the United States and Britain.

The federal government showed its hand at the 2017 #BCTECH Summit. From a March 16, 2017 article by Meera Bains for the CBC news online,

At the B.C. tech summit, Navdeep Bains, Canada’s minister of innovation, said the government will act quickly to fast track work permits to attract highly skilled talent from other countries.

“We’re taking the processing time, which takes months, and reducing it to two weeks for immigration processing for individuals [who] need to come here to help companies grow and scale up,” Bains said.

“So this is a big deal. It’s a game changer.”

That change will happen through the Global Talent Stream, a new program under the federal government’s temporary foreign worker program. It’s scheduled to begin on June 12, 2017.

U.S. companies are taking notice and a Canadian firm, True North, is offering to help them set up shop.

“What we suggest is that they think about moving their operations, or at least a chunk of their operations, to Vancouver, set up a Canadian subsidiary,” said the company’s founder, Michael Tippett.

“And that subsidiary would be able to house and accommodate those employees.”

Industry experts say while the future is unclear for the tech sector in the U.S., it’s clear high tech in B.C. is gearing up to take advantage.

US business attempts to take advantage of Canada’s relative stability and openness to immigration would seem to be the motive for at least one cross border initiative, the Cascadia Urban Analytics Cooperative. From my Feb. 28, 2017 posting,

There was some big news about the smallest version of the Cascadia region on Thursday, Feb. 23, 2017 when the University of British Columbia (UBC) , the University of Washington (state; UW), and Microsoft announced the launch of the Cascadia Urban Analytics Cooperative. From the joint Feb. 23, 2017 news release (read on the UBC website or read on the UW website),

In an expansion of regional cooperation, the University of British Columbia and the University of Washington today announced the establishment of the Cascadia Urban Analytics Cooperative to use data to help cities and communities address challenges from traffic to homelessness. The largest industry-funded research partnership between UBC and the UW, the collaborative will bring faculty, students and community stakeholders together to solve problems, and is made possible thanks to a $1-million gift from Microsoft.

Today’s announcement follows last September’s [2016] Emerging Cascadia Innovation Corridor Conference in Vancouver, B.C. The forum brought together regional leaders for the first time to identify concrete opportunities for partnerships in education, transportation, university research, human capital and other areas.

A Boston Consulting Group study unveiled at the conference showed the region between Seattle and Vancouver has “high potential to cultivate an innovation corridor” that competes on an international scale, but only if regional leaders work together. The study says that could be possible through sustained collaboration aided by an educated and skilled workforce, a vibrant network of research universities and a dynamic policy environment.

It gets better, it seems Microsoft has been positioning itself for a while if Matt Day’s analysis is correct (from my Feb. 28, 2017 posting),

Matt Day in a Feb. 23, 2017 article for the The Seattle Times provides additional perspective (Note: Links have been removed),

Microsoft’s effort to nudge Seattle and Vancouver, B.C., a bit closer together got an endorsement Thursday [Feb. 23, 2017] from the leading university in each city.

The partnership has its roots in a September [2016] conference in Vancouver organized by Microsoft’s public affairs and lobbying unit [emphasis mine.] That gathering was aimed at tying business, government and educational institutions in Microsoft’s home region in the Seattle area closer to its Canadian neighbor.

Microsoft last year [2016] opened an expanded office in downtown Vancouver with space for 750 employees, an outpost partly designed to draw to the Northwest more engineers than the company can get through the U.S. guest worker system [emphasis mine].

This was all prior to President Trump’s legislative moves in the US, which have at least one Canadian observer a little more gleeful than I’m comfortable with. From a March 21, 2017 article by Susan Lum  for CBC News online,

U.S. President Donald Trump’s efforts to limit travel into his country while simultaneously cutting money from science-based programs provides an opportunity for Canada’s science sector, says a leading Canadian researcher.

“This is Canada’s moment. I think it’s a time we should be bold,” said Alan Bernstein, president of CIFAR [which on March 22, 2017 was awarded $125M in the Canadian federal budget announcement to launch the Pan-Canadian Artificial Intelligence Strategy], a global research network that funds hundreds of scientists in 16 countries.

Bernstein believes there are many reasons why Canada has become increasingly attractive to scientists around the world, including the political climate in the United States and the Trump administration’s travel bans.

Thankfully, Bernstein calms down a bit,

“It used to be if you were a bright young person anywhere in the world, you would want to go to Harvard or Berkeley or Stanford, or what have you. Now I think you should give pause to that,” he said. “We have pretty good universities here [emphasis mine]. We speak English. We’re a welcoming society for immigrants.”

Bernstein cautions that Canada should not be seen to be poaching scientists from the United States — but there is an opportunity.

“It’s as if we’ve been in a choir of an opera in the back of the stage and all of a sudden the stars all left the stage. And the audience is expecting us to sing an aria. So we should sing,” Bernstein said.

Bernstein said the federal government, with this week’s so-called innovation budget, can help Canada hit the right notes.

“Innovation is built on fundamental science, so I’m looking to see if the government is willing to support, in a big way, fundamental science in the country.”

Pretty good universities, eh? Thank you, Dr. Bernstein, for keeping some of the boosterism in check. Let’s leave the chest thumping to President Trump and his cronies.

Ivan Semeniuk’s March 23, 2017 article (for the Globe and Mail) provides more details about the situation in the US and in Britain,

Last week, Donald Trump’s first budget request made clear the U.S. President would significantly reduce or entirely eliminate research funding in areas such as climate science and renewable energy if permitted by Congress. Even the National Institutes of Health, which spearheads medical research in the United States and is historically supported across party lines, was unexpectedly targeted for a $6-billion (U.S.) cut that the White House said could be achieved through “efficiencies.”

In Britain, a recent survey found that 42 per cent of academics were considering leaving the country over worries about a less welcoming environment and the loss of research money that a split with the European Union is expected to bring.

In contrast, Canada’s upbeat language about science in the budget makes a not-so-subtle pitch for diversity and talent from abroad, including $117.6-million to establish 25 research chairs with the aim of attracting “top-tier international scholars.”

For good measure, the budget also includes funding for science promotion and $2-million annually for Canada’s yet-to-be-hired Chief Science Advisor, whose duties will include ensuring that government researchers can speak freely about their work.

“What we’ve been hearing over the last few months is that Canada is seen as a beacon, for its openness and for its commitment to science,” said Ms. Duncan [Kirsty Duncan, Minister of Science], who did not refer directly to either the United States or Britain in her comments.

Providing a less optimistic note, Erica Alini in her March 22, 2017 online article for Global News mentions a perennial problem, the Canadian brain drain,

The budget includes a slew of proposed reforms and boosted funding for existing training programs, as well as new skills-development resources for unemployed and underemployed Canadians not covered under current EI-funded programs.

There are initiatives to help women and indigenous people get degrees or training in science, technology, engineering and mathematics (the so-called STEM subjects) and even to teach kids as young as kindergarten-age to code.

But there was no mention of how to make sure Canadians with the right skills remain in Canada, TD {Toronto-Dominion Bank} Economics’ DePratto [TD is currently experiencing a scandal {March 13, 2017 Huffington Post news item}] told Global News.

Canada ranks in the middle of the pack compared to other advanced economies when it comes to its share of graduates in STEM fields, but the U.S. doesn’t shine either, said DePratto [Brian DePratto, senior economist at TD Economics].

The key difference between Canada and the U.S. is the ability to retain domestic talent and attract brains from all over the world, he noted.

To be blunt, there may be some opportunities for Canadian science, but we would do well to remember that (a) US businesses have no particular loyalty to Canada and (b) all it takes is an election to turn any perceived advantages into disadvantages.

Digital policy and intellectual property issues

Dubbed by some the ‘innovation’ budget (official title: Building a Strong Middle Class), the 2017 budget attempts to address a longstanding innovation issue. From a March 22, 2017 posting by Michael Geist on his eponymous blog (Note: Links have been removed),

The release of today’s [March 22, 2017] federal budget is expected to include a significant emphasis on innovation, with the government revealing how it plans to spend (or re-allocate) hundreds of millions of dollars that are intended to support innovation. Canada’s dismal innovation record needs attention, but spending our way to a more innovative economy is unlikely to yield the desired results. While Navdeep Bains, the Innovation, Science and Economic Development Minister, has talked for months about the importance of innovation, Toronto Star columnist Paul Wells today delivers a cutting but accurate assessment of those efforts:

“This government is the first with a minister for innovation! He’s Navdeep Bains. He frequently posts photos of his meetings on Twitter, with the hashtag “#innovation.” That’s how you know there is innovation going on. A year and a half after he became the minister for #innovation, it’s not clear what Bains’s plans are. It’s pretty clear that within the government he has less than complete control over #innovation. There’s an advisory council on economic growth, chaired by the McKinsey guru Dominic Barton, which periodically reports to the government urging more #innovation.

There’s a science advisory panel, chaired by former University of Toronto president David Naylor, that delivered a report to Science Minister Kirsty Duncan more than three months ago. That report has vanished. One presumes that’s because it offered some advice. Whatever Bains proposes, it will have company.”

Wells is right. Bains has been very visible with plenty of meetings and public photo shoots but no obvious innovation policy direction. This represents a missed opportunity since Bains has plenty of policy tools at his disposal that could advance Canada’s innovation framework without focusing on government spending.

For example, Canada’s communications system – wireless and broadband Internet access – falls directly within his portfolio and is crucial for both business and consumers. Yet Bains has been largely missing in action on the file. He gave approval for the Bell – MTS merger that virtually everyone concedes will increase prices in the province and make the communications market less competitive. There are potential policy measures that could bring new competitors into the market (MVNOs [mobile virtual network operators] and municipal broadband) and that could make it easier for consumers to switch providers (ban on unlocking devices). Some of this falls to the CRTC, but government direction and emphasis would make a difference.

Even more troubling has been his near total invisibility on issues relating to new fees or taxes on Internet access and digital services. Canadian Heritage Minister Mélanie Joly has taken control of the issue with the possibility that Canadians could face increased costs for their Internet access or digital services through mandatory fees to contribute to Canadian content.  Leaving aside the policy objections to such an approach (reducing affordable access and the fact that foreign sources now contribute more toward Canadian English language TV production than Canadian broadcasters and distributors), Internet access and e-commerce are supposed to be Bains’ issue and they have a direct connection to the innovation file. How is it possible for the Innovation, Science and Economic Development Minister to have remained silent for months on the issue?

Bains has been largely missing on trade related innovation issues as well. My Globe and Mail column today focuses on a digital-era NAFTA, pointing to likely U.S. demands on data localization, data transfers, e-commerce rules, and net neutrality.  These are all issues that fall under Bains’ portfolio and will impact investment in Canadian networks and digital services. There are innovation opportunities for Canada here, but Bains has been content to leave the policy issues to others, who will be willing to sacrifice potential gains in those areas.

Intellectual property policy is yet another area that falls directly under Bains’ mandate with an obvious link to innovation, but he has done little on the file. Canada won a huge NAFTA victory late last week involving the Canadian patent system, which was challenged by pharmaceutical giant Eli Lilly. Why has Bains not promoted the decision as an affirmation of Canada’s intellectual property rules?

On the copyright front, the government is scheduled to conduct a review of the Copyright Act later this year, but it is not clear whether Bains will take the lead or again cede responsibility to Joly. The Copyright Act is statutorily under the Industry Minister and reform offers the chance to kickstart innovation. …

For anyone who’s not familiar with this area, innovation is often code for commercialization of science and technology research efforts. These days, digital service and access policies and intellectual property policies are all key to research and innovation efforts.

The country most often held up as an example of leadership in innovation (except in mainstream Canadian news media) is Estonia. The Economist profiled the country in a July 31, 2013 article, and a July 7, 2016 article on apolitical.co provides an update.

Conclusions

Science monies for the tri-council funding agencies (NSERC, SSHRC, and CIHR) are more or less flat, but there were a number of line items in the federal budget which qualify as science funding. The $221M over five years for Mitacs, the $125M for the Pan-Canadian Artificial Intelligence Strategy, additional funding for the Canada Research Chairs, and some of the digital funding could also be counted as part of the overall haul. This is in line with the former government’s (Stephen Harper’s Conservatives) penchant for keeping the tri-council budgets under control while spreading largesse elsewhere, notably the Perimeter Institute, TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), and, in the 2015 budget, $243.5-million towards the Thirty Metre Telescope (TMT), a massive, $1.5-billion astronomical observatory to be constructed on the summit of Mauna Kea, Hawaii. This has led to some hard feelings in the past, with ‘big science’ projects getting what some have felt is an undeserved boost while the ‘small fish’ are left scrabbling for the ever-diminishing (due to past budget cuts and inflation) pittances available from the tri-council agencies.

Mitacs, which started life as a federally funded Network of Centres of Excellence focused on mathematics, has since repositioned itself as an innovation ‘champion’. You can find Mitacs here and you can find the organization’s March 2016 budget submission to the House of Commons Standing Committee on Finance here. At the time, they did not request a specific amount of money; they just asked for more.

The amount Mitacs expects to receive this year is over $40M, almost half of its total income in the 2015-16 fiscal year, according to its 2015-16 annual report (see p. 327 for the Mitacs Statement of Operations to March 31, 2016). The federal government was already the organization’s largest supporter, forking over $39,900,189 in the 2015-16 fiscal year against total income (receipts) of $81,993,390.
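
For anyone who wants to check the figures, here is a minimal arithmetic sketch in Python. The two 2015-16 dollar amounts come straight from the annual report cited above; the per-year figure for the new $221M allocation is a flat five-year average, my own assumption since the budget does not spell out a disbursement schedule.

```python
# Quick check of the Mitacs figures cited above (2015-16 annual report).
federal_2015_16 = 39_900_189       # federal contribution, 2015-16
total_income_2015_16 = 81_993_390  # total receipts, 2015-16

# What share of Mitacs' income came from the federal government?
federal_share = federal_2015_16 / total_income_2015_16
print(f"Federal share of 2015-16 income: {federal_share:.1%}")  # ~48.7%

# The 2017 budget allocates $221M over five years. A flat average
# (an assumption; the actual schedule isn't specified) works out to
# just over $44M per year.
new_allocation = 221_000_000
print(f"Average per year: ${new_allocation / 5:,.0f}")  # $44,200,000
```

Run as-is, the sketch shows the 2015-16 federal share was roughly 49 per cent, hence ‘almost half’, and that the new allocation averages just over $44M a year.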

It’s a strange thing, but too much money can be as bad as too little. I wish the folks at Mitacs nothing but good luck with their windfall.

I don’t see anything in the budget that encourages innovation and investment from the industrial sector in Canada.

Finally, innovation is a cultural issue as much as a financial one. Having worked with a number of developers and start-up companies, I can say the most popular business model is to build a successful business that will be acquired by a large enterprise, allowing the entrepreneurs to retire before the age of 30 (or 40 at the latest). I don’t see anything from the government acknowledging that problem, let alone any attempt to tackle it.

All in all, it was a decent budget with nothing in it to seriously offend anyone.

From flubber to thubber

Flubber (flying rubber) is an imaginary material that provided a plot point for two Disney science fiction comedies: The Absent-Minded Professor (1961) and its 1997 remake, Flubber. By contrast, ‘thubber’ (thermally conductive rubber) is a real-life material newly developed at Carnegie Mellon University (US).

A Feb. 13, 2017 news item on phys.org makes the announcement (Note: A link has been removed),

Carmel Majidi and Jonathan Malen of Carnegie Mellon University have developed a thermally conductive rubber material that represents a breakthrough for creating soft, stretchable machines and electronics. The findings were published in Proceedings of the National Academy of Sciences this week.

The new material, nicknamed “thubber,” is an electrically insulating composite that exhibits an unprecedented combination of metal-like thermal conductivity and elasticity similar to soft biological tissue, and can stretch to over six times its initial length.

A Feb. 13, 2017 Carnegie Mellon University news release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

“Our combination of high thermal conductivity and elasticity is especially critical for rapid heat dissipation in applications such as wearable computing and soft robotics, which require mechanical compliance and stretchable functionality,” said Majidi, an associate professor of mechanical engineering.

Applications could extend to industries like athletic wear and sports medicine—think of lighted clothing for runners and heated garments for injury therapy. Advanced manufacturing, energy, and transportation are other areas where stretchable electronic material could have an impact.

“Until now, high power devices have had to be affixed to rigid, inflexible mounts that were the only technology able to dissipate heat efficiently,” said Malen, an associate professor of mechanical engineering. “Now, we can create stretchable mounts for LED lights or computer processors that enable high performance without overheating in applications that demand flexibility, such as light-up fabrics and iPads that fold into your wallet.”

The key ingredient in “thubber” is a suspension of non-toxic, liquid metal microdroplets. The liquid state allows the metal to deform with the surrounding rubber at room temperature. When the rubber is pre-stretched, the droplets form elongated pathways that are efficient for heat travel. Despite the amount of metal, the material is also electrically insulating.
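
To get a rough sense of the scale involved, here is an illustrative Python sketch; it is my own back-of-the-envelope model, not the one in the PNAS paper. It uses the classic Bruggeman effective-medium approximation for spherical droplets, so it does not capture the elongated pathways created by pre-stretching, and every material value below is an assumption for illustration only.

```python
# Bruggeman effective-medium estimate for liquid-metal droplets in rubber.
# Illustrative only; the PNAS paper models elongated (pre-stretched)
# inclusions, which conduct better than the spheres assumed here.

def bruggeman_k(k_matrix, k_inclusion, phi, lo=1e-6, hi=1e4, tol=1e-9):
    """Solve (1-phi)(k_m - k)/(k_m + 2k) + phi(k_i - k)/(k_i + 2k) = 0."""
    def f(k):
        return ((1 - phi) * (k_matrix - k) / (k_matrix + 2 * k)
                + phi * (k_inclusion - k) / (k_inclusion + 2 * k))
    # f is decreasing in k, so simple bisection finds the root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_rubber = 0.2         # W/(m*K), typical soft silicone (assumed)
k_liquid_metal = 25.0  # W/(m*K), roughly galinstan-like (assumed)

for phi in (0.3, 0.5):  # droplet volume fractions (assumed)
    k_eff = bruggeman_k(k_rubber, k_liquid_metal, phi)
    print(f"phi = {phi:.0%}: k_eff = {k_eff:.2f} W/(m*K)")
```

Even this crude spherical-droplet baseline takes the rubber from 0.2 W/(m*K) to roughly 1.1 W/(m*K) at a 30 per cent loading and roughly 6.7 W/(m*K) at 50 per cent, the kind of jump that makes ‘metal-like’ plausible; the team’s measured values and the strain dependence are, of course, in the paper cited below.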

To demonstrate these findings, the team mounted an LED light onto a strip of the material to create a safety lamp worn around a jogger’s leg. The “thubber” dissipated the heat from the LED, which would have otherwise burned the jogger. The researchers also created a soft robotic fish that swims with a “thubber” tail, without using conventional motors or gears.

“As the field of flexible electronics grows, there will be a greater need for materials like ours,” said Majidi. “We can also see it used for artificial muscles that power bio-inspired robots.”

Majidi and Malen acknowledge the efforts of lead authors Michael Bartlett, Navid Kazem, and Matthew Powell-Palm in performing this multidisciplinary work. They also acknowledge funding from the Air Force, NASA, and the Army Research Office.

Here’s a link to and a citation for the paper,

High thermal conductivity in soft elastomers with elongated liquid metal inclusions by Michael D. Bartlett, Navid Kazem, Matthew J. Powell-Palm, Xiaonan Huang, Wenhuan Sun, Jonathan A. Malen, and Carmel Majidi. Proceedings of the National Academy of Sciences (PNAS). doi: 10.1073/pnas.1616377114

This paper is open access.