Tag Archives: University of Hamburg

Of puke, CRISPR, fruit flies, and monarch butterflies

I’ve never seen an educational institution use a somewhat vulgar slang term such as ‘puke’ before. Especially not in a news release. You’ll find that elsewhere online ‘puke’ has been replaced, in the headline, with the more socially acceptable ‘vomit’.

Since I wanted to catch this historic moment amid concerns that the original version of the news release will disappear, I’m including the entire news release as I saw it on EurekAlert.com (from an October 2, 2019 University of California at Berkeley news release),

News Release 2-Oct-2019

CRISPRed fruit flies mimic monarch butterfly — and could make you puke
Scientists recreate in flies the mutations that let monarch butterfly eat toxic milkweed with impunity

University of California – Berkeley

The fruit flies in Noah Whiteman’s lab may be hazardous to your health.

Whiteman and his University of California, Berkeley, colleagues have turned perfectly palatable fruit flies — palatable, at least, to frogs and birds — into potentially poisonous prey that may cause anything that eats them to puke. In large enough quantities, the flies likely would make a human puke, too, much like the emetic effect of ipecac syrup.

That’s because the team genetically engineered the flies, using CRISPR-Cas9 gene editing, to be able to eat milkweed without dying and to sequester its toxins, just as America’s most beloved butterfly, the monarch, does to deter predators.

This is the first time anyone has recreated in a multicellular organism a set of evolutionary mutations leading to a totally new adaptation to the environment — in this case, a new diet and new way of deterring predators.

Like monarch caterpillars, the CRISPRed fruit fly maggots thrive on milkweed, which contains toxins that kill most other animals, humans included. The maggots store the toxins in their bodies and retain them through metamorphosis, after they turn into adult flies, which means the adult “monarch flies” could also make animals upchuck.

The team achieved this feat by making three CRISPR edits in a single gene: modifications identical to the genetic mutations that allow monarch butterflies to dine on milkweed and sequester its poison. These mutations in the monarch have allowed it to eat common poisonous plants other insects could not and are key to the butterfly’s thriving presence throughout North and Central America.

Flies with the triple genetic mutation proved to be 1,000 times less sensitive to milkweed toxin than the wild fruit fly, Drosophila melanogaster.

Whiteman and his colleagues will describe their experiment in the Oct. 2 [2019] issue of the journal Nature.

Monarch flies

The UC Berkeley researchers created these monarch flies to establish, beyond a shadow of a doubt, which genetic changes in the genome of monarch butterflies were necessary to allow them to eat milkweed with impunity. They found, surprisingly, that only three single-nucleotide substitutions in one gene are sufficient to give fruit flies the same toxin resistance as monarchs.

“All we did was change three sites, and we made these superflies,” said Whiteman, an associate professor of integrative biology. “But to me, the most amazing thing is that we were able to test evolutionary hypotheses in a way that has never been possible outside of cell lines. It would have been difficult to discover this without having the ability to create mutations with CRISPR.”

Whiteman’s team also showed that 20 other insect groups able to eat milkweed and related toxic plants – including moths, beetles, wasps, flies, aphids, a weevil and a true bug, most of which sport the color orange to warn away predators – independently evolved mutations in one, two or three of the same amino acid positions to overcome, to varying degrees, the toxic effects of these plant poisons.

In fact, his team reconstructed the one, two or three mutations that led to each of the four butterfly and moth lineages, each mutation conferring some resistance to the toxin. All three mutations were necessary to make the monarch butterfly the king of milkweed.

Resistance to milkweed toxin comes at a cost, however. Monarch flies are not as quick to recover from upsets, such as being shaken — a test known as “bang” sensitivity.

“This shows there is a cost to mutations, in terms of recovery of the nervous system and probably other things we don’t know about,” Whiteman said. “But the benefit of being able to escape a predator is so high … if it’s death or toxins, toxins will win, even if there is a cost.”

Plant vs. insect

Whiteman is interested in the evolutionary battle between plants and parasites and was intrigued by the evolutionary adaptations that allowed the monarch to beat the milkweed’s toxic defense. He also wanted to know whether other insects that are resistant — though all less resistant than the monarch — use similar tricks to disable the toxin.

“Since plants and animals first invaded land 400 million years ago, this coevolutionary arms race is thought to have given rise to a lot of the plant and animal diversity that we see, because most animals are insects, and most insects are herbivorous: they eat plants,” he said.

Milkweeds and a variety of other plants, including foxglove, the source of digitoxin and digoxin, contain related toxins — called cardiac glycosides — that can kill an elephant and any creature with a beating heart. Foxglove’s effect on the heart is the reason that an extract of the plant, in the genus Digitalis, has been used for centuries to treat heart conditions, and why digoxin and digitoxin are used today to treat congestive heart failure.

These plants’ bitterness alone is enough to deter most animals, but a small minority of insects, including the monarch (Danaus plexippus) and its relative, the queen butterfly (Danaus gilippus), have learned to love milkweed and use it to repel predators.

Whiteman noted that the monarch is a tropical lineage that invaded North America after the last ice age, in part enabled by the three mutations that allowed it to eat a poisonous plant other animals could not, giving it a survival edge and a natural defense against predators.

“The monarch resists the toxin the best of all the insects, and it has the biggest population size of any of them; it’s all over the world,” he said.

The new paper reveals that the mutations had to occur in the right sequence, or else the flies would never have survived the three separate mutational events.

Thwarting the sodium pump

The poisons in these plants, most of them a type of cardenolide, interfere with the sodium/potassium pump (Na+/K+-ATPase) that most of the body’s cells use to move sodium ions out and potassium ions in. The pump creates an ion imbalance that the cell uses to its favor. Nerve cells, for example, transmit signals along their elongated cell bodies, or axons, by opening sodium and potassium gates in a wave that moves down the axon, allowing ions to flow in and out to equilibrate the imbalance. After the wave passes, the sodium pump re-establishes the ionic imbalance.
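A quick aside from me: the ion imbalance mentioned above comes from the pump’s textbook stoichiometry (standard biochemistry, not a finding of this study); each pumping cycle moves three sodium ions out and two potassium ions in for every ATP consumed,

```latex
3\,\mathrm{Na}^{+}_{\text{in}} + 2\,\mathrm{K}^{+}_{\text{out}} + \mathrm{ATP}
\;\longrightarrow\;
3\,\mathrm{Na}^{+}_{\text{out}} + 2\,\mathrm{K}^{+}_{\text{in}} + \mathrm{ADP} + \mathrm{P_i}
```

which is why blocking the pump, as the cardenolides described below do, lets those gradients collapse.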

Digitoxin, from foxglove, and ouabain, the main toxin in milkweed, block the pump and prevent the cell from establishing the sodium/potassium gradient. This throws the ion concentration in the cell out of whack, causing all sorts of problems. In animals with hearts, like birds and humans, heart cells begin to beat so strongly that the heart fails; the result is death by cardiac arrest.

Scientists have known for decades how these toxins interact with the sodium pump: they bind the part of the pump protein that sticks out through the cell membrane, clogging the channel. They’ve even identified two specific amino acid changes or mutations in the protein pump that monarchs and the other insects evolved to prevent the toxin from binding.

But Whiteman and his colleagues weren’t satisfied with this just-so explanation: that insects coincidentally developed the same two identical mutations in the sodium pump 14 separate times, end of story. With the advent of CRISPR-Cas9 gene editing in 2012, co-invented by UC Berkeley’s Jennifer Doudna, Whiteman and colleagues Anurag Agrawal of Cornell University and Susanne Dobler of the University of Hamburg in Germany applied to the Templeton Foundation for a grant to recreate these mutations in fruit flies and to see if they could make the flies immune to the toxic effects of cardenolides.

Seven years, many failed attempts and one new grant from the National Institutes of Health later, along with the dedicated CRISPR work of GenetiVision of Houston, Texas, they finally achieved their goal. In the process, they discovered a third critical, compensatory mutation in the sodium pump that had to occur before the last and most potent resistance mutation would stick. Without this compensatory mutation, the maggots died.

Their detective work required inserting single, double and triple mutations into the fruit fly’s own sodium pump gene, in various orders, to assess which ones were necessary. Insects having only one of the two known amino acid changes in the sodium pump gene were best at resisting the plant poisons, but they also had serious side effects — nervous system problems — consistent with the fact that sodium pump mutations in humans are often associated with seizures. However, the third, compensatory mutation somehow reduces the negative effects of the other two mutations.

“One substitution that evolved confers weak resistance, but it is always present and allows for substitutions that are going to confer the most resistance,” said postdoctoral fellow Marianthi Karageorgi, a geneticist and evolutionary biologist. “This substitution in the insect unlocks the resistance substitutions, reducing the neurological costs of resistance. Because this trait has evolved so many times, we have also shown that this is not random.”

The fact that one compensatory mutation is required before insects with the most resistant mutation could survive placed a constraint on how insects could evolve toxin resistance, explaining why all 21 lineages converged on the same solution, Whiteman said. In other situations, such as where the protein involved is not so critical to survival, animals might find different solutions.

“This helps answer the question, ‘Why does convergence evolve sometimes, but not other times?'” Whiteman said. “Maybe the constraints vary. That’s a simple answer, but if you think about it, these three mutations turned a Drosophila protein into a monarch one, with respect to cardenolide resistance. That’s kind of remarkable.”

###

The research was funded by the Templeton Foundation and the National Institutes of Health. Co-authors with Whiteman and Agrawal are co-first authors Marianthi Karageorgi of UC Berkeley and Simon Groen, now at New York University; Fidan Sumbul and Felix Rico of Aix-Marseille Université in France; Julianne Pelaez, Kirsten Verster, Jessica Aguilar, Susan Bernstein, Teruyuki Matsunaga and Michael Astourian of UC Berkeley; Amy Hastings of Cornell; and Susanne Dobler of Universität Hamburg in Germany.

Robert Sanders’ Oct. 2, 2019 news release for the University of California at Berkeley (it’s also been republished as an Oct. 2, 2019 news item on ScienceDaily) has had its headline changed to ‘vomit’ but you’ll find the more vulgar word remains in two locations in the second paragraph of the revised news release.

If you have time, go to the news release on the University of California at Berkeley website just to admire the images that have been embedded in the news release. Here’s one,

Caption: A Drosophila melanogaster “monarch fly” with mutations introduced by CRISPR-Cas9 genome editing (V111, S119 and H122) to the sodium potassium pump, on a wing of a monarch butterfly (Danaus plexippus). Credit & Copyright: Julianne Pelaez

Here’s a link to and a citation for the paper,

Genome editing retraces the evolution of toxin resistance in the monarch butterfly by Marianthi Karageorgi, Simon C. Groen, Fidan Sumbul, Julianne N. Pelaez, Kirsten I. Verster, Jessica M. Aguilar, Amy P. Hastings, Susan L. Bernstein, Teruyuki Matsunaga, Michael Astourian, Geno Guerra, Felix Rico, Susanne Dobler, Anurag A. Agrawal & Noah K. Whiteman. Nature (2019) DOI: https://doi.org/10.1038/s41586-019-1610-8 Published 02 October 2019

This paper is behind a paywall.

Words about a word

I’m glad they changed the headline and substituted vomit for puke. I think we need vulgar and/or taboo words to release anger or disgust or other difficult emotions. Incorporating those words into standard language deprives them of that power.

The last word: GenetiVision

The company mentioned in the news release, GenetiVision, is the place to go for transgenic flies. Here’s a sampling from their Testimonials webpage,

“GenetiVision’s service has been excellent in the quality and price. The timeliness of its international service has been a big plus. We are very happy with its consistent service and the flies it generates.”
Kwang-Wook Choi, Ph.D.
Department of Biological Sciences
Korea Advanced Institute of Science and Technology


“We couldn’t be happier with GenetiVision. Great prices on both standard P and PhiC31 transgenics, quick turnaround time, and we’re still batting 1000 with transformant success. We used to do our own injections but your service makes it both faster and more cost-effective. Thanks for your service!”
Thomas Neufeld, Ph.D.
Department of Genetics, Cell Biology and Development
University of Minnesota

You can find out more here at the GenetiVision website.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
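To make the redirection idea concrete, here is a minimal, hypothetical sketch of a blink-gated redirection step, using the unnoticeable limits reported above (roughly 2–5 degrees of rotation and 4–9 cm of translation per blink). The function and parameter names are mine for illustration; this is not the authors’ implementation.

```python
import math

# Hypothetical per-blink caps based on the thresholds reported in the release:
# rotations of roughly 2-5 degrees and translations of 4-9 cm went unnoticed.
MAX_ROTATION_DEG = 5.0
MAX_TRANSLATION_M = 0.09

def redirect_during_blink(cam_yaw_deg, cam_pos, target_yaw_deg, target_pos, eyes_closed):
    """Nudge the virtual camera toward a redirection target, but only while
    the eye tracker reports a blink and only within the per-blink caps."""
    if not eyes_closed:
        return cam_yaw_deg, cam_pos  # never redirect while the user can see

    # Rotation: step toward the target yaw, clamped to the unnoticeable cap.
    yaw_step = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, target_yaw_deg - cam_yaw_deg))

    # Translation: step toward the target position, clamped to the unnoticeable cap.
    dx, dz = target_pos[0] - cam_pos[0], target_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dz)
    scale = min(MAX_TRANSLATION_M, dist) / dist if dist > 0 else 0.0
    new_pos = (cam_pos[0] + dx * scale, cam_pos[1] + dz * scale)

    return cam_yaw_deg + yaw_step, new_pos

# Example: during one detected blink, rotate slightly toward a 20-degree offset.
print(redirect_during_blink(0.0, (0.0, 0.0), 20.0, (0.5, 0.0), eyes_closed=True))
```

Applied across many blinks, small steps like this would let a system gradually steer the user’s physical walking path while the virtual scene appears to stay put.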

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association of Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
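To give a flavour of what “blending between thousands of light field images” might look like, here is a toy, hypothetical sketch (not Google’s renderer, which also reprojects each view using its depth map); it simply weights the few captured views whose directions on the capture sphere are closest to the current eye ray,

```python
import numpy as np

def render_pixel(eye_dir, view_dirs, view_colors, k=4):
    """Toy light field lookup (hypothetical): blend the k captured views whose
    camera directions are closest to the current eye ray. view_dirs is an
    (N, 3) array of unit vectors; view_colors is an (N, 3) array of RGB samples."""
    eye_dir = eye_dir / np.linalg.norm(eye_dir)
    sims = view_dirs @ eye_dir                    # cosine similarity to each captured view
    nearest = np.argsort(sims)[-k:]               # indices of the k closest views
    weights = np.clip(sims[nearest], 0.0, None)   # non-negative blend weights
    weights = weights / (weights.sum() + 1e-9)
    return (weights[:, None] * view_colors[nearest]).sum(axis=0)

# Example with a few hundred random views standing in for thousands of captures.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
colors = rng.uniform(size=(500, 3))
print(render_pixel(np.array([0.0, 0.0, 1.0]), dirs, colors))
```

The real system’s depth maps matter because simple direction-based blending like this ghosts badly when nearby objects parallax between views.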

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
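For readers who like the underlying math, the pressure waves described above obey the standard acoustic wave equation, with the vibrating surfaces entering as boundary conditions. This is the generic physics, not the paper’s particular solver or discretization,

```latex
\frac{\partial^2 p}{\partial t^2} = c^2\,\nabla^2 p,
\qquad
\frac{\partial p}{\partial n}\Big|_{\text{surface}} = -\rho\, a_n,
```

where p is the acoustic pressure, c the speed of sound, ρ the air density, and a_n the normal acceleration of the vibrating surface (sign conventions vary with the choice of normal direction). The hard part the Stanford system automates is solving this for complicated, deforming geometry in sync with the animation.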

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

All about time, metronomes, and attoseconds

Apparently there’s a metronome (the world’s most accurate) which makes it possible to get slow-motion videos/movies of atoms and molecules. The Jan. 16, 2012 news item on Nanowerk offers this,

The world’s most accurate metronome keeps stroke to an incredible 10 quintillionth of a second. The device enables slow-motion pictures from the world of molecules and atoms, scientists from the Center for Free-Electron Laser Science (CFEL) in Hamburg, Germany, and the Massachusetts Institute of Technology (MIT) report. The metronome, an ultrashort pulse laser, acting as an optical flywheel, is currently the most precise clock generator on short time scales, writes the research team headed by DESY scientist Prof. Franz X. Kärtner in the journal Nature Photonics (“Optical flywheels with attosecond jitter”). CFEL is a joint venture of DESY, the German Max Planck Society and the University of Hamburg.

I find this prospect gobsmacking (quite stunning), from the news item,

The accuracy of the laser beat is ten attoseconds (quintillionth of a second), or 0.000 000 000 000 000 01 seconds. [emphasis mine] Atomic clocks achieve a higher precision, yet on longer time scales. Only with this accurate laser beat it is possible to take motion pictures of the nanocosm, as the movement of electrons in molecules and atoms take place on time scales of some 100 attoseconds to femtoseconds. [emphasis mine] “That is about the time an electron needs for orbiting a hydrogen nucleus or for the electric charge to move through a molecule during photosynthesis,” Kärtner explains. With novel light sources, so-called free-electron lasers, researchers expect fundamental new insights into those processes.
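To spell out the unit conversion in that quote (simple arithmetic on the ten-attosecond figure, nothing more),

```latex
10~\text{attoseconds} \;=\; 10 \times 10^{-18}~\text{s} \;=\; 10^{-17}~\text{s} \;=\; 0.000\,000\,000\,000\,000\,01~\text{s}
```

which is the same figure written out in the news item, while the electron motions being filmed unfold over hundreds of attoseconds to femtoseconds (10^-15 seconds), so the clock jitter is comfortably smaller than the motions it needs to resolve.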

I can hardly wait to see my first nanocosm in motion. There’s no word as to when this might be possible in either the news item on Nanowerk or on the Center for Free-Electron Laser Science (CFEL) announcement page.