Mentioned here twice before (in a November 29, 2019 posting about the call for proposals and in a March 4, 2020 posting about the preliminary programme), the 2020 International Symposium on Electronic Arts has been postponed. From a March 23, 2020 announcement received via email,
POSTPONEMENT NOTICE – ISEA2020 New Dates: October 13 to 18, 2020
Montreal, March 23, 2020 — With the COVID-19 pandemic, the world is facing an extraordinary situation. Following the measures announced by the Government of Quebec and the Government of Canada, in a concerted decision with its partners and collaborators, Montreal Digital Spring (Printemps numérique) has decided to postpone ISEA2020: WHY SENTIENCE? We are looking forward to seeing you in Montreal, October 13 to 18, 2020!
Our priorities are public health and high-quality programming, and we will work hard during the spring and summer to ensure that the ISEA community enjoys a memorable symposium! We thank you for your understanding.
Any purchases already made will be automatically transferred to the new dates. The new deadline for Early Bird registration, for presenters to upload camera-ready papers and to fill in the Zone Festival form is May 1st, 2020 at 11:59 pm (GMT-5).
The answers to most of your questions can be found in the FAQ. If you have a specific question, contact us at the following emails:
I first featured science slams in a July 17, 2013 posting when they popped up in the UK, although I think they originated in Germany. As for Science Slam Canada, I think they started in 2016 (at least, that’s when they started their Twitter feed).
Science Slam YVR at Fox
It’s beginning to look a lot like … it’s time to have another Science Slam at the Fox!
For those of you who have never experienced the wonder of Science Slam, welcome! We are Vancouver’s most epic science showdown. Sit back, relax, and watch as our competitors battle to achieve science communication fame and glory.
What exactly is a science slam? Based on the format of a poetry slam, a science slam is a competition where speakers gather to share their science with you – the audience. Competitors have five minutes to present on any science topic without the use of a slideshow and are judged based on communication skills, audience impact and scientific content. Props and creative presentation styles are encouraged!
Whether you’re a researcher, student, educator, artist, or communicator, our stage is open to you. If you’ve got a science topic you’re researching, or just a topic you’re excited about, send in an application! If you’re not sure about an idea, just ask!
*Early Bird Tickets are $10, Regular are $12. [emphasis mine] Purchase them here: https://www.eventbrite.com/e/science-slam-at-fox-tickets-80868462749
Doors open at 7pm, event begins at 7:30pm. We’ll see you there!
Science Slam acknowledges that this event takes place on the traditional, ancestral, and unceded territory of the Squamish, Sto:lo, Musqueam, and Tsleil-Waututh Nations. Many of our attendees, Science Slam included, are guests of these territories and must act accordingly.
Science Slam is an inclusive event, as a result hate speech and abuse will not be tolerated. This includes anti-blackness, anti-indigenous, transphobia, homophobia, biphobia, islamophobia, xenophobia, fatphobia, ableism, transmisogyny, misogyny, femmephobia, cissexism, and anti-immigrant attitudes.
I went to the Eventbrite website where you can purchase tickets, and the prices reflect those in the announcement. Early bird tickets are sold out, which leaves you with General Admission at $12.
Collider Cafe in Vancouver on December 4, 2019
I think they were tired when they (CuriosityCollider.org) came up with the title for the upcoming Collider Cafe December 2019 event. Unfortunately, the description isn’t too exciting either. On the plus side, their recent Collisions Festival: Invasive Systems was pretty interesting, and one of the exhibits from that festival is being featured (artist: Laara Cerman; scientist: Scott Pownall).
Here’s more about the upcoming Collider Cafe from their November 27, 2019 announcement (received via email),
Art. Science. Analogies.
Let analogies guide us through exploring the art and science in chemistry, nature, genetics, and technology.
Our #ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science to meet, discover, and connect. Are you curious? Join us at “Collider Cafe: Art. Science. Analogies.” to explore how art and science intersect in the exploration of curiosity.
When: 8:00pm on Wednesday, December 4, 2019. Doors open at 7:30pm. Where: Pizzeria Barbarella. 654 E Broadway, Vancouver, BC (Google Map). Cost: $5-10 (sliding scale) cover at the door. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events.
//Special thanks to Pizzeria Barbarella for hosting the upcoming Collider Cafe!//
Back to me: I’m still struggling with this hugely changed WordPress, which they claim is an ‘improvement’. In any case, for this second event, I decided that choosing a larger font size was superior to putting everything into a single block as I did for the Science Slam event. Please let me know if you have any opinions on the matter in the comments section.
Moving on, don’t expect Chris Dunnett’s presentation ‘Poetry of Technology’ to necessarily feature any poetry, if his website is any indication of his work. Also, I notice that Vance Williams is associated with 4D Labs at Simon Fraser University. At one time, 4D Labs was a ‘nanotechnology’ lab but at this time (November 29, 2019), it seems they are a revenue-producing group selling their materials expertise and access to their lab equipment to industry and other academic institutions. Still, Williams may feature some nanoscale work as part of his presentation.
A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.
The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.
“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”
Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.
“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”
The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.
Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.
“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”
To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.
“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand they’re using to hold the phone, not in the hand that’s actually grabbing for the virtual object. Still, Huang said, the vibration feedback helped users to interact with objects more successfully.
In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.
Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an infrared sensor and an external compute stick for extra processing power.
Huang hopes people will download the freely available source code and try it for themselves. “We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”
Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.
This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.
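If you’re wondering what those grab ‘accommodations’ might look like in practice, here’s a minimal sketch in Python. To be clear, this is my own illustration, not Portal-ble’s actual code; the function names, data structures, and threshold values are assumptions made for the example. The idea is simply that a pinch counts as a grab if it lands within a tolerance radius of the object, and the object then snaps to the hand rather than requiring exact real-world contact.

```python
import math

# Hypothetical sketch of "grab accommodation" for a hand-tracked AR system.
# Real systems (including Portal-ble) are far more involved; the threshold
# values below are illustrative assumptions only.

GRAB_TOLERANCE = 0.06   # metres: allow grabs that miss the object by up to 6 cm
PINCH_THRESHOLD = 0.03  # metres: thumb-index distance that counts as a pinch

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def try_grab(thumb_tip, index_tip, objects):
    """Return the nearest object within tolerance if the hand is pinching.

    Instead of requiring the fingers to touch the object exactly (real-world
    physics), we accept any pinch whose midpoint lands near the object and
    snap the object to that midpoint -- the forgiving behaviour the
    researchers describe.
    """
    if distance(thumb_tip, index_tip) > PINCH_THRESHOLD:
        return None  # hand is open, no grab
    pinch_point = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    nearest = min(objects, key=lambda o: distance(o["position"], pinch_point))
    if distance(nearest["position"], pinch_point) <= GRAB_TOLERANCE:
        nearest["position"] = pinch_point  # snap to hand
        return nearest
    return None

# Example: a block 4 cm away from an imperfect pinch still gets picked up.
blocks = [{"name": "block", "position": (0.04, 0.0, 0.0)}]
print(try_grab((0.0, 0.01, 0.0), (0.0, -0.01, 0.0), blocks))
```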
I’ve already written about October 2019 science and art/science events in Canada (see my Sept. 26, 2019 posting), but more event notices for October have come my way. These events are all art/science (or sciart as it’s sometimes called).
… on the future of life forms … a two-night (Oct./Nov.) discussion in Toronto, Canada
Here’s more from the ArtSci Salon’s October 3, 2019 announcement (received via email),
“…now they were perfecting a pigoon that could grow five or six kidneys at a time. Such a host animal could be reaped of its extra kidneys; then, rather than being destroyed, it could keep on living and grow more organs, much as a lobster could grow another claw to replace a missing one. That would be less wasteful, as it took a lot of food and care to grow a pigoon. A great deal of investment money had gone into OrganInc Farms…” (Margaret Atwood – Oryx & Crake 2003)
In Oryx and Crake, Margaret Atwood describes a not-too-distant future where humans have perfected the art of fabricating and modifying a variety of creatures to improve and prolong their own lives and wellbeing.
As Atwood has stated on various occasions, this is not science fiction.
It is in fact already happening. New forms of life appear not only as the product of lab fabrication or gene editing, but also as the result of toxic pollutants and climate change induced adaptation.
What to make of them?
How to cope with a world where extinction, adaptation and mutation risk making traditional categories and taxonomies obsolete?
Join us for this two-part series to discuss the ethics and implications of these transformations with artists, scientists and bioethicists.
Part 1 Thursday, October 17, 6:00-8:00 pm The Fields Institute for Research in Mathematical Sciences
Altered Inheritance: extinction, recreation or transformation? a dialogue and discussion on the implications of genome editing on humans and other organisms
with Francoise Baylis – Research Professor, Bioethicist, Dalhousie University
Karen Maxwell – Dept. of Biochemistry, Maxwell Lab, University of Toronto
emergent artists from OCADU [Ontario College of Art and Design University] and YorkU [York University, Toronto]
Part 2 Thursday, November 21, 6:00-8:00 pm The Fields Institute for Research in Mathematical Sciences
Classifying the new? Why do we classify? What is it good for? What is the limit of taxonomy and classification in a transforming world?
with Richard Pell – Centre for PostNatural History, Pittsburgh, PA
Laurence Packer – Mellitologist, Professor of biology and environmental studies, York University
Stefan Herda – earth science artist
Cole Swanson – artist and educator (Art Foundation and Visual and Digital Arts, Humber college)
Anna Marie O’Brien – Frederickson, Rochman, and Sinton labs, University of Toronto
Françoise Baylis is University Research Professor at Dalhousie University. She is a member of the Order of Canada and the Order of Nova Scotia, as well as a fellow of the Royal Society of Canada and of the Canadian Academy of Health Sciences. Baylis was one of the organizers of, and a key participant in, the 2015 International Summit on Human Gene Editing. She is a member of the WHO expert advisory committee on Developing Global Standards for Governance and Oversight of Human Genome Editing. Her new book, “Altered Inheritance: CRISPR and the Ethics of Human Genome Editing,” is published by Harvard University Press.
Karen Maxwell is a research professor in the Department of Biochemistry at the University of Toronto, where she runs the Maxwell Lab. Among other topics, the lab’s three branches (“Anti-CRISPR”, “Phage morons” and “Anti-Phage defences”) study the interplay of phages with their bacterial hosts, with a focus on phage-mediated bacterial virulence mechanisms and inhibitors of anti-phage bacterial defenses.
Richard Pell works at the intersections of science, engineering, and culture. He has worked in a variety of electronic media from documentary video to robotics to bioart to museum exhibition. He is the founder and director of the Center for PostNatural History (CPNH), an organization dedicated to the collection and exposition of life-forms that have been intentionally and heritably altered through domestication, selective breeding, tissue culture or genetic engineering. The CPNH operates a permanent museum in Pittsburgh, Pennsylvania, and produces traveling exhibitions that have appeared in science and art museums throughout Europe and the United States, including being the subject of a major exhibition at the Wellcome Collection in London.
Laurence Packer is a mellitologist, i.e., a scholar whose main subject of study is wild bees. His research primarily involves the systematics of the bee subfamily Xeromelissinae, an obscure but fascinating group of bees restricted to the New World south of central Mexico. He has also expended considerable energy leading the global campaign to barcode the bees of the world. His work is concerned with promulgating the importance of bees: for genetic reasons, it seems that bees are more extinction-prone than almost all other organisms.
Stefan Herda’s practice explores our troubling relationship to the natural world through drawing, sculpture and video. Inspired by the earth sciences, Herda’s work navigates the space between truth and fiction. His material and process-based investigations fuse elements of authenticity, façade, the natural and the manufactured together. He received his BAH from the University of Guelph in 2010. His work in both sculpture and video has been included in exhibitions nationally and has been featured by CBC Arts and Daily VICE. Recently, Stefan has held solo shows at Patel Projects (Toronto) and Wil Kucey Gallery (Toronto), participated in group shows such as Cultivars: Possible Worlds at InterAccess (Toronto) and was featured as one of 12 artists in the Cabinet Project at the University of Toronto.
Cole Swanson is an artist and educator based in Toronto, Canada. He has exhibited in solo and group exhibitions across Canada and throughout international venues in North America, South America, Europe, and Asia. At the heart of recent work is a cross-disciplinary exploration of materials and their sociocultural and biological histories. Embedded within art media and commonplace resources are complex relations between nature and culture, humans and other agents, consumers and the consumed. Swanson has engaged in a broad material practice using sound, installation, painting, and sculpture to explore interspecies relationships.
Anna Marie O’Brien is a postdoc in the Frederickson, Rochman, and Sinton labs at the University of Toronto, working on duckweeds, microbes, urban contaminants, and phenotypes. Her PhD work was at Davis, with thesis advisors Dr. Jeffrey Ross-Ibarra and Dr. Sharon Strauss. She also collaborated closely with Dr. Ruairidh Sawers at LANGEBIO-CINVESTAV in Guanajuato, Mexico.
The first highlighted speaker, Françoise Baylis, has been mentioned here twice before, in a May 17, 2019 posting (scroll down to the ‘Global plea for moratorium on heritable genome editing’ subheading) and in an April 26, 2019 posting (scroll down to the ‘Finally’ subheading, the second paragraph). Both postings touch on the topic of CRISPR (clustered regularly interspaced short palindromic repeats) and germline editing (genetic editing that will affect all of your descendants).
Cartooney in New Westminster (near Vancouver, Canada) starting October 18, 2019
“I like physics but I love cartoons.” – Stephen Hawking
There you have it from one of the 20th/early 21st century’s most famous physicists. The quote is the opening line for the New Westminster (near Vancouver, Canada) New Media Gallery’s latest event webpage, Cartooney,
The impact of animated cartoons has been profound. In the early 20th century, we began exploiting the possibilities of the animated frame. The seven artists in this exhibition don’t create cartoons; they deconstruct those that already exist, from Looney Tunes to The Simpsons to Charlie Brown. They exploit this potent material to reveal the inner and outer workings of our human world. The original cartoon is ever-present, haunting us with suggestive content.
The artists in this exhibition reframe our world. Here we are asked to consider the laws, systems and iconographies of the cartoon world while drawing parallels with our human world: physical laws, the laws of gravitation, matter + light, the physics of motion, and societal psychologies & behaviours. We are presented with fascinating catalogues and overlaying systems of symbolic language. The purposeful demolition of expectation in these works mirrors the instabilities and dreams of modern life. They remind us that the pervasive medium of the cartoon can reflect and influence how we navigate the world. If there is a paradox here, it might be that dismantling a cartoon can throw open the doors of perception.
The New Westminster New Media Gallery’s next exhibition is exploring the impact of animated cartoons.
Cartooney opens at the gallery on Friday, Oct. 18 and runs until Dec. 8, then again from Jan. 7 to Feb. 2.
Artist Kevin McCoy, one-half of the duo of Jennifer and Kevin McCoy, will be on hand for an artist talk on opening night, Friday, Oct. 18. The talk will run from 6:30 to 7:30 p.m., with a reception and open exhibition from 7:30 to 9 p.m.
Laws of Motion in a Cartoon Landscape, by Andy Holden (U.K.):
In his two-channel audiovisual installation, 57 minutes long, Holden becomes a cartoon avatar, giving both a lecture on cartoons and a cartoon lecture, describing how our world is best now understood as a cartoon. The project incorporates Greek philosophy, Stephen Hawking, critical theory, physics, art, the financial crisis and Donald Trump, while adapting 10 laws of cartoon physics to create a theory of the world and a prophetic glimpse of the world we live in.
CB-MMXVIII (I’ve been thinking of giving sleeping lessons), by Patten (U.K.):
In this multi-screen audiovisual installation, the artist duo Patten subjects Charlie Brown to all the digital stresses, distortions and manipulations available in 2018, testing his plasticity.
“Sampled texts from philosophy, science and critical theory criss-cross the screens and are linked with scrolling images related to the natural world, DNA, systems, multiples; all serving to influence our reading of the cartoon character and the texts,” says the release. The ambient soundtrack is a dramatically slowed down Linus and Lucy theme.
You can find the New Westminster New Media Gallery on the third floor at the Anvil Centre, 777 Columbia St. See www.newmediagallery.ca for more details.
Collisions Festival: Invasive Systems in Vancouver, November 2019
Curiosity Collider, a Vancouver-based not-for-profit organization, will be hosting its inaugural art-science Collisions Festival: Invasive Systems at the VIVO Media Arts Centre from November 8 to 10, 2019. The festival features an art-science exhibition showcasing independent works and collaborative works by artist/scientist pairs, a hands-on DNA sonification workshop, an opening reception with performances, and guided discussions and tours.
Curated by Curiosity Collider’s Creative Director Char Hoyt, the theme of the festival focuses on the “invasive systems” that surround us – from technology and infections to pollution and invasive species. “We want to create a space to explore the influence of the invasive aspects of our world on our inner and outer lives,” said Char. “We will examine our observations from both scientific and artistic perspectives – are these influences beneficial, inevitable, or preventable?” Attendees can anticipate a deep dive into the delicate and complicated nature of how both living and inanimate things redefine our lives and environments – through visual art, multimedia installations, and interactive experiences.
“I am not a scientist and do not come from a family of scientists, but I have always appreciated knowing how things work, how things are connected and how things evolve – collaboration between art and science feel natural to me,” said Vancouver artist Dzee Lousie. “Both artists and scientists are curious, perform experiments and are driven by questions.” Dzee’s work Crossing, an interactive puzzle painting that examines how microbial colonies can impact our behaviours and processes in our body, is the result of a collaboration with UBC PhD candidate Linda Horianopoulos. “As scientists, we often want people to take notice of our work and engage with it. I think that art attracts people to do exactly that,” said Linda.
The sculptural work Invasion by Prince George artist Twyla Exner explores the remnants of technology. “My artworks propose hybrids of technological structures and living organisms. They take form as abandoned technologies that have sprouted with new life, clever artificialities that imitate nature, or biotechnological fixtures of the not-so-distant future,” Twyla shared. Like Dzee, she feels that artists and scientists share the sense of curiosity, experimentation, and creative problem solving. “Both art and science have the ability to tell stories and shape how people see and interpret the world around them.”
The festival is hosted in collaboration with the VIVO Media Arts Centre (2625 Kaslo Street, Vancouver, BC V5M 3G9). It will open on the evening of November 8th, with a reception and a live performance by local sound artist Edzi’u, during which her sculptural installation Moose are Life will be brought to life. On Saturday, artist Laara Cerman will co-host a DNA sonification workshop with scientist Scott Pownall. Their work Flora’s Song No. 1 in C Major – a hand-cranked music box that plays a tune created from the DNA of local invasive plants – will be on exhibit during the festival. The festival will also include tours by the curator at 3:30pm and guided discussions at 4pm on both Saturday and Sunday. Visit https://collisionsfestival2019.eventbrite.ca for festival tickets and http://bit.ly/collisionsfestival2019 for festival information.
Curiosity Collider and VIVO Media Arts Centre gratefully acknowledge the support of BC Arts Council, Canada Council for the Arts, City of Vancouver, Metro Vancouver Regional Cultural Project Grants Program, UBC Faculty of Science, and our printing sponsor Jukebox, for making Collisions Festival: Invasive Systems possible.
About Curiosity Collider Art-Science Foundation
Curiosity Collider Art-Science Foundation is a Vancouver-based non-profit organization that is committed to providing opportunities for artists whose work expresses scientific concepts and scientists who collaborate with artists. We challenge the perception and experience of science in our culture, break down the walls between art and science, and engage our growing community to bring to life the concepts that describe our world.
In this DNA sonification workshop, participants will learn the process of DNA barcoding of invasive plant species, and how to sonify DNA sequences with basic music theory and MIDI freeware. Participants will also get hands-on experience in amplifying specific genetic regions in plants through polymerase chain reaction (PCR), a step necessary in preparing samples for DNA barcoding.
This workshop will be led by artist Laara Cerman and scientist Scott Pownall, whose art-science collaborative work “Flora’s Song No. 1 in C Major” will be on exhibit during Collisions Festival: Invasive Systems. Laara and Scott will also share their process of working together, and how decisions were made to arrive at their collaborative work of art and science.
We acknowledge that Collisions Festival and its events take place on the traditional, ancestral, unceded territories of the xwməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), Stó:lō and Səl̓ílwətaʔ/Selilwitulh (Tsleil- Waututh) Nations. We are grateful for the opportunity to live and work on this land.
I asked the Curiosity Collider folks (@CCollider on Twitter) if you needed to bring any equipment or have any knowledge of music. The answer was: no, you don’t need to bring anything (unless you want to) and you don’t need to know about music.
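If you’re curious about what ‘sonifying’ DNA might involve, here’s a toy sketch in Python. It is my own illustration, not the workshop’s method or Cerman and Pownall’s piece; the base-to-note mapping is an assumption chosen for the example. Each nucleotide is mapped to a pitch related to the C major scale, producing MIDI note numbers that any MIDI freeware could play back.

```python
# Toy illustration of DNA sonification (not the workshop's actual method):
# map each nucleotide to a pitch, yielding MIDI note numbers that any MIDI
# freeware could play back.

# Assumed mapping: A, C, G, T -> notes of a C major chord plus the octave.
BASE_TO_MIDI = {
    "A": 60,  # C4
    "C": 64,  # E4
    "G": 67,  # G4
    "T": 72,  # C5
}

def sonify(dna_sequence):
    """Convert a DNA string into a list of MIDI note numbers, skipping
    any characters that are not A, C, G or T."""
    return [BASE_TO_MIDI[base] for base in dna_sequence.upper()
            if base in BASE_TO_MIDI]

# Example: a short (made-up) fragment of a barcoding region.
print(sonify("ATGGCATTC"))  # -> [60, 72, 67, 67, 64, 60, 72, 72, 64]
```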
Uncorked at Science World at TELUS World of Science in Vancouver on November 14, 2019
This is not a cheap night out. An October 10, 2019 article by Lindsay William-Ross for the Daily Hive website gives you reasons to go anyway (Note: Links have been removed),
A new wine-themed event will have Vancouverites swirling with nerdy glee. Uncorked: A Celebration of the Science of Wine is an evening of sipping and learning that will bring together world-renowned winemakers, chefs, and science experts for an unforgettable event.
Participating wineries are:
Mission Hill Family Estate
CedarCreek Estate Winery
CheckMate Artisanal Winery
Martin’s Lane Winery
Road 13 Vineyards
The wines will be paired with bites from Chef Patrick Gayler from Mission Hill’s Terrace Restaurant and Chef Neil Taylor from CedarCreek’s new Home Block Restaurant.
Programming for the evening includes seminars on the science of blending wine, the science of aging wine, the role of technology at modern vineyards, and the science of soil and terroir.
Proceeds from Uncorked will support Science World’s On the Road program, which last year brought live science performances to 41,500 students throughout B.C. who otherwise might not have had a chance to visit TELUS World of Science.
Tickets are $89 and can be purchased here. You may also want to reserve some money for the silent auction. Don’t forget, it’s November 14, 2019 from 7 pm to 10 pm at Science World in Vancouver. You can find directions and a map here.
For anyone who’s not familiar with the problem, digital art is disappearing, or becoming very difficult and/or expensive to access, after the technology on or with which it was created becomes obsolete. Fear not! Computer scientists are coming to the rescue in a joint programme between New York University (NYU) and the Solomon R. Guggenheim Museum.
Just as conservators have developed methods to protect traditional artworks, computer scientists have now created means to safeguard computer- or time-based art by following the same preservation principles.
Software- and computer-based works of art are fragile — not unlike their canvas counterparts — as their underlying technologies such as operating systems and programming languages change rapidly, placing these works at risk.
These include Shu Lea Cheang’s Brandon (1998-99), Mark Napier’s net.flag (2002), and John F. Simon Jr.’s Unfolding Object (2002), three online works recently conserved at the Solomon R. Guggenheim Museum, through a collaboration with New York University’s Courant Institute of Mathematical Sciences.
Fortunately, just as conservators have developed methods to protect traditional artworks, computer scientists, in collaboration with time-based media conservators, have created means to safeguard computer- or time-based art by following the same preservation principles.
“The principles of art conservation for traditional works of art can be applied to decision-making in conservation of software- and computer-based works of art with respect to programming language selection, programming techniques, documentation, and other aspects of software remediation during restoration,” explains Deena Engel, a professor of computer science at New York University’s Courant Institute of Mathematical Sciences.
Since 2014, she has been working with the Guggenheim Museum’s Conservation Department to analyze, document, and preserve computer-based artworks from the museum’s permanent collection. In 2016, the Guggenheim took more formal steps to ensure the stature of these works by establishing Conserving Computer-Based Art (CCBA), a research and treatment initiative aimed at preserving software and computer-based artworks held by the museum.
“As part of conserving contemporary art, conservators are faced with new challenges as artists use current technology as media for their artworks,” says Engel. “If you think of a word processing document that you wrote 10 years ago, can you still open it and read or print it? Software-based art can be very complex. Museums are tasked with conserving and exhibiting works of art in perpetuity. It is important that museums and collectors learn to care for these vulnerable and important works in contemporary art so that future generations can enjoy them.”
Under this initiative, a team led by Engel and Joanna Phillips, former senior conservator of time-based media at the Guggenheim Museum, and including conservation fellow Jonathan Farbowitz and Lena Stringari, deputy director and chief conservator at the Guggenheim Museum, explore and implement both technical and theoretical approaches to the treatment and restoration of software-based art.
In doing so, they not only strive to maintain the functionality and appeal of the original works, but also follow the ethical principles that guide conservation of traditional artwork, such as sculptures and paintings. Specifically, Engel and Phillips adhere to the American Institute for Conservation of Historic and Artistic Works’ Code of Ethics, Guidelines for Practice, and Commentaries, applying these standards to artistic creations that rely on software as a medium.
“For example, if we migrate a work of software-based art from an obsolete programming environment to a current one, our selection and programming decisions in the new programming language and environment are informed in part by evaluating the artistic goals of the medium first used,” explains Engel. “We strive to maintain respect for the artist’s coding style and approach in our restoration.”
So far, Phillips and Engel have completed two restorations of on-line artworks at the museum: Cheang’s Brandon (restored in 2016-2017) and Simon’s Unfolding Object (restored in 2018).
Commissioned by the Guggenheim in 1998, Brandon was the first of three web artworks acquired by the museum. Many features of the work had begun to fail within the fast-evolving technological landscape of the Internet: specific pages were no longer accessible, text and image animations no longer displayed properly, and internal and external links were broken. Through changes implemented by CCBA, Brandon fully resumes its programmed, functional, and aesthetic behaviors. The newly restored artwork can again be accessed at http://brandon.guggenheim.org.
Unfolding Object enables visitors from across the globe to create their own individual artwork online by unfolding the pages of a virtual “object”—a two-dimensional rectangular form—click by click, creating a new, multifaceted shape. Users may also see traces left by others who have previously unfolded the same facets, represented by lines or hash marks. The colors of the object and the background change depending on the time of day, so that two simultaneous users in different time zones are looking at different colors. But because the Java technology used to develop this early Internet artwork is now obsolete, the work was no longer supported by contemporary web browsers and was not easily accessible online.
About the CCBA
A longtime pioneer in the field of contemporary art conservation, and one of the few institutions in the United States with dedicated staff and lab facilities for the conservation of time-based media art, the Guggenheim established the Conserving Computer-Based Art initiative in 2016. The first program dedicated to this subject at the museum, this multiyear project was created to research and develop better practices for the acquisition, preservation, maintenance, and display of computer-based art. By addressing the challenges of preserving digital artworks, including hardware failure, rapid obsolescence of operating systems, and artists’ custom software, CCBA is tasked with the conservation of 22 computer-based artworks in the Guggenheim collection to ensure long-term storage and access to the public. The CCBA initiative is an opportunity for the Guggenheim to facilitate cross-institutional collaboration towards best-practice development, and CCBA integrates the museum’s ongoing work with the faculty and students of the Department of Computer Science at NYU’s Courant Institute for Mathematical Sciences.
Conserving Computer-Based Art is supported by the Carl & Marilynn Thoma Art Foundation, the New York State Council on the Arts with the support of Governor Andrew Cuomo and the New York State Legislature, Christie’s, and Josh Elkes.
About the Solomon R. Guggenheim Foundation
The Solomon R. Guggenheim Foundation was established in 1937 and is dedicated to promoting the understanding and appreciation of modern and contemporary art through exhibitions, education programs, research initiatives, and publications. The Guggenheim international constellation of museums includes the Solomon R. Guggenheim Museum, New York; the Peggy Guggenheim Collection, Venice; the Guggenheim Museum Bilbao; and the future Guggenheim Abu Dhabi. In 2019, the Frank Lloyd Wright-designed Solomon R. Guggenheim Museum celebrates 60 years as an architectural icon and “temple of spirit” where radical art and architecture meet. To learn more about the museum and the Guggenheim’s activities around the world, visit guggenheim.org.
About the Courant Institute of Mathematical Sciences
New York University’s Courant Institute of Mathematical Sciences is a leading center for research and education in mathematics and computer science. The Institute has contributed to domestic and international science and engineering by promoting an integrated view of mathematics and computation. Faculty and students are engaged in a broad range of research activities, which include many areas of mathematics and computer science as well as the application of these disciplines to problems in the biological, physical, and economic sciences. The Courant Institute has played a central role in the development of applied mathematics, analysis, and computer science, and its faculty has received numerous national and international awards in recognition of their extraordinary research accomplishments. For more information, visit http://www.cims.nyu.edu/.
Have fun exploring these relatively newly available art works.
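As a side note, the time-of-day colouring described for Simon’s Unfolding Object gives a concrete sense of the kind of behaviour conservators have to carry over when migrating a work out of an obsolete environment such as Java applets. Here’s a tiny, hypothetical sketch in Python (my own illustration, not the artwork’s actual code): the local hour is mapped onto a colour, so two simultaneous users in different time zones see different colours.

```python
import colorsys

# Hypothetical sketch of a time-of-day-dependent colour, in the spirit of
# Unfolding Object (not the artwork's actual code). Preserving behaviour
# like this is part of what a migration out of an obsolete environment
# has to get right.

def colour_for_hour(hour):
    """Map an hour of the day (0-23) to an RGB colour by sweeping the hue
    wheel once over 24 hours."""
    hue = (hour % 24) / 24.0
    r, g, b = colorsys.hsv_to_rgb(hue, 0.6, 0.9)
    return tuple(round(channel * 255) for channel in (r, g, b))

# Two simultaneous users in different time zones see different colours.
print(colour_for_hour(9))   # morning for one user
print(colour_for_hour(21))  # evening for another
```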
It seems that DOXA (The Documentary Media Society), an organization that produces a documentary film festival each spring, is expanding its empire.
According to an October 15, 2018 posting by Rebecca Bollwitt (Miss 604 blog), DOXA is presenting something new, The Vancouver Podcast Festival in November 2018 (Note: A link has been removed),
A new festival dedicated to highlighting the power of podcasting as a non-fiction medium will present an array of public and industry events from November 8-10, 2018. Vancouver Podcast Festival, presented by DOXA, features three days of panels, hands-on workshops, and live podcast presentations and tapings to celebrate one of the world’s fastest-growing mediums.
Vancouver Podcast Festival
When: November 8-10, 2018
Tickets: Available now online
Where: Rio Theatre, CBC Vancouver, The Post @ 750, Secret Location, and the Vancouver Public Library Central Branch.
The theme of the festival is “True Crime and Justice,” and it will feature internationally acclaimed shows, including You Must Remember This, hailed as “addictive” by The Guardian and “essential” by Vanity Fair. Other exciting talents include the award-winning Someone Knows Something and Peabody winner In The Dark, shows that take justice into their own hands and cause real change, overturning cases, uncovering killers and exposing flaws in our legal systems. At the Vancouver Podcast Festival, these journalists will reveal how they make and share their groundbreaking work.
Thursday, November 8, 2018 11:00 AM
Vancouver Public Library
The Fear of Science brings together scientists and common people for an unfiltered discussion about complicated and sometimes controversial science-fears in a fun and respectful way. We dive into the wide world of science to demystify, debunk and delight! Each show features a new science fear, with special guests and more surprises along the way.
Vancouver Public Library 350 W Georgia St
Vancouver, BC V6B 6B1
November 8, 2018, 11:00 AM – 12:00 PM
They are offering a range of events that include politics and podcasting, journalism and podcasting, live shows, and panel discussions. Most of these events are free. Go here for tickets and more information.
Dr. Konstantin (Kostya) Novoselov, one of the two scientists at the University of Manchester (UK) who were awarded the Nobel Prize for their work with graphene, has embarked on an artistic career of sorts. From an August 8, 2018 news item on Nanowerk,
Nobel prize-winning physicist Sir Kostya Novoselov worked with artist Mary Griffiths to create Prospect Planes – a video artwork resulting from months of scientific and artistic research and experimentation using graphene.
Prospect Planes will be unveiled as part of The Hexagon Experiment series of events at the Great Exhibition of the North 2018, Newcastle, on August 17.
Mary Griffiths has previously worked on other graphene artworks, including From Seathwaite, an installation in the National Graphene Institute which depicts the story of graphite and graphene: its geography, geology and development in the North West of England.
Mary Griffiths, who is also Senior Curator at The Whitworth, said: “Having previously worked alongside Kostya on other projects, I was aware of his passion for art. This has been a tremendously exciting and rewarding project, which will help people to better understand the unique qualities of graphene, while bringing Manchester’s passion for collaboration and creativity across the arts, industry and science to life.
“In many ways, the story of the scientific research which led to the creation of Prospect Planes is as exciting as the artwork itself. By taking my pencil drawing and patterning it in 2D with a single layer of graphene atoms, then creating an animated digital work of art from the graphene data, we hope to provoke further conversations about the nature of the first 2D material and the potential benefits and purposes of graphene.”
Sir Kostya Novoselov said: “In this particular collaboration with Mary, we merged two existing concepts to develop a new platform, which can result in multiple art projects. I really hope that we will continue working together to develop this platform even further.”
The Hexagon Experiment is taking place just a few months before the official launch of the £60m Graphene Engineering Innovation Centre, part of a major investment in 2D materials infrastructure across Manchester, cementing its reputation as Graphene City.
Prospect Planes was commissioned by Manchester-based creative music charity Brighter Sound.
Lauren Laverne is joined by composer Sara Lowes and visual artist Mary Griffiths to discuss their experiments with music, art and science. Followed by a performance of Sara Lowes’ graphene-inspired composition Graphene Suite, and the unveiling of new graphene art by Mary Griffiths and Professor Kostya Novoselov. Alongside Andre Geim, Novoselov was awarded the Nobel Prize in Physics in 2010 for his groundbreaking experiments with graphene.
About The Hexagon Experiment
Music, art and science collide in an explosive celebration of women’s creativity
A six-part series of ‘Friday night experiments’ featuring live music, conversations and original commissions from pioneering women at the forefront of music, art and science.
Inspired by the creativity that led to the discovery of the Nobel-Prize winning ‘wonder material’ graphene, The Hexagon Experiment brings together the North’s most exciting musicians and scientists for six free events – from music made by robots to a spectacular tribute to an unsung heroine.
One final comment, the title for the evening appears to have been inspired by a novella, from the Flatland Wikipedia entry (Note: Links have been removed),
Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London.
Written pseudonymously by “A Square”, the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella’s more enduring contribution is its examination of dimensions.
That’s all folks.
ETA August 14, 2018: Not quite all. Hopefully this attempt to add a few details for people not familiar with graphene won’t lead to increased confusion. The Hexagon event ‘Adventures in Flatland’, which includes Novoselov’s and Griffiths’ video project, features some wordplay based on graphene’s two-dimensional nature.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
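For anyone who wants the idea in more concrete terms, here’s a rough sketch in Python of the redirection logic. It is my own simplification, not the researchers’ system; the only numbers taken from the news release are the reported imperceptibility bounds of roughly 2 to 5 degrees of rotation and 4 to 9 cm of translation per blink. Rotations and translations are injected only while a blink is detected and are clamped to those bounds.

```python
# Rough sketch of blink-triggered redirected walking (my simplification of
# the idea, not the researchers' system). The imperceptibility bounds come
# from the news release: up to ~5 degrees of rotation and ~9 cm of
# translation can be injected during a single blink.

MAX_ROTATION_DEG = 5.0    # per blink
MAX_TRANSLATION_M = 0.09  # per blink

def redirect_during_blink(eye_closed, desired_rotation_deg, desired_translation_m):
    """Return the (rotation, translation) to apply to the virtual camera
    this frame. Outside a blink nothing is injected; during a blink the
    requested correction is clamped to the unnoticeable range."""
    if not eye_closed:
        return 0.0, 0.0
    rotation = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, desired_rotation_deg))
    translation = max(-MAX_TRANSLATION_M, min(MAX_TRANSLATION_M, desired_translation_m))
    return rotation, translation

# Example: the steering algorithm wants an 8-degree correction toward the
# centre of the physical room; only 5 degrees are applied, and only while blinking.
print(redirect_during_blink(eye_closed=True, desired_rotation_deg=8.0,
                            desired_translation_m=0.02))   # -> (5.0, 0.02)
print(redirect_during_blink(eye_closed=False, desired_rotation_deg=8.0,
                            desired_translation_m=0.02))   # -> (0.0, 0.0)
```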
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters just as much as to the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
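Overbeck’s description sketches the core idea of a real-time light field viewer: pick the captured photos whose camera directions most closely match where the viewer is looking and blend them. As a rough, purely illustrative Python sketch of that blending step (not Google’s actual pipeline; the nearest-view weighting, array shapes, and stand-in data below are all my assumptions), it might look something like this,

import numpy as np

def blend_light_field(eye_dir, capture_dirs, images, k=4):
    # Normalize the requested viewing direction.
    eye_dir = eye_dir / np.linalg.norm(eye_dir)
    # Cosine similarity between the eye ray and every capture direction.
    sims = capture_dirs @ eye_dir
    # Pick the k captured views that look most nearly along the eye ray.
    nearest = np.argsort(-sims)[:k]
    weights = np.clip(sims[nearest], 0.0, None)
    weights = weights / weights.sum()
    # Weighted average of the selected images (each image is H x W x 3).
    return np.tensordot(weights, images[nearest], axes=1)

# Example usage with random stand-in data: 100 photos captured on a sphere.
rng = np.random.default_rng(0)
capture_dirs = rng.normal(size=(100, 3))
capture_dirs /= np.linalg.norm(capture_dirs, axis=1, keepdims=True)
images = rng.random((100, 8, 8, 3))
view = blend_light_field(np.array([0.0, 0.0, 1.0]), capture_dirs, images)
print(view.shape)  # (8, 8, 3)

In a real renderer, the depth maps mentioned above would typically be used to reproject each photo toward the new viewpoint before blending, so that near and far objects stay aligned rather than ghosting.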
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
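To give a flavour of how a vibrating surface acting “like a loudspeaker” becomes an audible signal, here is a toy Python sketch using classic modal synthesis, in which an impact excites a few damped sinusoidal modes that sum into a pressure-like waveform. This is a simplified stand-in rather than the Stanford team’s wave-based approach, and every frequency, damping, and gain value in it is invented for illustration,

import numpy as np

def modal_impact_sound(freqs_hz, dampings, gains, duration_s=1.0, sample_rate=44100):
    # Time axis for the output signal.
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    signal = np.zeros_like(t)
    # Each vibration mode contributes a decaying sinusoid, like a tiny loudspeaker.
    for f, d, g in zip(freqs_hz, dampings, gains):
        signal += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    # Normalize so the result can be written straight to a WAV file.
    return signal / np.max(np.abs(signal))

# Example: a rough, cymbal-like clang built from four made-up inharmonic modes.
clang = modal_impact_sound(
    freqs_hz=[523.0, 1189.0, 2731.0, 4070.0],
    dampings=[3.0, 5.0, 9.0, 14.0],
    gains=[1.0, 0.7, 0.4, 0.25],
)
print(clang.shape)  # (44100,)

The toy only illustrates the last step, vibration to signal; the Stanford system instead solves for the pressure waves of all objects together, which is what lets it capture the interactions described below.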
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that already exist – they can’t predict anything new. Other systems that can produce sounds as accurate as those of James and his team work only in special cases or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
Phys.org has a Dec. 12, 2016 essay by Nicole Miller-Struttmann on the topic of empathy and science communication,
Science communication remains as challenging as it is necessary in the era of big data. Scientists are encouraged to reach out to non-experts through social media, collaborations with citizen scientists, and non-technical abstracts. As a science enthusiast (and extrovert), I truly enjoy making these connections and having conversations that span expertise, interests and geographic barriers. However, recent divisive and impassioned responses to the surprising election results in the U.S. made me question how effective these approaches are for connecting with the public.
Are we all just stuck in our own echo chambers, ignoring those that disagree with us?
How do we break out of these silos to reach those that disengage from science or stop listening when we focus on evidence? Particularly evidence that is increasingly large in volume and in scale? Recent research suggests that a few key approaches might help: (1) managing our social media use with purpose, (2) tailoring outreach efforts to a distinct public, and (3) empathizing with our audience(s) in a deep, meaningful way.
The essay, which originally appeared on the PLOS Ecology Community blog in a Dec. 9, 2016 posting, goes on to discuss social media, citizen science/crowdsourcing, design thinking, and next gen data visualization (Note: Links have been removed),
Many of us attempt to broaden our impact by sharing interesting studies with friends, family, colleagues, and the broader public on social media. While the potential to interact directly with non-experts through social media is immense, confirmation bias (the tendency to interpret and share information that supports one’s existing beliefs) provides a significant barrier to reaching non-traditional and contrarian publics. Insights from network analyses suggest that these barriers can be overcome by managing our connections and crafting our messages carefully. …
Technology has revolutionized how the public engages in science, particularly data acquisition, interpretation and dissemination. The potential benefits of citizen science and crowd sourcing projects are immense, but there are significant challenges as well. Paramount among them is the reliance on “near-experts” and amateur scientists. Domroese and Johnson (2016) suggest that understanding what motivates citizen scientists to get involved – not what we think motivates them – is the first step to deepening their involvement and attracting diverse participants.
Design Thinking may provide a framework for reaching diverse and under-represented publics. While similar to scientific thinking in several ways, design thinking includes a crucial step that scientific thinking does not: empathizing with your audience.
It requires that the designer put themselves in the shoes of their audience, understand what motivates them (as Domroese and Johnson suggest), consider how they will interact with and perceive the ‘product’, and appeal to the perspective. Yajima (2015) summarizes how design thinking can “catalyze scientific innovation” but also why it might be a strange fit for scientists. …
Connecting the public to big data is particularly challenging, as the data are often complex with multifaceted stories to tell. Recent work suggests that art-based, interactive displays are more effective at fostering understanding of complex issues, such as climate change.
Thomsen (2015) explains that by eliciting visceral responses and stimulating the imagination, interactive displays can deepen understanding and may elicit behavioral changes.
I recommend reading this piece in its entirety as Miller-Struttmann presents a more cohesive description of current science communication practices and ideas than is sometimes the case.
Final comment: I would like to add one suggestion and that’s the adoption of an attitude of ‘muscular’ empathy. People are going to disagree with you, sometimes quite strongly (aggressively), and it can be very difficult to maintain communication with people who don’t want (i.e., reject) the communication. Maintaining empathy in the face of failure and rejection, which can extend for decades or longer, requires a certain muscularity.
Having a backup for the world’s digital memory is a good idea whether or not the backup site is in Canada and regardless of who is president of the United States. The Internet Archive has announced that it is raising funds to allow for the creation of a backup site. Here’s more from a Dec. 1, 2016 news item on phys.org,
The Internet Archive, which keeps historical records of Web pages, is creating a new backup center in Canada, citing concerns about surveillance following the US presidential election of Donald Trump.
“On November 9 in America, we woke up to a new administration promising radical change. It was a firm reminder that institutions like ours, built for the long term, need to design for change,” said a blog post from Brewster Kahle, founder and digital librarian at the organization.
“For us, it means keeping our cultural materials safe, private and perpetually accessible. It means preparing for a Web that may face greater restrictions.”
While Trump has announced no new digital policies, his campaign comments have raised concerns his administration would be more active on government surveillance and less sensitive to civil liberties.
Glyn Moody in a Nov. 30, 2016 posting on Techdirt eloquently describes the Internet Archive’s role (Note: Links have been removed),
The Internet Archive is probably the most important site that most people have never heard of, much less used. It is an amazing thing: not just a huge collection of freely-available digitized materials, but a backup copy of much of today’s Web, available through something known as the Wayback Machine. It gets its name from the fact that it lets visitors view snapshots of vast numbers of Web pages as they have changed over the last two decades since the Internet Archive was founded — some 279 billion pages currently. That feature makes it an indispensable — and generally unique — record of pages and information that have since disappeared, sometimes because somebody powerful found them inconvenient.
Even more eloquently, Brewster Kahle explains the initiative in his Nov. 29, 2016 posting on one of the Internet Archive blogs,
The history of libraries is one of loss. The Library of Alexandria is best known for its disappearance.
Libraries like ours are susceptible to different fault lines:
So this year, we have set a new goal: to create a copy of Internet Archive’s digital collections in another country. We are building the Internet Archive of Canada because, to quote our friends at LOCKSS, “lots of copies keep stuff safe.” This project will cost millions. So this is the one time of the year I will ask you: please make a tax-deductible donation to help make sure the Internet Archive lasts forever. (FAQ on this effort).
Throughout history, libraries have fought against terrible violations of privacy—where people have been rounded up simply for what they read. At the Internet Archive, we are fighting to protect our readers’ privacy in the digital world.
We can do this because we are independent, thanks to broad support from many of you. The Internet Archive is a non-profit library built on trust. Our mission: to give everyone access to all knowledge, forever. For free. The Internet Archive has only 150 staff but runs one of the top-250 websites in the world. Reader privacy is very important to us, so we don’t accept ads that track your behavior. We don’t even collect your IP address. But we still need to pay for the increasing costs of servers, staff and rent.
You may not know this, but your support for the Internet Archive makes more than 3 million e-books available for free to millions of Open Library patrons around the world.
Your support has fueled the work of journalists who used our Political TV Ad Archive in their fact-checking of candidates’ claims.
It keeps the Wayback Machine going, saving 300 million Web pages each week, so no one will ever be able to change the past just because there is no digital record of it. The Web needs a memory, the ability to look back.
My two most relevant past posts on the topic of archives and memories are this May 18, 2012 piece about Luciana Duranti’s talk about authenticity and trust regarding digital documents and this March 8, 2012 posting about digital memory, which also features a mention of Brewster Kahle and the Internet Archive.